The magnitude of IRS’ collection workload is staggering. As of the beginning of fiscal year 1996, IRS reported that its inventory of unpaid tax assessments totaled about $200 billion. Of this amount, IRS estimated that about $46 billion had collection potential. During the fiscal year, an additional $59 billion in unpaid tax assessments was added to the inventory. IRS’ collection process proceeds in stages against taxpayers who have not paid the amount due as determined by the tax assessment. In the first stage of the process, a series of notices are to be sent to the taxpayer from one of IRS’ service centers. Collectively, these notices are to provide the taxpayer with statutory notification of the tax liability, IRS’ intent to levy assets if necessary, and information on the taxpayer’s rights. If the taxpayer fails to pay after being notified, the Internal Revenue Code authorizes a federal tax lien to be filed to protect the government’s interest over other creditors and purchasers of taxpayer property. The second stage of IRS’ collection process involves attempts to collect the taxes by making telephone contact with the taxpayer. IRS carries out this stage through its Automated Collection System (ACS) program. During this stage, IRS may levy taxpayer assets and file notices of federal tax liens. In the final stage of the collection process, information about the tax delinquency is referred to IRS’ field offices for possible face-to-face contact with the taxpayer. During this stage, IRS may also levy taxpayer assets and file notices of federal tax liens. Additionally, as a final collection action, taxpayer property, such as cars or real estate, may be seized. Attachment I presents a flowchart that provides additional detail about the collection process. At any time in the collection process, IRS may find that a taxpayer cannot pay what is owed or does not owe the tax IRS assessed. In such situations, IRS may enter into an installment agreement with a taxpayer, compromise for an amount less than the original tax assessment, suspend or terminate the collection action, or abate an erroneous assessment. Also, if the taxpayer is having a problem resolving a collection action with the initiating IRS office, the taxpayer may go to IRS’ Taxpayer Advocate or to IRS’ appeals program for resolution. If an enforcement action is taken that involves a reckless or intentional disregard of taxpayer rights by an IRS employee, a taxpayer may sue for damages. In the case of an erroneous bank levy, a taxpayer may file a claim with IRS for reimbursement of bank charges incurred because of the levy in addition to a refund of the erroneously levied amount. If a taxpayer believes that enforced collection would be a hardship, the taxpayer may request assistance from the Taxpayer Advocate. IRS produces management information reports that provide some basic information on tax collections and the use of collection enforcement authorities, including the number of liens, levies, and seizures filed and, in the case of seizures, the tax delinquency that resulted in the seizure and the tax proceeds achieved. Also, some offices within IRS collect information on the misuse of these collection enforcement authorities, but the information is not complete. Overall, IRS’ management reports show that IRS’ collection program collected about $29.8 billion during fiscal year 1996, mostly without taking enforced collection action. 
In attempting to collect on delinquent accounts, the reports show IRS filed about 750,000 liens against taxpayer property, issued about 3.2 million levies on taxpayer assets held by third parties, and completed about 10,000 seizures of taxpayer property. Attachment II presents this overall information on IRS’ use of lien, levy, and seizure authority during fiscal years 1993-96. Attachment III presents a summary of the distribution of seizure cases by type of asset seized in fiscal year 1996. For the seizure cases completed in fiscal year 1996, the average tax delinquency was about $233,700, and the average net proceeds from the seizures was about $16,700. Although complete data were not available on tax delinquencies and associated net proceeds for liens and levies, the best information available from IRS indicates that about $2.1 billion of the $29.8 billion was collected as a result of lien, levy, and seizure actions. The remainder was collected as a result of contacts with taxpayers about their tax delinquencies. The best data that IRS has on the potential misuse of collection authorities are from the Office of the Taxpayer Advocate. However, those data alone are not sufficient to determine the extent of misuse. The data show that about 9,600 complaints involving allegations of inappropriate, improper, or premature collection actions were closed by the Advocate in fiscal year 1996, as were 11,700 requests for relief from collection actions because of hardship. Although the Advocate does not routinely collect data on the resolution of taxpayer complaints, it does collect data on the resolution of requests for relief. According to the Advocate, during fiscal year 1996, the requests for relief resulted in the release—either full or partial—from about 4,000 levy and seizure actions and 156 liens. These Taxpayer Advocate data are not sufficient to determine the extent to which IRS’ initial collection actions were appropriate or not for several reasons. First, the release of a lien could result from a taxpayer subsequently paying the tax liability or offering an alternative solution, or because IRS placed the lien in error. Although the Taxpayer Advocate maintains an information system that accommodates collecting the data to identify whether IRS was the cause of the taxpayer’s problem, the Advocate does not require that such information be reported by the IRS employee working to resolve the case or be otherwise accumulated. Thus, about 82 percent of the taxpayer complaints closed in fiscal year 1996 did not specify this information. Of the remaining 18 percent, about 9 percent specified that IRS’ collection action was in error either through taking an erroneous action, providing misleading information to the taxpayer, or taking premature enforcement action. In addition, the Advocate’s data do not cover the potential universe of cases in which a collection action is alleged to have been made improperly. The Advocate requires each complaint that is covered by its information system to be categorized by only one major code to identify the issue or problem. If a complaint had more than one problem, it is possible that a collection-related code could be superseded by another code such as one covering lost or misapplied payments. Also, complaints that are handled routinely by the various IRS offices would not be included in the Advocate’s data because that office was not involved in the matter. 
For example, appeals related to lien, levy, and seizure actions are to be handled by the Collection Appeals Program (effective April 1, 1996). For fiscal year 1996, the Appeals Program reported that of the 705 completed appeals of IRS’ enforced collection actions, it fully sustained IRS actions in 483 cases, partially sustained IRS actions in 55 cases, did not sustain IRS actions in 68 cases, and returned 99 cases to the initiating office for further action because they were prematurely referred to the Collection Appeals Program. According to IRS Appeals officials, a determination that Appeals did not sustain an IRS enforcement action does not necessarily mean that the action was inappropriate. If a taxpayer offered an alternative payment method, the Appeals Officer may have approved that offer—and thus not sustained the enforcement action—even if the enforcement action was justified. In any event, the Collection Appeals Program keeps no additional automated or summary records on the resolution of appeals as they relate to the appropriateness of lien, levy, or seizure actions. IRS’ record-keeping practices limit both our and IRS’ ability to generate data needed to determine the extent or causes of the misuse of lien, levy, and seizure authority. Neither IRS’ major data systems—masterfiles and supplementary systems—nor the summary records (manual or automated) maintained by the IRS offices responsible for the various stages of the collection process systematically record and track the issuance and complete resolution of all collection enforcement actions, i.e., liens, levies, and seizures. Moreover, the detailed records kept by these offices do not always include data that would permit a determination about whether an enforcement action was properly used. But even if collection records contained information relevant to the use of collection enforcement actions, our experience has been that obstacles exist to retrieving the records needed for a systematic review. IRS maintains selected information on all taxpayers, such as taxpayer identification number; amount of tax liability by tax year; amount of taxes paid by tax year; codes showing the event triggering the tax payment, including liens, levies, and seizures; and taxpayer characteristics, including earnings and employment status, on its Individual and Business Masterfiles. Also, if certain changes occur to a taxpayer’s account, such as correction of a processing error in a service center, IRS requires information to be captured on the source of the error, that is, whether the error originated with IRS or the taxpayer. However, these systems do not capture comparable data on whether collection enforcement actions were properly used or on the characteristics of affected taxpayers. The lack of such data also precludes us from identifying a sample of affected taxpayers to serve as a basis for evaluating the use or misuse of collection actions. As I noted earlier, the IRS tax collection process involves several steps, which are carried out by different IRS offices that are often organizationally dispersed. Since authorities exist to initiate some of the collection actions at different steps in the process, several different offices could initiate a lien, levy, or seizure to resolve a given tax assessment. In addition, our examination of procedures and records at several of these offices demonstrated that records may be incomplete or inaccurate. For example, the starting point for a collection action is the identification of an unpaid tax assessment. 
The assessment may originate from a number of sources within IRS, such as the service center functions responsible for the routine processing of tax returns; the district office, ACS, or service center functions responsible for examining tax returns and identifying nonfilers; or the service center functions responsible for computer-matching of return information to identify underreporters. These assessments may not always be accurate and, as reported in our financial audits of IRS, cannot always be tracked back to supporting documentation. Since collection actions may stem from disputed assessments, determining the appropriateness of IRS actions would be problematic without an accurate tax assessment supported by documentation. Further, offices responsible for resolving taxpayer complaints do not always maintain records on the resolution of those complaints that would permit identification of instances of inappropriate use of collection authorities. We found several examples of this lack of data during our review. For example, in cases involving ACS, where an automated system is used for recording data, specific information about complaints may not be maintained because the automated files have limited space for comments and transactions. If a taxpayer complaint is not resolved by the responsible office, the taxpayer may seek assistance from the Taxpayer Advocate. As noted earlier, the Advocate has some information on complaints about the use of collection enforcement authorities, but those data are incomplete. In addition, starting in the last quarter of 1996, the Advocate was to receive notification of the resolution of taxpayer complaints involving IRS employee behavior (that is, complaints about IRS employees behaving inappropriately in their treatment of taxpayers, such as rudeness, overzealousness, discriminatory treatment, and the like). These notifications, however, do not indicate whether the problem involved the possible misuse of collection authority. If a taxpayer’s complaint involves IRS employee integrity issues, the complaint should be referred to IRS’ Inspection Office. According to Inspection, that office is responsible for investigating allegations of criminal and serious administrative misconduct by specific IRS employees, but it would not normally investigate whether the misconduct involved inappropriate enforcement actions. In any event, Inspection does not keep automated or summary records on the results of its investigations as they relate to the appropriateness of lien, levy, or seizure actions. Court cases are to be handled by the Chief Counsel’s General Litigation Office. Internal Revenue Code sections 7432 and 7433 provide for taxpayers to file a claim for damages when IRS (1) knowingly or negligently fails to release a lien or (2) recklessly or intentionally disregards any provision of law or regulation related to the collection of federal tax, respectively. According to the Litigation Office, a total of 21 cases were filed under these provisions during 1995 and 1996. However, the Litigation Office does not maintain information on case outcomes. The Office recently completed a study that covered court cases since 1995 involving damage claims in bankruptcy cases. As part of that study, the Office identified 16 cases in which IRS misapplied its levy authority during taxpayer bankruptcy proceedings. IRS officials told us that the results of this study led IRS to establish a Bankruptcy Working Group to make recommendations to prevent such misapplication of levy authority. 
Record-retrieval problems also hamper any systematic review of IRS’ use of its collection enforcement authorities. As we have learned from our prior work, IRS cannot always locate files when needed. For example, locating district office closed collection files once they have been sent to a Federal Records Center is impractical because there is no list identifying the file contents associated with the shipments to the Records Centers. On a number of past assignments, we used the strategy of requesting IRS district offices to hold closed cases for a period of time, and then we sampled files from those retained cases. However, the results of these reviews could not be statistically projected to the universe of all closed cases because we had no way to determine whether the cases closed in the relatively short period of time were typical of the cases closed over a longer period of time. We discussed with IRS the feasibility of collecting additional information for monitoring the extent to which IRS may have inappropriately used its collection enforcement authorities, and the characteristics of taxpayers who might be affected by such inappropriate actions. IRS officials noted that, although IRS does not maintain specific case data on enforcement actions, they believed that sufficient checks and balances (e.g., supervisory review of collection enforcement actions, collection appeals, complaint handling, and taxpayer assistance) are in place to protect taxpayers from inappropriate collection action. The development and maintenance of additional case data are, according to IRS officials, not practical without major information system enhancements. The IRS officials further observed that, given the potential volume and complexity of the data involved and the resources needed for data gathering and analysis, they were unable to make a compelling case for compiling the information. We recognize that IRS faces resource constraints in developing its management information systems and that IRS has internal controls, such as supervisory review and appeals, that are intended to avoid or resolve inappropriate use of collection authorities. We also recognize that the lack of relevant information to assess IRS’ use of its collection enforcement authorities is not, in itself, evidence that IRS lacks commitment to resolve taxpayer collection problems after they occur. However, the limited data available and our prior work indicate that, at least in some cases, these controls may not work as effectively as intended. IRS is responsible for administering the nation’s voluntary tax system in a fair and efficient manner. To do so, IRS oversees a staff of more than 100,000 employees who work at hundreds of locations in the United States and foreign countries and who are vested, by Congress, with a broad set of discretionary enforcement powers, including the ability to seize taxpayer property to resolve unpaid taxes. Given the substantial authorities granted to IRS to enforce tax collections, IRS and the other stakeholders in the voluntary tax system—such as Congress and the taxpayers—should have information to permit them to determine whether those authorities are being used appropriately; whether IRS’ internal controls are working effectively; and whether, if inappropriate uses of the authorities are identified, the problems are isolated events or systemic problems. 
At this time, IRS does not have the data that would permit it or Congress to readily determine the extent to which IRS’ collections enforcement authorities are misused, the causes of those occurrences, the characteristics of the affected taxpayers, or whether the checks and balances that IRS established over the use of collection enforcement authorities are working as intended. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions you may have. [Attachment I, a flowchart of IRS’ collection process, appears here in the printed document. It traces a delinquent account through the notice stage, ACS Collection, and Field Collection, showing decision points for full payment; installment agreements (IA); offers in compromise (OIC); currently-not-collectible (CNC) determinations; and lien, levy, and seizure actions.]
GAO discussed: (1) the availability of information on the Internal Revenue Service's (IRS) use of its enforcement authorities to collect delinquent taxes; and (2) whether information existed that could be used to determine whether collection enforcement authorities were properly used. GAO found that: (1) while IRS has some limited data about its use, and misuse, of collection enforcement authorities, these data are not sufficient to show: (a) the extent of the improper use of lien, levy, or seizure authority; (b) the causes of improper actions; or (c) the characteristics of taxpayers affected by improper actions; (2) the lack of information exists because IRS' systems--both manual and automated--have not been designed to capture and report comprehensive information on the use and possible misuse of collection authorities; (3) also, much of the data that are recorded on automated systems cannot be aggregated without a significant investment of scarce programming resources; (4) some information is available in manual records, but--because collection enforcement actions can be taken by a number of different IRS offices and records resulting from these actions are not always linked to IRS' automated information systems--this information cannot be readily assembled to assess the use of enforcement actions; (5) also, data are not readily available from other potential sources, such as taxpayer complaints, because, in many circumstances, IRS does not require that information on the resolution of the complaints be recorded; (6) IRS officials told GAO that collecting complete data on the use of enforcement actions that would permit an assessment of the extent and possible causes of misuse of these authorities is unnecessary because they have adequate checks and balances in place to protect taxpayers; and (7) however, IRS does not have the data that would permit it or Congress to readily resolve reasonable questions about the extent to which IRS' collections enforcement authorities are misused, the causes of those occurrences, the characteristics of the affected taxpayers, or whether IRS' checks and balances over the use of collection enforcement authorities are working as intended.
The U.S. government has engaged in multiple efforts in Afghanistan since declaring a global war on terrorism in 2001 that targeted al Qaeda, its affiliates, and other violent extremists. These efforts employ a whole-of-government approach that calls for the use of all elements of U.S. national power to disrupt, dismantle, and defeat al Qaeda and its affiliates and prevent their return. In March 2011, U.S. forces shifted their role from carrying out combat operations to advising and assisting Afghan forces as lead security responsibility was transitioned to Afghan forces. U.S. government efforts for the global war on terrorism in Iraq began in 2003 with Operation Iraqi Freedom. Similar to U.S. efforts in Afghanistan, U.S. military operations in Iraq shifted focus from combat and counterinsurgency to an advising and training role for Iraqi security forces. The U.S. and Iraqi governments signed an agreement in 2008 to draw down U.S. forces in Iraq to a complete withdrawal no later than December 31, 2011. In 2014, the Islamic State of Iraq and Syria (ISIS) emerged as a major force in Iraq and Syria. In September 2014, the President announced the U.S. strategy to degrade and ultimately destroy ISIS. Also in 2014, Congress passed and the President signed legislation authorizing DOD to provide assistance, including training and equipment, to vetted Syrian opposition forces to fight ISIS, among other purposes. Similar legislation authorized assistance to military and other security forces of or associated with the Government of Iraq, including Kurdish and tribal security forces or other local security forces with a national security mission. Force management levels and similar caps are generally set by the Executive Branch to limit or manage the number of military personnel deployed at any one time to specific countries. Force management levels can also be derived from various other sources. For example, we reported that during the Balkan operations of the 1990s, DOD limited U.S. troops to 15 percent of the North Atlantic Treaty Organization force in Kosovo. Also, the overall number of U.S. forces may be limited by the host nation to which they are deploying. Force management levels and similar caps have been a factor in military operations for a long time—dating at least to the Vietnam War, during which troop ceilings were used to manage the number of deployed U.S. forces. As such, operating under limitations on the total number of deployed forces is something with which DOD has become familiar. The executive branch used force management levels to shape the drawdown of forces in Afghanistan and Iraq. In Iraq, U.S. forces drew down from a peak of over 170,000 “boots on the ground” in November 2007 to their withdrawal at the end of 2011. In Afghanistan, U.S. forces have drawn down from a peak of almost 100,000 in March 2011 to 9,300 as of the middle of 2016. In the current counter-ISIS fight in Iraq and Syria, force management levels limited the initial deployment of forces and have been increased over time to enable the deployment of additional forces to carry out the mission. Military officials planning for and executing operations under force management levels have taken various actions to maximize military capabilities deployed to countries under those limits. For example, we reported in 2013 that with the initial drawdown of forces in Afghanistan starting in 2011, which occurred as U.S. 
forces shifted from carrying out combat operations to advising and assisting Afghan forces, there were a number of key areas that military planners and operational commanders would have to consider regarding the military capabilities DOD retained in Afghanistan to enable the success of Afghan partner forces. These would include considerations regarding what types of key enablers—such as air, logistics, intelligence, and medical evacuation support—were needed to support Afghan National Security Forces. Similarly, as force management levels in Afghanistan were further reduced to below 10,000 forces in early 2015, military planners and operational commanders faced more fundamental issues about the structure of the U.S. presence in Afghanistan. Among other things, planners had to consider how reduced force levels would constrain resources for the advising mission, given, for example, the increasing dedication of resources and personnel to base force protection, the number of enduring base locations, and reduced medical reach. As the force management level in Afghanistan has continued to decline, these are the questions that military planners and operational commanders continue to address through various actions. Similarly, in the current counter-ISIS mission in Iraq and Syria, planners and commanders have been assessing how to maximize military capabilities while providing the needed support for the mission they are executing under current force management levels. Among the actions DOD has taken to accomplish these goals in Afghanistan, Iraq, and Syria is that of increasing its reliance on (1) partner nation security forces; (2) U.S. and coalition airpower; (3) special operations forces; and (4) contractor and temporary duty personnel. One of the tools DOD has used to maximize the number of mission-focused personnel under a force management level to achieve its objectives is to increase engagement with partner nation security forces through a range of security cooperation efforts. For example, as part of the overall transition of lead security from U.S. forces to Afghan National Security Forces and the drawdown of U.S. forces after 2010, the U.S. mission in Afghanistan shifted from a combat role to an advise-and-assist mission. As a result, DOD has used a variety of approaches to provide U.S. advisors to carry out the advise-and-assist mission. In early 2012, the U.S. Army and Marine Corps began to deploy small teams of advisors with specialized capabilities—referred to as Security Force Assistance Advisory Teams—that were located throughout Afghanistan, to work with Afghan army and police units from the headquarters to the battalion level, and advise them in areas such as command and control, intelligence, and logistics. Relying on partner forces to conduct operations has both positive and negative potential effects. On the positive side, limited U.S. capacity can help to ensure partner forces take the lead, such as in Iraq, where Iraqi Security Forces are leading the attack on Mosul as part of Operation Inherent Resolve. However, as the Director of the Defense Intelligence Agency stated, the Iraqi Security Forces lack the capacity to defend against foreign threats or sustain conventional military operations without continued foreign assistance. For example, the recapture of the Iraqi city of Sinjar in November 2015 and the Ramadi government center in December 2015 depended on extensive coalition airstrikes and other support. As a result, this can create complications for U.S. 
planners in terms of allocating capabilities and resources within the force management levels. In addition, in 2011 we reported on challenges DOD has faced when supplying advise-and-assist teams, such as in providing the necessary field grade officers and specialized capabilities. We also found that splitting up brigade combat teams to source these advisor teams had an effect on the readiness and training of those brigades. We made three recommendations to the department to ensure that the activities of individual advisor teams are more clearly linked to command goals and to enhance the ability of advisor teams to prepare for and execute their mission. DOD concurred with our recommendations and has implemented two of them. With a limited U.S. footprint under the current force management levels in Afghanistan, Iraq, and Syria, DOD has relied on U.S. and coalition airpower to provide support to partner ground forces in lieu of U.S. ground combat capabilities. For example, U.S. Air Force Central Command reported that since the 2011 drawdown began in Afghanistan, coalition members have flown nearly 108,000 sorties and dropped approximately 16,500 munitions. Additionally, since U.S. operations related to ISIS began in August 2014, coalition members have flown nearly 44,000 sorties and dropped more than 57,000 munitions. While effective, according to senior DOD officials, this reliance on air power is not without its costs or challenges. For example, according to the Secretary of Defense in February 2016, the accelerating intensity of the U.S. air campaign against ISIS in Iraq and Syria has been depleting U.S. stocks of GPS-guided smart bombs and laser-guided munitions. As a result, DOD requested an additional $1.8 billion in the fiscal year (FY) 2017 budget request to purchase more than 45,000 additional munitions of these types. Furthermore, DOD is exploring the idea of increasing the production rate of these munitions in the U.S. industrial base. Similarly, airborne intelligence, surveillance, and reconnaissance (ISR) systems have proved critical to commanders in supporting military operations in Afghanistan, Iraq, and Syria. The success of ISR systems in collecting, processing, and disseminating useful intelligence information has fueled a growing demand for more ISR support, and DOD has increased its investments in ISR capabilities significantly since 2002. According to a senior DOD official, as the United States reduces its footprint in Afghanistan, it is imperative that U.S. intelligence collection capabilities be constant and robust to support forces on the ground. With respect to Iraq and Syria, according to this senior official, there is also a need for significant ISR capabilities to develop and maintain situational awareness of the security environment, particularly in the absence of a large U.S. ground presence. As he noted, ISR platforms with full-motion video capabilities have become fundamental to almost all battlefield maneuvers, adversary detection, terrorist pattern-of-life development, and force protection operations. In a force management level-constrained environment, DOD has increased the use of U.S. Special Operations Forces (SOF), who are specially organized, trained, and equipped to conduct operations in hostile or politically sensitive environments. As a result, these forces increase the operational reach and capabilities of the limited number of ground forces that can be deployed under a force management level. 
However, SOF deployments in countries such as Afghanistan, Iraq, and Syria have placed significant demand on the force during this period. As we reported in 2015, DOD has increased the size and funding of SOF and has emphasized their importance to meeting national security needs. Specifically, the number of authorized special operations military positions, which includes combat and support personnel, increased from about 42,800 in FY 2001 to about 62,800 in FY 2014. Funding provided to U.S. Special Operations Command for special operations–specific needs has more than tripled, from about $3.1 billion in FY 2001 to about $9.8 billion in FY 2014, in FY 2014 constant dollars, including supplemental funding for contingency operations. We made three recommendations to the department to improve budget visibility for SOF and to determine whether certain traditional SOF activities can be transferred to or shared with conventional forces. DOD partially concurred with our recommendations, and they remain open. While DOD has taken some steps to manage the increased pace of special operations deployments, we have reported that opportunities may exist to better balance the workload across the joint force because activities assigned to SOF can be similar to activities assigned to conventional forces. Conventional forces have been expanding their capabilities to meet the demand for missions that have traditionally been given to SOF, such as stability operations, security force assistance, civil security, and repairing key infrastructure necessary to provide government services and sustain human life. For example, in 2012, we reported that the services were taking steps and investing resources to organize and train conventional forces capable of conducting security force assistance based on identified requirements. We made two recommendations: to improve the way in which the department plans for and prepares forces to execute security force assistance, and to identify and track security force assistance activities. DOD partially concurred with and implemented both recommendations. Recently, DOD began establishing conventional forces, such as the Army’s regionally aligned forces, with more extensive language and cultural skills, which are capable of conducting activities previously performed primarily by SOF. In a May 2014 report to Congress, DOD noted that SOF personnel have come under significant strain in the years since September 11, 2001. Both the Assistant Secretary of Defense for Special Operations and Low-Intensity Conflict and the commander of U.S. Special Operations Command acknowledged in 2015 that SOF have sustained unprecedented levels of stress during the preceding few years. Specifically, the commander of U.S. Special Operations Command testified that continued deployments to meet the increasing geographic combatant command demand, the high frequency of combat deployments, the high-stakes missions, and the extraordinarily demanding environments in which these forces operate placed not only SOF but also their families under unprecedentedly high levels of stress. According to the commander of U.S. Special Operations Command, the high pace of deployments has resulted in both increased suicide incidents among the force and effects on operational readiness and retention due to a lack of predictability. The commander’s statements are consistent with our prior work, which has found that a high pace of deployments for SOF can affect readiness, retention, and morale. 
In that work, GAO made several recommendations to maintain the readiness of SOF to support national security objectives and address human capital challenges. DOD concurred or partially concurred with our recommendations and has implemented them. The military services have also acknowledged challenges that SOF face as a result of operational demands. For example, in 2013 Air Force officials reported that a persistent special operations presence in Afghanistan and elsewhere, increasing requirements in the Pacific region, and enduring global commitments would continue to stress Air Force special operations personnel and aircraft. In a force management level-constrained environment, DOD relies on contractors to support a wide range of military operations and free up uniformed personnel to directly support mission needs. During operations in Afghanistan and Iraq, contractors played a critical role in supporting U.S. troops with the number of contractor personnel sometimes exceeding the number of deployed military personnel. According to DOD, the level of contracted support has exceeded that required in previous wars, and this level is not expected to change in future contingency operations. For example, even as troop levels began to drop below 90,000 in Afghanistan in early 2012, U.S. Central Command reported that the number of contractor personnel in country grew, peaking at 117,227. As of mid-2016, U.S. Central Command reported that there were 2,485 DOD contractor personnel in Iraq, as compared with a force management level of 4,087 U.S. troops in Iraq. DOD has used contractors as a force multiplier, and with a limited force management level, such as in Iraq, contractors have become an increasingly important factor in operations. DOD uses contractors to provide a wide variety of services because of force limitations on the number of U.S. military personnel who can be deployed and a lack of required skills. The use of contractors can free up uniformed personnel to conduct combat operations and provide expertise in specialized fields. The services provided by contractors include logistics and maintenance support, base support, operating communications networks, construction, security, translation support, and other management and administrative support. While contractor support plays a critical role in operations, we have previously reported on DOD’s long-standing challenges in overseeing contractors in deployed environments, and the failure to manage contract support effectively could undermine U.S. policy objectives and threaten the safety of U.S. forces. For example, we reported in 2012 that DOD did not always have sufficient contract oversight personnel to manage and oversee its logistics support contracts in Iraq and Afghanistan. Without an adequate number of trained oversight personnel DOD could not be assured that contractors could meet contract requirements efficiently and effectively. We made four recommendations to improve oversight of operational contract support. DOD concurred with our recommendations and implemented three of them. Since DOD anticipates continued reliance on contractors for future operations, it may face similar challenges related to oversight in current and future operations, such as Operation Inherent Resolve, particularly if force management levels limit the number of military personnel available to conduct such oversight. 
In addition to contractors, DOD also relies on personnel on temporary duty (TDY) to augment subordinate unified commands and joint task forces during contingency operations. Joint task forces, such as Combined Joint Task Force–Operation Inherent Resolve, are established for a focused and temporary purpose; however, if the mission is a continuing requirement, the task force may become a more enduring organization. According to DOD, temporary personnel requirements for short-duration missions should be supported through augmentation, TDY tasking, augmented hiring of civilian personnel, or other temporary personnel solutions. We have previously reported that the combatant commands utilize augmentation to support staff operations during contingencies. We have also reported that CENTCOM’s service component commands, such as U.S. Naval Forces Central Command, and theater special operations commands rely on temporary personnel to augment their commands. We made one recommendation that DOD develop guidance related to costs of overseas operations. DOD partially concurred with our recommendation, and it remains open. According to DOD officials, TDY personnel are not counted toward force management level limits. As such, in a force management level-constrained environment, TDY personnel can be used by joint task forces to free up their assigned personnel to meet mission requirements. However, to the extent that force management levels are intended to shape the number of forces deployed to a given country, the use of TDY personnel may not provide a complete picture of U.S. forces engaged in operations. Chairwoman Hartzler, Ranking Member Speier, and Members of the Subcommittee, this completes my prepared statement. I would be pleased to respond to any questions that you may have at this time. If you have any questions about this statement, please contact Cary Russell, Director, Defense Capabilities and Management Team, at (202) 512-5431 or russellc@gao.gov. In addition to the contact named above, James A. Reynolds, Assistant Director; Alissa Czyz; Lori Kmetz; Sean Manzano; Marcus Oliver; Alice Paszel; Michael Shaughnessy; Mike Silver; and Cheryl Weissman made key contributions to this statement. Report numbers with an SU or RSU suffix are Sensitive but Unclassified, and those with a C suffix are Classified. Sensitive but Unclassified and Classified reports are available to personnel with the proper clearances and need-to-know, upon request. Afghanistan Equipment Drawdown: Progress Made, but Improved Controls in Decision Making Could Reduce Risk of Unnecessary Expenditures. GAO-14-768. Washington, D.C.: September 30, 2014. Afghanistan: Changes to Updated U.S. Civil-Military Strategic Framework Reflect Evolving U.S. Role. GAO-14-438R. Washington, D.C.: April 1, 2014. Security Force Assistance: More Detailed Planning and Improved Access to Information Needed to Guide Efforts of Advisor Teams in Afghanistan. GAO-13-381. Washington, D.C.: April 30, 2013. Afghanistan: Key Oversight Issues. GAO-13-218SP. Washington, D.C.: February 11, 2013. Afghanistan Drawdown Preparations: DOD Decision Makers Need Additional Analyses to Determine Costs and Benefits of Returning Excess Equipment. GAO-13-185R. Washington, D.C.: December 19, 2012. Afghanistan Security: Security Transition. GAO-12-598C. Washington, D.C.: September 11, 2012. Observations on U.S. Military Capabilities to Support Transition of Lead Security Responsibility to Afghan National Security Forces. GAO-12-734C. Washington, D.C.: August 3, 2012. 
Afghanistan Security: Long-standing Challenges May Affect Progress and Sustainment of Afghan National Security Forces. GAO-12-951T. Washington, D.C.: July 24, 2012. Interim Results on U.S.-NATO Efforts to Transition Lead Security Responsibility to Afghan Forces. GAO-12-607C. Washington, D.C.: May 18, 2012. Security Force Assistance: Additional Actions Needed to Guide Geographic Combatant Command and Service Efforts. GAO-12-556. Washington, D.C.: May 10, 2012. Afghanistan Security: Estimated Costs to Support Afghan National Security Forces Underscore Concerns about Sustainability. GAO-12-438SU. Washington, D.C.: April 26, 2012. Afghan Security: Renewed Sharing of Biometric Data Could Strengthen U.S. Efforts to Protect U.S. Personnel from Afghan Security Force Attacks. GAO-12-471SU. Washington, D.C.: April 20, 2012. Afghanistan Security: Department of Defense Effort to Train Afghan Police Relies on Contractor Personnel to Fill Skill and Resource Gaps. GAO-12-293R. Washington, D.C.: February 23, 2012. Afghanistan: Improvements Needed to Strengthen Management of U.S. Civilian Presence. GAO-12-285. Washington, D.C.: February 27, 2012. Countering ISIS: DOD Should Develop Plans for Responding to Risks and for Using Stockpiled Equipment No Longer Intended for Syria Train and Equip Program. GAO-16-670C. Washington, D.C.: September 9, 2016. Iraq: State and DOD Need to Improve Documentation and Record Keeping for Vetting of Iraq’s Security Forces. GAO-16-658C. Washington, D.C.: September 30, 2016. Mission Iraq: State and DOD Have Not Finalized Security and Support Capabilities. GAO-12-759RSU. Washington, D.C.: July 26, 2012. Mission Iraq: State and DOD Face Challenges in Finalizing Support and Security Capabilities. GAO-12-856T. Washington, D.C.: June 28, 2012. Intelligence, Surveillance, and Reconnaissance: Actions Needed to Improve DOD Guidance, Integration of Tools, and Training for Collection Management. GAO-12-396C. Washington, D.C.: April 5, 2012. Intelligence, Surveillance, and Reconnaissance: DOD Needs a Strategic, Risk-Based Approach to Enhance Its Maritime Domain Awareness. GAO-11-621. Washington, D.C.: June 20, 2011. Intelligence, Surveillance, and Reconnaissance: Actions Are Needed to Increase Integration and Efficiencies of DOD’s ISR Enterprise. GAO-11-465. Washington, D.C.: June 3, 2011. Special Operations Forces: Opportunities Exist to Improve Transparency of Funding and Assess Potential to Lessen Some Deployments. GAO-15-571. Washington, D.C.: July 16, 2015. Special Operations Forces: DOD’s Report to Congress Generally Addressed the Statutory Requirements but Lacks Detail. GAO-14-820R. Washington, D.C.: September 8, 2014. Operational Contract Support: Actions Needed to Enhance the Collection, Integration, and Sharing of Lessons Learned. GAO-15-243. Washington, D.C.: March 16, 2015. Contingency Contracting: Contractor Personnel Tracking System Needs Better Plans and Guidance. GAO-15-250. Washington, D.C.: February 18, 2015. Warfighter Support: DOD Needs Additional Steps to Fully Integrate Operational Contract Support into Contingency Planning. GAO-13-212. Washington, D.C.: February 8, 2013. Operational Contract Support: Sustained DOD Leadership Needed to Better Prepare for Future Contingencies. GAO-12-1026T. Washington, D.C.: September 12, 2012. Contingency Contracting: Agency Actions to Address Recommendations by the Commission on Wartime Contracting in Iraq and Afghanistan. GAO-12-854R. Washington, D.C.: August 1, 2012. 
Operational Contract Support: Management and Oversight Improvements Needed in Afghanistan. GAO-12-290. Washington, D.C.: March 29, 2012.
The United States has engaged in multiple efforts in Afghanistan, Iraq, and Syria since declaring a global war on terrorism in 2001. Currently, in Afghanistan, Iraq, and Syria, U.S. forces are deployed under force management levels set by the administration. Force management levels and similar caps limit the number of U.S. military personnel deployed to a given region and have been a factor in military operations at least since the Vietnam War. Force management levels were also used to shape the drawdowns of operations in Afghanistan and Iraq. In June 2016, the President announced that the force management level for Afghanistan is 9,800. According to DOD, in September 2016 the United States authorized additional troops for Iraq and Syria, for a total of 5,262. Today's testimony discusses some of the actions DOD has taken to maximize military capabilities while operating under force management levels in ongoing operations. In preparing this statement, GAO relied on previously published work related to operations in Afghanistan, Iraq, and Syria since 2001. Military officials planning for and executing operations under force management levels have taken various actions to maximize military capabilities deployed to countries under those limits, as discussed below: Increased Engagement with Partner Nation Security Forces. The Department of Defense (DOD) has increased its engagement with partner nations through advise-and-assist missions that rely on partner nation security forces to conduct operations. While this action helps leverage U.S. resources, it can create complications for U.S. planners in terms of allocating capabilities and resources. In 2011, GAO reported that the Army and Marine Corps have faced challenges in providing the necessary field grade officers and specialized capabilities for advisor teams, as well as challenges regarding the effect on the readiness and training of brigades whose combat teams have been split up to source advisor teams. GAO made three recommendations related to advisor teams. DOD concurred and implemented two recommendations relating to improving the ability of advisor teams to prepare for and execute their mission. Reliance on Airpower. DOD has relied on U.S. and coalition airpower to provide support to partner nation ground forces in lieu of U.S. ground combat capabilities. For example, since U.S. operations related to the Islamic State of Iraq and Syria (ISIS) began in August 2014, coalition members have dropped more than 57,000 munitions. Air-based intelligence, surveillance, and reconnaissance systems have also proved critical to commanders by providing them timely and accurate information. While effective, this reliance on air power is not without its costs or challenges. For example, the Secretary of Defense stated in February 2016 that the intensity of the U.S. air campaign against ISIS has been depleting U.S. stocks of certain weapons. Increased Pace of U.S. Special Operations Deployments. DOD has increased its use of U.S. Special Operations Forces to increase its operational reach and maximize its capabilities under force management levels. However, the increased use of U.S. Special Operations Forces in operations has resulted in a high pace of deployments which can affect readiness, retention, and morale. GAO made 10 recommendations to DOD related to U.S. Special Operations Forces. DOD concurred or partially concurred and has implemented 7 recommendations relating to security force assistance activities and readiness of U.S. 
Special Operations Forces. Increased Use of Contractors and Personnel on Temporary Duty. DOD relies on contractors to support a wide range of military operations and free up uniformed personnel to directly support mission needs. During operations in Afghanistan and Iraq contractor personnel played a critical role in supporting U.S. troops and sometimes exceeded the number of deployed military personnel. However, the increased use of contractors and temporary personnel to provide support during operations has its challenges, including oversight of contractors in deployed environments. GAO made four recommendations to improve oversight of operational contract support. DOD concurred with all four, and has implemented three of them. GAO also made a recommendation that DOD develop guidance relating to costs of overseas operations, with which DOD partially concurred and which remains open. GAO made 18 recommendations in prior work cited in this statement. DOD has implemented 12 of them. Continued attention is needed to ensure that some recommendations are addressed, such as improving visibility in total Special Operations funding to determine whether opportunities exist to balance deployments across the joint force.
Prescription opioid pain relievers are safe and effective when used as directed, but these highly addictive substances can pose serious risks of addiction or death if they are abused, misused, or diverted. Opportunities for abuse or diversion can occur as drugs flow through the prescription drug supply chain. DEA is responsible for ensuring the availability of controlled substances for legitimate uses while preventing their diversion through its administration and enforcement of the CSA and its implementing regulations. States also play a role in regulating controlled substances and the practices of medicine and pharmacy within their state boundaries. Additionally, national associations representing stakeholders such as distributors, pharmacies, and practitioners work on behalf of their members to support efforts to reduce prescription drug abuse and diversion. When taken as directed for legitimate medical purposes, prescription drugs are safe and effective. Pain, which affects millions of Americans, is a health problem for which prescription drugs are often used. Pain can be characterized in terms of intensity—mild to severe—and duration—acute or chronic. According to the Institute of Medicine, more than 100 million Americans are affected by chronic pain. While the appropriate medical treatment of pain varies, some patients are prescribed prescription pain relievers, such as opioids, to treat pain. These may include hydrocodone, oxycodone, and morphine, among other opioids. Prescription opioid pain relievers can be used effectively as a short-term treatment for a variety of acute or chronic pain conditions, such as severe pain following trauma, and for patients with painful terminal diseases such as cancer. However, opioids are sometimes used in a manner other than as prescribed—that is, they are abused and misused. Because opioids are highly addictive substances, they can pose serious risks when they are abused and misused, which can lead to addiction and cause death. The prescription drug supply chain is the means through which prescription drugs are ultimately delivered to patients with legitimate medical needs. Although there can be many variations in the flow of prescription drugs through the supply chain, in a common example, prescription drugs are produced by manufacturers; are purchased and stored by distributors, who take orders and deliver them to customers such as pharmacies; and ultimately are dispensed by pharmacies to patients who have a prescription from a practitioner. (See fig. 1.) Although prescription drugs are intended for legitimate medical uses, the prescription drug supply chain may present opportunities for the drugs to be abused and diverted as the drugs move through the various components of the supply chain. For example, an individual may visit multiple practitioners posing as a legitimate patient, referred to as a doctor shopper, to obtain prescriptions for drugs for themselves or others. In an example of diversion, criminal enterprises may rob distributors and pharmacies of prescription drugs to sell to others for a profit. Through its Office of Diversion Control, DEA administers the Diversion Control Program, whose mission is to prevent, detect, and investigate the diversion of controlled substances from legitimate sources while ensuring an adequate and uninterrupted supply is available for legitimate medical, commercial, and scientific needs. 
In addition to investigations, the Office of Diversion Control conducts a variety of activities such as establishing quotas on the total amount of each basic class of controlled substance that can be manufactured, promulgating regulations for handling controlled substances, regulating handlers of controlled substances, and monitoring the production and distribution of certain controlled substances, among other things. The CSA requires businesses, entities, or individuals that import, export, manufacture, distribute, dispense, conduct research with respect to, or administer controlled substances to register with the DEA. As of December 2014, along with other registrants, there were over 1.5 million registered distributors, pharmacies, and practitioners. (See table 1.) DEA registrants must comply with a variety of requirements imposed by the CSA and its implementing regulations. For example, a registrant must keep accurate records and maintain inventories of controlled substances, among other requirements, in compliance with applicable federal and state laws. Additionally, all registrants must provide effective controls and procedures to guard against theft and diversion of controlled substances. Examples of some of the specific regulatory requirements for distributors, pharmacists, and practitioners include the following: Distributors: Registrants must design and operate a system to disclose suspicious orders of controlled substances, and must inform the DEA field division office in the registrant’s area of suspicious orders when the registrant discovers them. Pharmacists: While the responsibility for proper prescribing and dispensing of controlled substances rests with the prescribing practitioner, the pharmacist who fills the prescription holds a corresponding responsibility for ensuring that the prescription was issued in the usual course of professional treatment for a legitimate purpose. Practitioners: Practitioners are responsible for the proper prescribing and dispensing of controlled substances for legitimate medical uses. A prescription for a controlled substance must be issued for a legitimate medical purpose by an individual practitioner acting in the usual course of that person’s professional practice. As part of the registrant monitoring process and to ensure compliance with the CSA and its implementing regulations, DEA conducts three types of investigations—regulatory, complaint, and criminal. Regulatory investigations: DEA conducts different types of regulatory investigations, including scheduled, or cyclic, investigations (inspections) of DEA registrants. These investigations are conducted at a frequency that depends on the registrant’s business activity, and they occur every 2, 3, or 5 years. Registrants such as physicians—with the exception of physicians permitted to treat narcotic dependence—generally do not receive scheduled investigations by the DEA. These registrants may be regularly investigated by the states in which they conduct business. Complaint investigations: Complaint investigations are started on the basis of information or a tip provided to DEA or state regulators, or other information DEA has regarding the diversion of controlled substances. 
The origin of the information could be from any number of sources, such as a state or local official or citizen who observed something suspicious; employees of a registrant; DEA's identification of unusual purchasing trends by a registrant, such as a pharmacy, that are tracked through DEA's Automation of Reports and Consolidated Orders System (ARCOS); or a report to DEA of a loss of controlled substances by a registrant.

Criminal investigations: DEA also conducts investigations into criminal activities involving diversion of controlled substances that may involve DEA registrants or nonregistrants, such as an undercover purchase of a controlled substance from an individual who is not a registrant.

Within its 21 field divisions, DEA utilizes a variety of personnel (including diversion investigators, special agents, and task force officers) to carry out these investigative responsibilities. Following an investigation, DEA can initiate a variety of enforcement actions for violations of the CSA or its implementing regulations: administrative, civil, and criminal. The type(s) of action initiated is within DEA's discretion and is typically driven by the severity of the offense(s) and whether a registrant was the subject of any previous actions. The penalties associated with different enforcement actions likewise vary in severity.

Administrative actions: Administrative actions are handled primarily by DEA and can include (1) a letter of admonition to advise the registrant of any violations and necessary corrective action; (2) a memorandum of agreement, which outlines the actions the registrant agrees to take to become compliant and DEA's obligations when violations are or are not corrected; (3) an order to show cause, which can initiate revocation or suspension of a DEA registration; and (4) an immediate suspension order, which is issued when violations pose an imminent threat to public health or safety and deprives the registrant of the ability to handle controlled substances upon service of the order.

Civil penalties: Civil penalties generally include monetary fines.

Criminal penalties: Criminal penalties generally include incarceration and fines.

Each state has a role in regulating controlled substances and health care within its jurisdiction. For example, as of December 2014, 49 states and one U.S. territory (Guam) have operational prescription drug monitoring programs, which collect data from dispensers and report information to authorized users, including practitioners and pharmacists. Prescription drug monitoring program information can assist law enforcement and health care providers such as practitioners and pharmacists in identifying patterns of prescribing, dispensing, or receiving controlled substances that may indicate abuse or diversion. State prescription drug monitoring programs vary in numerous ways, including what information they collect; what drugs they cover; who has access to, or who is required to use, the program; and which state agency oversees and administers the program. States also govern the use of controlled substances through their own state controlled substances acts, and through the regulation of the practices of medicine and pharmacy. In general, to legally dispense a prescription drug, a pharmacist licensed by the state and working in a pharmacy licensed by the state must be presented a valid prescription from a licensed practitioner.
The regulation of the practice of pharmacy is rooted in state pharmacy practice acts and regulations enforced by state boards of pharmacy. The state boards of pharmacy also are responsible for routinely inspecting pharmacies, ensuring that pharmacists and pharmacies comply with applicable laws, and investigating and disciplining those that fail to comply. All states also require that physicians practicing in the state be licensed to do so, and state medical practice laws generally outline standards for the practice of medicine and delegate the responsibility of regulating physicians to state medical boards. Each state's medical board also defines the elements of a valid patient-provider relationship and grants prescribing privileges to physicians and other practitioners.

National associations also play a role in efforts to reduce prescription drug abuse and diversion. National associations represent the interests of their members or constituents, which can include DEA registrants, such as pharmacies, practitioners, and distributors; various state governmental agencies or employees, such as state regulatory boards and law enforcement entities; and patient groups, among others. These national associations may support their members in various ways, such as providing guidance and training to help educate members about abuse and diversion; commenting on proposed legislation, such as legislation on the proper disposal of prescription drugs; and lobbying federal agencies and members of Congress on behalf of their members or constituents.

Results from our generalizable surveys of DEA registrants show that the extent of registrants' interaction with DEA varies. Our survey results also show that many registrants are not aware of DEA conferences and resources. Of those registrants that reported that they had interacted with DEA since January 1, 2012, most were generally satisfied. However, some distributors, individual pharmacies, and chain pharmacy corporate offices reported that they want additional guidance from, and communication with, DEA. We surveyed registrants about three primary methods for interacting with DEA: direct communication with DEA headquarters or field office staff; participation in DEA conferences, initiatives, or training; and utilization of DEA resources, such as guidance. Our survey results show that registrants interact with DEA through these methods to varying degrees, and that many registrants are not aware of DEA conferences and resources.

Communication with DEA headquarters or field office staff. Based on our surveys, we found that the most common type of interaction between DEA and its registrants is direct communication with DEA headquarters or field office staff about registrants' roles and responsibilities under the CSA. Most distributors and chain pharmacy corporate offices communicate with DEA headquarters or field office staff, while few individual pharmacies or practitioners do so. (See table 2.) Registrants that reported that they had no communication with DEA headquarters or field office staff (outside of conferences, initiatives, or training) were asked to explain why not. Of those that offered a response, one common explanation was that the registrant did not feel any communication was necessary.
Of those registrants that had communicated with DEA headquarters or field office staff, the frequency of communication was typically less than once a quarter, although we estimate that some distributors (22 percent) and some chain pharmacy corporate offices (22 percent or 6 of 27) have communicated with DEA field office staff at least once a month since January 1, 2012. (See app. II, tables 12 and 13, for a complete listing of the numbers of registrants reporting various frequencies of communication with DEA headquarters and field office staff.) We did not survey registrants about the content of these communications with DEA headquarters or field office staff. However, the responses distributors, chain pharmacy corporate offices, and individual pharmacies offered to open-ended questions in these sections of our surveys suggest that the substance of this communication is wide ranging. For example, registrants cited communication with DEA ranging from inquiries about regulatory responsibilities to questions about suspicious customers and reporting of thefts. The most common methods of communication reported across registrant types generally were telephone or e-mail communication, although we estimate that most distributors (76 percent) also have in-person communication with DEA field office staff. (See app. II, table 14, for a complete listing of numbers of registrants reporting various methods of communication with DEA headquarters and field office staff.) The reasons for greater communication with DEA among distributors and chain pharmacy corporate offices may be related to the nature of their relationship with DEA. For example, distributors are required to renew their DEA registration annually, and are subject to scheduled, cyclical regulatory investigations. Conversely, pharmacies and practitioners only have to renew their DEA registration every three years, and are not subject to scheduled, cyclical regulatory investigations. Because the chain pharmacy corporate offices we surveyed represent 50 or more individual pharmacies, it follows that they might have more regular communication with DEA on behalf of those pharmacies. Participation in conferences, initiatives, or training. Results from our surveys show that smaller percentages of DEA registrants have interacted with DEA via conferences, initiatives, or training (see table 3), although many registrants are not aware of these opportunities. DEA periodically hosts events such as conferences or meetings for various components of its registrant population during which the agency provides information about registrants’ CSA roles and responsibilities for preventing abuse and diversion. DEA is also often a presenter at various conferences at the national, state, or local level, which registrants may attend. DEA places information about upcoming conferences that it is hosting on its website, and DEA officials said that to further publicize them DEA has sent emails or letters to registrants about these events, but also relies on state regulatory boards and national associations to promote them. Distributors were asked whether representatives of their facility attended DEA’s 2013 Distributor Conference, and individual pharmacies and chain pharmacy corporate offices were asked whether they or other representatives of their pharmacy (or pharmacy chain) had attended a Pharmacy Diversion Awareness Conference (PDAC). 
Based on our surveys, we estimate that 27 percent of distributors and 17 percent of individual pharmacies have participated in the DEA-hosted events, while 63 percent (20 of 32) of chain pharmacy corporate offices we surveyed had participated in a PDAC. Of the large percentages of distributors and pharmacies that did not participate in these conferences, many cited lack of awareness as the reason. For example, an estimated 76 percent of individual pharmacies that had not attended a PDAC and 35 percent of distributors that had not attended the 2013 Distributor Conference cited lack of awareness as a reason for not participating. (See app. II, table 15 and table 16, for additional reasons reported by distributors and pharmacies for not participating in these conferences.) While it is possible that some individual pharmacies are not aware of PDACs because one has not yet been scheduled or publicized in their states, the 76 percent of individual pharmacies that cite lack of awareness as a reason for not participating is a matter of concern since PDACs have been held in 21 states since 2011.

Some distributors have also interacted with DEA through its Distributor Initiative briefings, which are intended to educate and inform distributors of their responsibilities under the CSA. An estimated 12 percent of distributor facilities reported participating in these briefings since January 1, 2012. Of those that reported that they had not attended, an estimated 12 percent said that a briefing had been attended by corporate or other company staff, and 4 percent said they participated in a briefing prior to 2012. (See app. II, table 15, for additional reasons distributors reported for not participating in these briefings.) We also asked all registrants whether they had participated in any other DEA conferences, initiatives, or training since January 1, 2012, and small percentages of registrants indicated that they had done so. (See table 3.) In the open-ended responses offered about the other DEA events they had attended, registrants across all four surveys cited, for example, DEA presentations at various professional association conferences or meetings they had attended.

Utilization of DEA resources. DEA also has created various resources, such as guidance manuals and a registration validation tool, which registrants may utilize to understand or meet their roles and responsibilities under the CSA; however, based on our surveys, we found that many registrants are not utilizing these resources because they are not aware that they exist. (See table 4.) For example, DEA has created guidance manuals for pharmacists and practitioners to help them understand how the CSA and its implementing regulations pertain to these registrants' professions. These documents are available on DEA's Office of Diversion Control's website. In terms of guidance for distributors, in 2011 DEA released a document containing suggested questions a distributor should ask customers prior to shipping controlled substances (referred to as the Know Your Customer guidance). Additionally, DEA offers a registration validation tool on its website so that registrants, such as distributors and pharmacies, can determine if a pharmacy or practitioner has a valid, current DEA registration. However, as shown in table 4, our survey results suggest that many registrants are not utilizing these resources that could help them better understand and meet their CSA roles and responsibilities because they are unfamiliar with them.
For example, of particular concern are the estimated 53 percent of individual pharmacies that are not aware of either DEA's Pharmacist's Manual or the registration validation tool and the estimated 70 percent of practitioners that are not aware of DEA's Practitioner's Manual; these registrants are therefore not utilizing these resources. In addition to the resources listed above, we also asked registrants whether there were "any other DEA guidance, resources, or tools (e.g. DEA's Office of Diversion Control website or DEA presentations available online)" that they had used to understand their roles and responsibilities. We estimate that while nearly half of distributors (42 percent) and chain pharmacy corporate offices (47 percent or 15 of 32) have used other DEA resources, only small percentages of individual pharmacies (15 percent) or practitioners (7 percent) have done so. Of those distributors and chain pharmacy corporate offices that offered responses about what other DEA resources they have used, usage of DEA's website was the most common response, with some distributors noting that they also refer to published DEA regulations, and some chain pharmacy corporate offices noting that they have referred to presentations from past DEA conferences.

The lack of awareness among registrants of DEA resources and conferences suggests that DEA may not have an adequate means of communicating with its registrant populations. While DEA's website contains information and links for specific guidance, tools, and conferences, if registrants are unaware that these types of resources exist, they will not know to search DEA's website for them. Although DEA officials told us that many registrants should be familiar with DEA's website because that is where they renew their registration, a DEA official estimated that about 14 percent of registrants register by paper, and registration renewal is only required once every three years for pharmacies and practitioners. Also, many of the registrants we surveyed reported that they had not used other DEA resources such as DEA's website to understand their roles and responsibilities under the CSA. For example, we estimate that 69 percent of individual pharmacies and 46 percent of distributors have not used other DEA resources such as DEA's website for this purpose. Therefore, while most registrants are using DEA's website to renew their registration, it is likely that registrants responding to our survey did not consider this an activity that helped them understand their CSA roles and responsibilities. Furthermore, while DEA has promoted some conferences via email, the agency does not have current, valid email addresses for all of its registrants. DEA reports that email addresses are not required information for registrants, and that mailed correspondence to a registrant's address is the official method of communication. A DEA official told us that while DEA has email addresses for the approximately 86 percent of registrants that renew their registration online, not all of these email addresses may be current or valid. For example, the official noted that because pharmacies and practitioners are only required to renew their registration every three years, the email addresses for those groups may be less accurate, as a registrant's email address may have changed during that time. The standards in DEA's Office of Diversion Control Customer Service Plan for Registrants state that DEA will provide guidance regarding the CSA and its regulations.
Additionally, federal internal control standards state that management should ensure there are adequate means of communicating with stakeholders who may have a significant impact on the agency achieving its goals. Despite the lack of awareness we found among registrants, DEA officials have indicated that they do not believe they need to take any additional steps to improve communication or raise registrants' awareness of the agency's conferences and resources. Other federal agencies use practices that may be useful to DEA in increasing registrants' awareness of agency resources. For example, other federal agencies, such as the Centers for Medicare & Medicaid Services (CMS) and the National Institutes of Health, have communicated with stakeholders through a listserv, an electronic mailing list through which external stakeholders sign up to receive information on various topics of interest. The bottom right corner of any page on CMS.gov, for instance, has a link through which interested parties can sign up to receive e-mail updates from CMS on a wide variety of topics. DEA could examine the use of these or other communication methods to help keep relevant registrant populations informed about upcoming conferences, new or revised resources, or other materials or activities that inform registrants about their responsibilities regarding the CSA and its implementing regulations. With so many registrants unaware of DEA's conferences and resources, DEA lacks assurance that registrants have sufficient information to understand and meet their CSA responsibilities. If registrants do not meet their CSA responsibilities, they could be subject to DEA enforcement actions. However, since DEA officials reported that the agency's goal is to bring registrants into compliance rather than take enforcement actions against them, additional communication with registrants about DEA's conferences and resources may help the agency better achieve this goal.

Our survey results showed that while many registrants, particularly individual pharmacies and practitioners, did not report any interaction with DEA since 2012, most of those that did interact with DEA were generally positive about those interactions. For example, of the registrants that communicated with DEA headquarters or field office staff, most reported that the communication was very or moderately helpful. (See table 5.) Distributors that communicated with DEA field offices about their roles and responsibilities under the CSA were particularly satisfied; we estimate that 92 percent of distributors found the field office staff very or moderately helpful. However, some registrants reported dissatisfaction with DEA communication. For example, 6 of 26 chain pharmacy corporate offices that reported communicating with DEA field offices said that staff were slightly or not at all helpful. Similarly, when asked about DEA's performance relative to certain customer service standards, most of the registrants that reported communicating with DEA headquarters or field office staff were positive about their interactions with staff.
DEA's Office of Diversion Control Customer Service Plan for Registrants has standards for interacting with registrants, which include the following expectations:

Courteous and professional treatment from DEA personnel;

Responses to written, electronic, or telephone inquiries; concerns and criticisms; and complaints and suggestions to improve DEA service, procedures, and performance; and

Discretion in handling sensitive information.

When asked about their interactions with DEA relative to these standards, most registrants that communicated with DEA headquarters or field office staff generally reported that staff were very or moderately responsive, very or moderately courteous and respectful, and showed great or moderate discretion when handling sensitive information. For example, we estimate that 93 percent of distributors and 77 percent of individual pharmacies found DEA field office staff very or moderately responsive to their inquiries. (See app. II, table 17 through table 19, for a complete listing of the number of registrants reporting perspectives on both DEA headquarters and field office staff on these three standards.) Ratings were similarly positive for both DEA headquarters and field office staff, although distributors and chain pharmacy corporate offices more often reported having made inquiries to DEA field office staff than to DEA headquarters staff.

Finally, related to DEA conferences, initiatives, or training, while most registrants other than chain pharmacy corporate offices had not attended such events, the most frequent response among registrants that reported attending was that these events were very or moderately helpful for understanding their CSA roles and responsibilities. (See app. II, table 20.) For example, most of the individual pharmacies and chain pharmacy corporate offices that reported attending one of DEA's PDACs found them very or moderately helpful. Similarly, many distributors (29 of 40) that reported attending DEA's October 2013 Distributor Conference said that it was very or moderately helpful, although a smaller but notable number of attendees (11 of 40) reported that the conference was slightly or not at all helpful. Criticisms of the 2013 Distributor Conference offered by distributors in their open-ended responses included the presentation of outdated or previously shared information, and that the information shared was too general and did not provide the specific guidance registrants were expecting.

Some survey responses indicate that additional guidance for distributors regarding suspicious orders monitoring and reporting, as well as more regular communication, would be beneficial. For example, while DEA has created guidance manuals for pharmacists and practitioners, the agency has not developed a guidance manual or comparable document for distributors. As noted previously, standards in DEA's Customer Service Plan for Registrants include providing guidance regarding the CSA and its regulations, and internal control standards for federal agencies state that management should ensure there are adequate means of communicating with stakeholders that may have a significant impact on the agency achieving its goals. In response to an open-ended question in our survey about how DEA could improve its Know Your Customer document, the guidance document DEA has provided to distributors, half of the distributors (28 of 55) that offered comments said that they want more guidance from DEA.
Additionally, just over one-third of distributors (28 of 77) reported that DEA's Know Your Customer document was slightly or not at all helpful. (See app. II, table 21, for a complete listing of registrant responses on the helpfulness of various DEA resources.) Furthermore, in response to an open-ended question about what additional interactions they would find helpful to have with DEA, more than half of the distributors that offered comments (36 of 55) said that they needed more communication or information from, or interactions with, DEA. Some of the specific comments noted that distributors would like more proactive communication from DEA that is collaborative in nature, rather than being solely violation- or enforcement-oriented. Some of the additional communication and interactions proposed by distributors included quarterly meetings with the local field office and more training or conferences related to their regulatory roles and responsibilities.

DEA officials told us that they believe the information in agency regulations is sufficient for distributors to understand their CSA responsibilities for suspicious orders monitoring and reporting. DEA officials said that they have not created a guidance manual for distributors similar to those they have created for pharmacies and practitioners because they meet routinely with distributors, because distributors have fewer requirements compared with those other registrant types, and because officials do not believe such guidance is necessary. Additionally, DEA officials said that while distributors want specific instructions on how to avoid enforcement actions, DEA cannot provide them because the circumstances that lead to enforcement actions (e.g., individual business practices) vary. DEA officials said that distributors must make informed business decisions regarding customers that are diverting prescription drugs, and that DEA cannot tell distributors not to ship to specific customers. Officials told us that they would advise distributors to know their customers and their typical orders so that they will be able to identify unusual or suspicious orders or purchasers. DEA officials also suggested that distributors should refer to the enforcement actions against distributors that are described on DEA's website in order to learn "what not to do." Regarding their communication with registrants, DEA officials also indicated that they do not think they need to make any changes to their practices. They said that they believe that they are accessible to any registrant, and that registrants can contact either DEA headquarters or field office staff if they have questions.

A guidance document for distributors similar to those offered for pharmacies and practitioners could help distributors further understand and meet their roles and responsibilities under the CSA for preventing diversion, though the document may not need to be as detailed. Specifically, although DEA may not be able to provide guidance that will definitively answer the question of what constitutes a suspicious order or offer advice about which customers to ship to, DEA could, for example, provide guidance on best practices in developing suspicious orders monitoring systems. DEA could also enhance its proactive communication with distributors, which could be done, for example, via electronic means if additional in-person outreach would be cost prohibitive.
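To illustrate the kind of best practice such guidance could describe, the following is a minimal sketch of one common heuristic for a suspicious orders monitoring system: flagging any order that sharply exceeds a customer's own recent ordering history. This is an illustration only; the 3x multiplier, the 12-order window, and the customer and drug identifiers are assumptions made for the example, not DEA criteria, regulatory thresholds, or any distributor's actual system.

```python
from collections import defaultdict, deque

# Hypothetical illustration only: flags an order as potentially suspicious
# when it exceeds a multiple of the customer's trailing average for that drug.
WINDOW = 12       # number of past orders to average over (assumed)
MULTIPLIER = 3.0  # flag orders above 3x the trailing average (assumed)

# (customer, drug) -> recent order quantities
history = defaultdict(lambda: deque(maxlen=WINDOW))

def is_suspicious(customer_id: str, drug_code: str, quantity: int) -> bool:
    """Flag an order that sharply exceeds this customer's recent history."""
    past = history[(customer_id, drug_code)]
    flagged = bool(past) and quantity > MULTIPLIER * (sum(past) / len(past))
    past.append(quantity)  # record the order whether or not it was flagged
    return flagged

# Example: a pharmacy that usually orders about 100 units suddenly orders 500.
for qty in [100, 110, 95, 500]:
    print(qty, is_suspicious("pharmacy-001", "hydrocodone-10mg", qty))
# Prints False for the first three orders and True for the 500-unit order.
```

A real monitoring system would layer other checks, such as order frequency or deviations from regional patterns, on top of a simple volume rule like this one; which combination of checks is adequate is precisely the kind of question on which distributors said they want DEA guidance.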
Such steps are key to addressing distributors' concerns: without sufficient guidance and communication from DEA, distributors may not fully understand or meet their roles and responsibilities under the CSA for preventing diversion. Additionally, in the absence of clear guidance from DEA, our survey data show that many distributors are setting thresholds on the amount of certain controlled substances that their customers (i.e., pharmacies and practitioners) can order, which can negatively affect pharmacies and ultimately patients' access. For example, we estimate that 62 percent of individual pharmacies do business with distributors that put thresholds on the quantity of controlled substances they can order, and we estimate that 25 percent of individual pharmacies have had orders cancelled or suspended by distributors.

Responses to our surveys also show that some pharmacies want updated or clearer guidance, as well as more communication and information, from DEA. The agency has provided a guidance manual for pharmacists, and of the pharmacies that were aware of DEA's Pharmacist's Manual, most said that it was helpful. For example, most individual pharmacies (54 of 68) that were aware of the manual found it very or moderately helpful. (See app. II, table 21.) However, DEA's Pharmacist's Manual was last updated in 2010, and since that time DEA has levied large civil fines against some pharmacies; some pharmacy associations reported that these fines have caused confusion in the industry about pharmacists' CSA roles and responsibilities. As noted previously, DEA's customer service plan standards call for the agency to provide guidance regarding the CSA and its regulations, and federal internal control standards call for adequate communication channels with stakeholders. In their responses to an open-ended question in our survey about DEA's Pharmacist's Manual, some chain pharmacy corporate offices (7 of 18) said that the manual needed updates or more detail, some chain pharmacy corporate offices (5 of 18) reported other concerns with the manual, and some individual pharmacies (13 of 33) said that the manual needed improvement, such as more specifics. For example, several chain pharmacy corporate offices commented that the manual needed to be updated to reflect changes in DEA enforcement practices or regulations (e.g., the rescheduling of hydrocodone from a schedule III to a schedule II drug).

The need for clearer guidance for pharmacists was also suggested by some chain pharmacy corporate offices' responses to a question about DEA field office consistency. Specifically, when asked how consistent the responses of staff in different field offices have been to their inquiries about pharmacists' roles and responsibilities, nearly half of the chain pharmacy corporate offices (8 of 19) that had contact with multiple DEA field offices said that staff responses were slightly or not at all consistent. (See app. II, table 22.) In an open-ended response to this question, one chain pharmacy corporate office noted that in its interactions with different DEA field offices throughout the country it has received widely varying interpretations of DEA requirements that affect the chain's day-to-day operations, such as requirements for reporting theft or loss of controlled substances and requirements for reporting prescribers who fail to provide a written prescription.
These responses from chain pharmacy corporate offices about field office inconsistencies suggest that the existing pharmacy guidance may not be clear even to some DEA field office officials. Additionally, the desire for more or clearer guidance and more communication from DEA was a common theme in the responses offered by both individual pharmacies and chain pharmacy corporate offices to the open-ended questions in our survey related to DEA interactions. For example, in response to an open-ended question about what additional interactions they would find helpful to have with DEA headquarters or field office staff, nearly all of the chain pharmacy corporate offices that offered comments (15 of 18) said that they wanted more guidance or clearer interpretation of the guidance from DEA, more communication with DEA, or a more proactive, collaborative relationship with DEA. In addition, nearly a third of individual pharmacies (18 of 60) that offered open-ended answers to a question about any new guidance, resources, or tools that DEA should provide to help them understand their roles and responsibilities said that they would like more proactive communication from DEA through methods such as a newsletter or e-mail blast. Some chain pharmacy corporate offices (7 of 17) and individual pharmacies (11 of 33) also offered comments expressing a desire to receive up-to-date information from DEA on data or trends in diversion of prescription drugs. The majority of pharmacy registrants that reported having seen DEA data on trends in prescription drug abuse and diversion found the information to be very or moderately helpful for understanding how to identify common abuse and diversion tactics (43 of 57 individual pharmacies and 23 of 25 chain pharmacy corporate offices), suggesting that information of this kind could be very helpful to pharmacy registrants if it were more widely distributed. (See app. II, table 21.)

However, DEA officials indicated that they do not believe there is a need for additional guidance for or communication with pharmacy registrants, and that the current methods by which the agency helps pharmacy registrants understand their CSA roles and responsibilities are sufficient. DEA officials said that registrants can write, call, or e-mail DEA headquarters or field offices if they have questions. Officials also said that the agency has reached out to pharmacy registrants via its PDACs; however, because DEA had held only 44 PDACs in 21 states between 2011 and 2014, many pharmacy registrants had not had the opportunity to attend these conferences. Additionally, in their open-ended responses to questions in the section of our survey about DEA conferences, several individual pharmacies also cited their distance from the cities in which training is often held as their reason for not attending, with one individual pharmacy suggesting that a web-based training option would be helpful. Regarding the concern about inconsistencies in responses among DEA field offices related to inquiries about pharmacies' roles and responsibilities under the CSA, DEA headquarters officials said that they have heard this concern in the past, but when they ask for specific examples of the conflicting information, registrants do not provide specific, actionable details.
DEA officials acknowledged that interpretations can vary among different investigators and said that, to address this concern, they have provided training to their staff to ensure consistent interpretation of regulations, including an annual conference and training of every diversion investigator. As indicated by the concerns expressed by some pharmacy registrants, without clear guidance or adequate communication with and information from DEA, these registrants may not fully understand or meet their responsibilities for preventing abuse and diversion under the CSA. Furthermore, without adequate communication with pharmacy registrants, DEA may not fully understand registrants' needs and how best to address them. Additionally, in the absence of clear guidance from DEA, some pharmacies may be inappropriately delaying or denying filling prescriptions for patients with legitimate medical needs. For example, we estimate that 22 percent of practitioners have had pharmacies delay filling the prescriptions they wrote, and 13 percent of practitioners have had pharmacies deny filling certain prescriptions for controlled substances.

Officials from state agencies we interviewed told us that they interact with DEA through law enforcement activities, such as joint task forces, and other activities, while officials from national associations we interviewed said that they most often interact with DEA by hosting and participating in meetings. Nearly all state agencies and more than half of the national associations told us that they were generally satisfied with their interactions with DEA; however, some national associations wanted improved communication with DEA. Among the 16 state agencies we interviewed, 14 reported interacting with DEA, most commonly through law enforcement activities (including joint task forces, investigations, and inspections); meetings and presentations; and sharing prescription drug monitoring program and other types of data to help reduce prescription drug abuse and diversion. Nearly all state agencies that reported interacting with DEA indicated that they were satisfied with those interactions.

Methods of interaction with DEA. Of the 14 state agencies that interacted with DEA, the most common method reported to us was through law enforcement-related activities, such as working together during investigations or collaborating on joint task forces to reduce prescription drug abuse and diversion (11 of 14). For example, officials from a state medical board reported that the board collaborated with DEA on an investigation of a physician involving fraud and questionable prescribing practices that resulted in several patients' deaths. Additionally, officials from eight state agencies we interviewed reported working with DEA and other law enforcement agencies in a task force setting, such as with DEA Tactical Diversion Squads, to investigate criminal prescription drug diversion cases. Most of the state agencies (11 of 14) also reported interacting with DEA through attending the same conferences, meetings, presentations, or workshops related to reducing prescription drug abuse and diversion. Specifically, officials from three state agencies reported that they invited DEA to present at an agency meeting; officials from another three state agencies reported that they were invited to speak at DEA-sponsored events; and officials from three more state agencies reported they held general meetings with DEA to discuss trends and best practices.
Officials from three state agencies also reported that their agencies jointly hosted a conference related to prescription drug abuse and diversion with DEA. Officials from some of the boards of pharmacy we interviewed reported that their boards collaborated with DEA on the agency's PDACs, such as by sending emails about the PDACs to their pharmacists to encourage participation, and by joining DEA in presentations about pharmacists' corresponding responsibilities. More than half of the state agencies (9 of 14) reported interacting with DEA through sharing data, including sharing state prescription drug monitoring program data and other data about suspicious prescribers, pharmacies, or distributors. For example, an official from one state prescription drug monitoring program noted that the program responded to a request from DEA for its data related to a physician's prescribing history in order to support DEA's investigation into a prescription fraud ring in which the physician's DEA registration number had been used. Another state agency official reported that DEA shares its registrant information with the state agency when information is needed for investigative purposes. A few state agencies (4 of 14) reported interacting with DEA through promoting DEA's prescription drug take-back events. According to DEA, the purpose of its National Take-Back events is to provide a safe, convenient, and responsible means of disposing of prescription drugs, while educating the public about the potential for abuse and diversion of controlled substances. DEA has partnered with others, such as state and local law enforcement agencies, to help with its take-back events. For example, officials from one state agency reported that they conduct outreach among local agencies about DEA's prescription drug take-back days and encourage participation from drug task forces in their state. Furthermore, officials representing a state board of pharmacy and a state law enforcement agency reported that they posted information about DEA's take-back events on their websites, including the locations collecting unwanted, unused medications.

Satisfaction with DEA interactions. Nearly all state agencies (13 of 14) that reported interacting with DEA indicated that they were satisfied with those interactions. For example, officials at some state agencies who reported that they participated in DEA's Tactical Diversion Squads or other investigative activities with DEA found those interactions to be positive and helpful, particularly because DEA provided access to investigative tools, resources, and intelligence that they would not otherwise have had. Furthermore, four state agencies we interviewed stated that they are easily able to exchange information or data with DEA, and their officials have no problems communicating and collaborating with DEA. Officials from two state agencies noted that they meet with DEA on a monthly or quarterly basis for presentations and to discuss updated information. Officials said that during these meetings they exchange recommendations and best practices for how to reduce prescription drug abuse and diversion. Furthermore, officials from two state agencies (both pharmacy boards) reported that DEA's education outreach efforts through its PDACs were positive and provided invaluable information.
Officials from one state board suggested that because the PDACs held in their state have been so valuable, pharmacists should be required to attend these conferences, and that they would encourage DEA to offer more PDACs in their state. One state board reported dissatisfaction with its interactions with DEA related to DEA enforcement actions against pharmacists in the state, and differences in how DEA field office staff and the state pharmacy board interpret laws and regulations affecting pharmacists. Specifically, officials from that state board said that while there is value in DEA enforcement actions, such as preventing harmful drugs from being diverted to illegal sales, DEA enforcement actions have created fear among some pharmacists, causing them to be overly cautious when dispensing prescription drugs (e.g., by denying a prescription). Regarding the different interpretations of laws and regulations, the state board officials explained that there was inconsistent interpretation of laws and regulations among DEA field offices, which caused confusion among the board and pharmacists. The board officials said that they contacted DEA for clarification, but this has not resolved the issue.

Of the 26 national associations we interviewed, 24 reported interacting with DEA, most commonly through hosting or participating in meetings, providing input and comments on regulations, and supporting federal drug disposal efforts to help reduce prescription drug abuse and diversion. While some national associations did not comment directly on their satisfaction with how they interact with DEA, more than half of those that did indicated that they were generally satisfied with those interactions, though others wanted better communication with the agency.

Methods of interaction with DEA. Of the 24 national associations that interacted with DEA, many reported that they participate in meetings with DEA to obtain and share information related to prescription drug abuse and diversion. Specifically, more than half (15 of 24) of the national associations that interacted with DEA reported that they have hosted meetings in which DEA was invited to be a speaker or participated in meetings where DEA was present. For example, officials from six national associations reported that they invited DEA to their meetings to discuss issues such as changes in regulations or trend data on prescription drug abuse. National associations also interact with DEA as part of larger, national meetings. For example, officials from four national associations reported interacting with DEA by attending the same meetings, such as the National Prescription Drug Abuse Summit and the Pain Care Forum, where DEA was a presenter. They reported that during these meetings DEA officials discussed such things as best practices for reducing prescription drug abuse and diversion, legitimate prescribing, and patient access to legitimate drugs. National associations also reported that they have interacted with DEA by providing input or comments on proposed regulations. For example, officials from six national associations we interviewed reported interacting with DEA by providing comments or feedback on DEA's proposed drug disposal rule. Additionally, officials from half of the national associations (12 of 24) we interviewed reported supporting or participating in DEA's prescription drug take-back events.
According to officials from four of these national associations, they helped promote the take-back events by publicizing them on their websites for their members, and two associations arranged for the collection of unwanted medication from the public.

Satisfaction with DEA interactions. While some national associations (7 of 24) did not comment on whether they were satisfied with how they interact with DEA, most of those that did indicated that they were generally satisfied with those interactions. Specifically, of the 17 national associations that commented about their satisfaction with their interactions with DEA, 10 indicated that they were generally satisfied, while 7 indicated that they were generally dissatisfied. Of the national associations that indicated they were generally satisfied, some noted that the information shared by DEA officials during meetings, particularly about trends in prescription drug abuse and diversion, has been helpful, as have DEA's prescription drug take-back events. According to officials from three national associations we interviewed, the trend information they receive from DEA has been helpful in understanding what is happening in different regions related to prescription drug abuse and diversion. Regarding DEA's prescription drug take-back events, officials from a national association reported that the take-back events help to reduce the number of drugs in people's medicine cabinets, which may reduce potential misuse or abuse. One national association that indicated it was generally satisfied with its interactions with DEA also said that it would like to have more communication from DEA. For example, an official from this national association reported that it would be helpful if DEA would provide some type of communication and information that could serve as a checklist of things the association and its members should be aware of, such as tips and trends related to transporting pharmaceuticals.

Among the concerns cited by the seven national associations that were generally dissatisfied with their DEA interactions was insufficient communication and collaboration from DEA. For example, officials from five national associations reported that as prescription drug abuse has increased, DEA has been less collaborative, and officials from two associations noted that DEA refused to meet with them to clarify issues related to their members' CSA responsibilities. DEA officials told us that they did not believe the agency had turned down any requests from associations that wanted to meet, though they acknowledged that they were aware that one national association in particular has not been satisfied with DEA and has said that DEA has cut off communications. DEA officials said that the agency communicates with the registrants that this particular association represents, and that these registrants should contact DEA directly with any questions related to their roles and responsibilities. Nonetheless, because 4 of the 7 dissatisfied associations indicated that the additional communication they want to have with DEA relates to the CSA roles and responsibilities of their members, improved communication with and guidance for registrants may address some of these associations' concerns.

Many of the DEA registrants we surveyed and other stakeholders we interviewed reported that they believe DEA enforcement actions have helped decrease prescription drug abuse and diversion.
Nonetheless, over half of DEA registrants reported changing certain business practices as a result of DEA enforcement actions or the business climate these actions may have created, and many of these registrants reported that these changes have limited access to prescription drugs for patients with legitimate medical needs. While the majority of DEA registrants have not had DEA enforcement actions taken against them, we estimate that between 31 and 38 percent of the registrants we surveyed, depending on the registrant group, believe DEA enforcement actions have been very or moderately helpful in decreasing abuse and diversion. However, 53 percent of chain pharmacy corporate offices (17 of 32) believe DEA enforcement actions were slightly or not at all helpful, and other registrants reported not knowing whether DEA's efforts had an effect; for example, we estimate that 47 percent of practitioners do not know the effect of enforcement actions. (See table 6.)

Of the national associations and state agencies we interviewed that offered a perspective on this issue, most (13 of 17) reported that DEA enforcement actions have helped to decrease abuse and diversion of prescription drugs. For example, an official from a state law enforcement agency said that DEA's enforcement efforts had been very helpful in that state, particularly as DEA provided the state with additional resources and worked with local law enforcement. In addition, an official from a national association said that the association has heard from its members how helpful DEA has been in working with some of the statewide and local task forces on diversion-related investigations. An official from another national association said that DEA's enforcement actions have caused some companies to make changes to their corporate practices that have a positive effect on decreasing abuse and diversion. While several of the national associations and state agencies we interviewed said that DEA enforcement actions may be reducing prescription drug abuse and diversion, some are concerned about a resulting substitution of other illegal drug use. For example, officials from one state law enforcement agency said that they are seeing evidence of the reemergence of heroin use as the availability of prescription drugs has gone down and their cost has gone up.

In addition to obtaining stakeholders' perspectives on how DEA enforcement actions have affected abuse and diversion of prescription drugs, we reviewed data on DEA enforcement actions and investigations from fiscal year 2009 through fiscal year 2013 to identify any trends in DEA activities. Our analyses showed that certain types of administrative enforcement actions (administrative enforcement hearings, letters of admonition, and memoranda of agreement) increased across all registrants during this time period, while other administrative enforcement actions (orders to show cause and immediate suspension orders) decreased. Scheduled regulatory investigations also increased during this time period for diversion-related cases, particularly for pharmacy and practitioner registrants. (See app. III for data on DEA enforcement actions and investigations.) Officials from DEA's Office of Diversion Control told us that DEA shifted its work plan in 2009 to put more emphasis on regulatory investigations, with the goal of bringing registrants into compliance with the CSA.
The officials said the increase in DEA's scheduled regulatory investigations during this period may have helped identify areas in which registrants needed to improve and make changes to be in compliance with their responsibilities under the CSA. They also said that the increase in letters of admonition explains why there was not an increase in orders to show cause or immediate suspension orders, which are more severe penalties. Officials said that DEA considers letters of admonition a way to help registrants comply with CSA requirements, and if registrants comply, this may help reduce diversion. The officials added that this increase shows that DEA's enforcement efforts are being resolved cooperatively with its registrants, and that as a result DEA has less need to impose harsher penalties on its registrants. However, data are not available to show any direct link between DEA enforcement actions or investigations and decreases in abuse and diversion. In a previous report, we recommended that DEA enhance its performance measures to better track and report on the results its enforcement actions had on reducing diversion of prescription drugs. In response, DEA stated that it is impossible to measure the lack of diversion, and that enforcement actions help to prevent future diversion, among other things.

On the basis of our generalizable surveys, we found that over half of registrants have made changes to certain business practices that they attribute in part to either DEA enforcement actions or the business climate these actions may have created. For example, we estimate that 71 percent of individual pharmacies increased the number of contacts made to prescribers' offices to verify legitimate medical need for prescriptions, and 75 percent of these pharmacies attributed this change to a great or moderate extent to DEA enforcement actions or the business climate those actions have created. (See app. II, tables 23 through 26, for complete data for all four registrant types.) Some business practice changes may help reduce prescription drug abuse and diversion. For example, in their open-ended responses, several practitioners said that they appreciated getting phone calls from pharmacies verifying the legitimacy of prescriptions because the calls helped make the practitioners more aware of potential abuse. However, many registrants reported that some of these changes had limited access to prescription drugs for patients with legitimate medical needs. (See table 7 below, and app. II, tables 27 through 30, for additional data.) For example, we estimate that over half of distributors placed stricter thresholds, or limits, on the quantities of controlled substances that their customers (e.g., pharmacies and practitioners) could order, and that most of these distributors were influenced to a great or moderate extent by DEA's enforcement actions. Regarding specific enforcement actions that DEA has taken, in 2011 three distributors agreed to pay fines totaling more than $58 million and, in 2013, two distributors agreed to pay fines totaling more than $80 million; some registrants and one national association suggested that these fines could be influencing distributors' decisions to place thresholds on orders. (See app. III for additional data on civil fines.) Many individual pharmacies and chain pharmacy corporate offices reported that these stricter thresholds have limited, to a great or moderate extent, their ability to supply drugs to those with a legitimate need. (See table 7.)
In their open-ended responses to our survey, some registrants expanded upon how DEA enforcement actions have affected their business practices and subsequently affected patient access. A chain pharmacy corporate office reported that pharmacists are afraid of being the target of DEA enforcement actions even if they fill a prescription in good faith and with good judgment. Instead of erring on the side of a patient when considering filling a prescription, the chain pharmacy corporate office said that pharmacists are taking actions to try to protect their DEA registration that come at the expense of the patient. For example, one individual pharmacy reported that it turned away patients without taking steps to verify whether a controlled substance prescription was legitimate because the pharmacy could not serve new controlled substance patients without risking being cut off by its distributor. This pharmacy said that DEA has clearly stated that it is not calling for distributor cutoffs (i.e., thresholds), but its distributors have communicated that these changes are made because of fear of DEA enforcement actions, which has led many pharmacies to refuse to fill legitimate prescriptions. A distributor reported that it refuses to distribute large volumes of controlled substances to prescribers or pharmacies that specialize in pain management, even if it has no evidence that the prescribers or pharmacies are engaged in diversion. This distributor said that DEA has stated that the agency would hold distributors accountable for diversion that occurs at the prescriber and pharmacy level. Therefore, according to this distributor, supplying a large volume of controlled substances to customers with a pain management practice creates too great a risk of being the target of a DEA enforcement action for it to continue to service such requests. Further, several individual pharmacies expressed concern in their open-ended responses that certain business practices, such as distributors placing thresholds on their orders for controlled substances, have affected their ability to care for patients by limiting access to these drugs.

A few national associations also spoke of indirect effects resulting from the business climate that enforcement actions have created, which could ultimately limit access for legitimate needs. For example, one national association said that following a large DEA fine against one distributor, and in the absence of clear DEA guidance, distributors became concerned about how to determine that an order is suspicious and therefore elected to arbitrarily set thresholds for the amount of controlled substances pharmacies could order. In addition, an official from another national association said that prescribers find it difficult to address questions from pharmacists about patients' need for certain prescription medications, and that responding to these questions takes time away from providing care to patients and could affect patient access to certain medications. Of the national associations and state agencies we interviewed that offered a perspective on the potential for limited access, more than half (19 of 28) expressed concern that DEA's enforcement actions have limited access to these drugs for legitimate medical needs.
For example, officials from one state agency said that DEA has taken actions against pharmacists in that state that have caused concern among some pharmacists, resulting in instances where patients with legitimate prescriptions are denied access to prescription drugs. However, DEA officials in the four DEA field office divisions we spoke with said that they generally did not think that their enforcement actions have had a negative effect on access, and headquarters officials from DEA's Office of Diversion Control indicated that they did not believe their enforcement actions had any bearing on access issues. DEA field office officials said that they have rarely heard about any access concerns, although neither DEA field office nor headquarters officials indicated that they have taken steps to obtain any information about the extent of access issues. DEA headquarters officials said that they could not tell a distributor that a pharmacy is ordering too many controlled substances; there are no federal quotas on these orders. Additionally, DEA headquarters officials said that if access is limited, the patient should contact his or her state pharmacy association and explain the situation, and that the state pharmacy board could intervene. DEA headquarters officials also told us that if a pharmacy is unable to fill a prescription because distributor thresholds have limited the amount of drugs the pharmacy has available to fill prescriptions, that pharmacy should help the patients find another pharmacy where they can get the medications, as they should in any case in which the pharmacy could not fill a prescription. However, while DEA's recommendation may be valid for some patients, it does not take into account that certain patients could experience hardships in trying to find another pharmacy to get their prescriptions filled. For example, patients living in rural areas may have a limited number of pharmacies nearby, and some patients, such as those with cancer, may be too ill to travel to different pharmacies for their medications. As previously noted, internal control standards for federal agencies state that management should ensure there are adequate means of communicating with stakeholders that may have a significant impact on the agency achieving its goals. If patients with legitimate medical needs are losing access to prescription drugs because DEA registrants are unclear about their roles and responsibilities under the CSA, and registrants have not proactively raised concerns about access issues directly with DEA, more regular communication with its registrants, as previously discussed, could provide the agency with more opportunities to obtain registrants' input regarding concerns about access issues. Further, more regular communication between DEA and its registrants, including clearer guidance, could help to mitigate registrants' fears of taking actions that would make them targets of DEA enforcement actions and investigations, and help registrants make business decisions that balance ensuring that patients have access to needed medications with controlling abuse and diversion. The magnitude of the prescription drug abuse problem, including high rates of overdose deaths, requires a response from all levels of government, industry, and other stakeholders.
And while many federal agencies have important responsibilities in addressing prescription drug abuse and diversion, DEA plays a key role because it administers and enforces the CSA, and in doing so interacts with a wide range of nonfederal entities that are stakeholders in the prescription drug supply chain. DEA faces a significant challenge in simultaneously ensuring the availability of controlled substances for legitimate use while limiting their availability for diversion and abuse. Therefore, adequate DEA communication with and guidance for its registrants are essential to help ensure that registrants take actions that prevent abuse and diversion but do not unnecessarily diminish patients' access to controlled substances for legitimate use because of their uncertainty about how to appropriately meet their CSA roles and responsibilities. While many of the registrants, state government agencies, and national associations that have interacted with DEA were generally satisfied with these interactions, some of these stakeholders said they needed improved communication and guidance regarding registrants' roles and responsibilities for preventing abuse and diversion under the CSA. More DEA communication with registrants could help improve their awareness of various DEA resources, as well as help DEA better understand registrants' information needs, such as their need for improved guidance. While providing additional guidance to registrants, particularly distributors and pharmacies, about their CSA roles and responsibilities cannot ensure that registrants are meeting them, doing so would give DEA greater assurance that registrants understand their CSA responsibilities. Additionally, DEA has stated that its goal is bringing registrants into compliance rather than taking enforcement actions, and DEA can move closer towards this goal by improving its communication and information sharing with registrants, consistent with federal internal control standards. To strengthen DEA's communication with and guidance for registrants and associations representing registrants, and to support the Office of Diversion Control's mission of preventing diversion while ensuring an adequate and uninterrupted supply of controlled substances for legitimate medical needs, we recommend that the Deputy Assistant Administrator for the Office of Diversion Control take the following three actions: Identify and implement means of cost-effective, regular communication with distributor, pharmacy, and practitioner registrants, such as through listservs or web-based training. Solicit input from distributors, or associations representing distributors, and develop additional guidance for distributors regarding their roles and responsibilities for suspicious orders monitoring and reporting. Solicit input from pharmacists, or associations representing pharmacies and pharmacists, about updates and additions needed to existing guidance for pharmacists, and revise or issue guidance accordingly. We provided a draft copy of this report to the Department of Justice for its review, and DEA's Office of Diversion Control provided written comments, which are reproduced in full in appendix IV. In its comments, DEA described the actions that it plans to take to implement our three recommendations. However, we identified additional actions DEA should take to fully implement our recommendations.
In addition to providing comments on the recommendations, DEA also commented on other aspects of our draft report, including some of the results and conclusions from our surveys, and referred to some survey results as anecdotal data. Because our surveys were designed and conducted to produce reliable and generalizable estimates, we are confident that our survey results accurately represent the perspectives of registrants about their interactions with DEA and their concerns about their roles and responsibilities under the CSA. We are also confident that the conclusions we drew from the survey results were reasonable and appropriate. Regarding our first recommendation to identify and implement means of cost-effective, regular communication with distributor, pharmacy, and practitioner registrants, DEA agreed that communication from DEA to the registrant population is necessary and vital. The agency stated that it is in the planning stages of developing web-based training modules for its registrant population, to include training for pharmacists on their corresponding responsibilities and potential training for manufacturers and distributors to include ARCOS reporting and how to request a quota. While DEA did not specifically mention developing training for distributors on suspicious orders monitoring in its comments, our survey results suggest that this type of training for distributors would also be helpful. DEA also stated that it is considering implementing a listserv to disseminate information on various topics to its registrants, including information on cases involving diversion of controlled substances, and will continue to explore other means of cost-effective communication with its registrants. Additionally, while DEA agreed that communication with its registrants is necessary and vital, it also suggested that registrants that are not in frequent communication with the agency do not deem such communication to be necessary and noted that its registrant community has not broached the subject of additional guidance or communication. However, our survey data show that registrants are not fully aware of DEA conferences and resources and want additional guidance from, and communication with, the agency. Therefore, we continue to believe that it is DEA’s responsibility to reach out to its registrants, and believe that doing so will help DEA better understand registrants’ information needs. DEA raised concerns about our second recommendation to solicit input from distributors, or associations representing distributors, and develop additional guidance for distributors regarding their roles and responsibilities for suspicious orders monitoring and reporting. DEA stated that short of providing arbitrary thresholds to distributors, it cannot provide more specific suspicious orders guidance because the variables that indicate a suspicious order differ among distributors and their customers. Instead, DEA highlighted regulations that require distributors to design and operate systems to disclose suspicious orders. However, according to DEA’s Customer Service Plan for Registrants, DEA is responsible for developing guidance for registrants regarding the CSA and its regulations, and the agency was able to create such guidance for pharmacy and practitioner registrants. DEA also noted that it has steadily increased the frequency of compliance inspections of distributors in recent years. 
DEA stated that this has enabled the agency to take a more proactive approach in educating its registrants and ensuring that registrants understand and comply with the CSA and its implementing regulations. While we agree that inspections provide registrants with an opportunity for communication with DEA and may provide specific information related to compliance with the CSA, we do not believe that formal inspections provide registrants with a neutral educational setting in which to obtain a better understanding of their CSA roles and responsibilities. DEA also provided examples of how the agency has provided additional information related to suspicious orders monitoring to distributor registrants who participate in its Distributor Initiative briefings and its distributor conferences. However, these briefings and conferences reach only the distributors that participate in them. Therefore, we continue to believe that DEA could provide additional written guidance for distributors that would be more widely accessible to all distributor registrants. DEA did not comment on whether it plans to solicit input from distributors, or associations representing distributors, on developing additional distributor guidance, and we continue to believe that obtaining input from these parties would help DEA better understand distributors' needs related to their CSA roles and responsibilities. With regard to our third recommendation to solicit input from pharmacists, or associations representing pharmacies and pharmacists, about updates and additions needed to existing guidance for pharmacists, and revise or issue guidance accordingly, DEA described actions it would take to partially address the recommendation. Specifically, DEA stated that it would work to update the Pharmacist's Manual to reflect two subject matter changes made since the manual was last updated in 2010: (1) the rescheduling of hydrocodone from schedule III to schedule II and (2) the new rules on disposal of controlled substances. However, DEA did not comment about providing any additional guidance to pharmacists related to their roles and responsibilities in preventing abuse and diversion under the CSA. Because our survey results showed that this was a primary area of concern for individual pharmacies and chain pharmacy corporate offices, we believe any updates to the Pharmacist's Manual should also include additional information specific to pharmacists' corresponding responsibilities under the CSA. DEA also did not comment on whether it plans to solicit input from pharmacists, or associations representing pharmacies and pharmacists, on updating and revising guidance for pharmacists; however, we continue to believe such input would be beneficial for DEA to better understand its pharmacy registrants' needs and how best to address them. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Attorney General, the Administrator of DEA, and other interested parties. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or at kohnl@gao.gov. Contact points for our Office of Congressional Relations and Office of Public Affairs can be found on the last page of this report. Other major contributors to this report are listed in appendix V.
This report examines (1) how and to what extent selected registrants interact with the Drug Enforcement Administration (DEA) related to their responsibilities for preventing prescription drug abuse and diversion under the Controlled Substances Act (CSA), and registrants' perspectives on those interactions, (2) how selected state agencies and national associations interact with DEA related to reducing prescription drug abuse and diversion, and their perspectives on those interactions, and (3) stakeholders' perspectives about how DEA enforcement actions have affected abuse and diversion of prescription drugs and access to those drugs for legitimate medical needs. To address our first and third objectives, we administered four web-based nationally representative surveys to the following three types of DEA registrants: drug distributors, pharmacies, and practitioners. To further address all three objectives, we interviewed government officials from 16 agencies in four states (California, Florida, Kentucky, and New York), officials at 26 national associations and nonprofit organizations (referred to as "national associations" throughout this report), and officials at both DEA headquarters and selected field offices. Finally, to help address our third objective, we reviewed data on DEA enforcement actions taken against DEA registrants in the three categories that we included in our surveys (distributors, pharmacies, and practitioners) from fiscal year 2009 through fiscal year 2013 to identify any trends in DEA's enforcement actions over a recent time period. To address the first and third objectives, we surveyed samples of practitioners, distributors, and pharmacies that were registered with DEA to prescribe, administer, or handle controlled substances about their interactions with DEA and perspectives on DEA enforcement. The survey was designed to collect detailed reports from registrants and make generalizable estimates of the nature and extent of their interaction with DEA programs and staff related to registrant responsibilities under the CSA. The survey was also designed to measure registrant perceptions of the impact of DEA enforcement actions on their own business practices, or the business climate in which they operate, as well as their perspectives on whether enforcement actions have had an effect on reducing abuse and diversion or on limiting patients' access to prescription drugs for legitimate medical needs. Of the approximately 1.5 million DEA registrants as of January 2014, the target populations for our survey were restricted to distributors, pharmacies, and practitioners in specific business activity categories. We selected these categories of registrants because they are the primary DEA registrants in the prescription drug supply chain and are more likely to be the focus of DEA enforcement actions than other categories of registrants, such as researchers or drug importers. Our target populations were also restricted to those with an active registration status; eligible to distribute, dispense, administer, or prescribe either Schedule II or III drugs; and located in the continental United States. We used DEA's CSA Master File, as of January 13, 2014, to define the target populations, and to create the listings from which we drew our survey samples.
Our target populations also excluded additional identifiable registrants outside the scope of our review, such as federal government registrants, veterinarians or veterinary-oriented businesses, and research-oriented academic registrants. Distributors in our target population were restricted to those registrants with the DEA business activity code F and subcode 0. Pharmacies were restricted to those with activity code A and subcodes 0 ("Retail Pharmacies"), 1 ("Central Fill Pharmacies," later excluded from the survey sample if not part of a chain pharmacy corporation), or 3 ("Chain Pharmacies"). Practitioners in our target population were restricted to those with activity codes and subcodes listed in table 8. The total number of registrants in the DEA CSA Master File database that we received, and the total number of registrants initially designated as eligible for the target populations, prior to sampling, are listed in table 9. In our interviews with national pharmacy associations and in our survey pretests with selected chain pharmacies, we learned that the corporate offices of the larger chain pharmacies generally interact with federal agencies and other groups on issues related to prescription drug abuse and diversion, as opposed to their individual pharmacy locations. Therefore, we sent a separate survey to the corporate offices for the chain pharmacies that we identified as having 50 or more registered stores so that the chain pharmacies could answer our survey on behalf of all of their stores. Through additional screening, we also removed practitioner registrants who were primarily in academic, federal government, or veterinary practice, and those practitioners or distributors that were no longer prescribing, administering, storing, or handling controlled substances. The resulting four target populations were: distributors, individual pharmacies, chain pharmacy corporate offices, and practitioners. From the four target population lists, we drew simple random samples of sufficient sizes (see table 11) to account for reductions due to nonresponse, additional ineligibility, and the variability introduced by sampling, to yield percentage estimates from survey questions generalizable to each of the four populations with confidence intervals (sampling error, or the margin of error) no wider than ±10 percentage points at the 95 percent level of confidence. This planned level of precision applied only to questions to be asked of the entire sample; questions asked of only a subset of the sample would produce estimates with wider confidence intervals. We designed and tested four questionnaires, asking parallel questions tailored to each of the four populations. We consulted with subject matter experts in professional trade associations and survey methodologists, and reviewed past surveys of these populations and subjects. We also conducted cognitive interview pretests of draft versions of the questionnaires with registrants from each population (three practitioners, two distributors, one individual pharmacy, and two chain pharmacy corporate offices), and obtained a quality review by a separate GAO survey methodologist. Based on these developmental and evaluation activities, we made changes to the four draft questionnaires before administering them. Each questionnaire focused on four primary topic areas, made up of questions appropriate for the population: 1. awareness, use, and rating of DEA guidance, resources, and tools for understanding registrant responsibilities related to the CSA; 2.
nature, extent, and ratings of interactions with DEA headquarters or field staff related to CSA responsibilities through DEA conferences, initiatives, training, and other communication; 3. interaction with other federal agencies; and 4. impact of DEA enforcement actions on registrant business practices, including opinions on the effect of DEA enforcement actions on drug abuse and diversion and legitimate access to controlled substances. Individual pharmacies were asked to respond to the survey on behalf of their single pharmacy location that was selected in our sample, regardless of its ownership status. Chain pharmacy corporate offices were asked to respond to the survey on behalf of all of their registered pharmacy locations. The surveys were administered using a mixed-mode approach. A web questionnaire was the primary mode, and each survey began with an initial data collection attempt using an emailed username, password, and link to a questionnaire website. When email addresses were not available, or found to be nonworking, mail or phone contacts were made to obtain emails, to direct registrants to the website, or, as a secondary mode of response for practitioners and individual pharmacies, to fax or mail paper versions of the questionnaires. For practitioners, of the 208 usable responses, 47 were received in paper format. For individual pharmacies, of the 170 usable responses, 20 were received in paper format. A variety of contacts were made with each sample during survey fieldwork. For practitioners and individual pharmacies, an advance letter was mailed to all sampled registrants in late June and early July of 2014. Telephone contacts were made before and during fieldwork to obtain missing or incorrect contact information, encourage response, and determine final outcomes such as ineligibility or refusal. Paper questionnaires were mailed to nonresponding practitioner and individual pharmacy registrants; letters with web survey login information were mailed to distributors during the follow-up period. GAO staff made direct contacts with chain pharmacy corporate offices to manage survey administration. The key steps and dates of data collection are described in table 10. After the survey fieldwork period closed, the outcomes of the original samples drawn were tallied. (See table 11.) Each questionnaire, except for those sent to chain pharmacy corporate offices, began with a filter question to determine whether the sampled registrant had prescribed, dispensed, administered, stored, or handled controlled substances in the approximately two years prior to the survey (or, in the case of individual pharmacies, "currently"). This was known with certainty for the chain pharmacy corporate offices, but some of the respondents in the other registrant samples had not performed this activity: 14 percent of practitioners, 11 percent of distributors, and 4 percent of individual pharmacies reported that they had not performed this activity in the last two years, or currently. These respondents were not asked the rest of the survey questions, which were only applicable to the subset of 179 practitioners, 152 distributors, and 162 individual pharmacies that had performed these activities recently. We statistically adjusted, or weighted, survey results to multiply the contribution of each responding member of the sample, to produce estimates that represented the entire population.
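To illustrate the mechanics, under simple random sampling the base weight is the population size divided by the number of usable responses, and the margin of error follows from the usual normal approximation. The sketch below is a minimal illustration of these calculations, not the estimation program actually used for the surveys; the population and response counts in the example are made up.

```python
import math

def srs_estimate(pop_size, respondents, yes_count, z=1.96):
    """Weighted estimate of a proportion from a simple random sample.

    Each respondent carries a base weight of pop_size / respondents, so
    the weighted "yes" total estimates the population count. The 95
    percent confidence interval uses the normal approximation with a
    finite population correction.
    """
    weight = pop_size / respondents              # base weight, greater than one
    p_hat = yes_count / respondents              # estimated proportion
    fpc = (pop_size - respondents) / (pop_size - 1)
    moe = z * math.sqrt(fpc * p_hat * (1 - p_hat) / respondents)
    return p_hat, moe, weight * yes_count

# Hypothetical figures: 162 responding pharmacies drawn from 20,000, 113 answering "yes."
p, moe, total = srs_estimate(20_000, 162, 113)
print(f"{p:.0%} +/- {moe:.0%} of pharmacies (roughly {total:,.0f} pharmacies)")
```

With the worst-case proportion of 0.5, the same formula shows that roughly 96 completed responses are enough to meet the ±10 percentage point planning target described above.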
Weights greater than one were applied to all but the chain pharmacy corporate office survey results, which were not based on a sample, as that survey included all 38 members of the target population as we defined it, each contributing a weight of one. Because we followed a probability procedure based on random selections, our samples are only three of a large number of samples that we might have drawn. As each sample could have provided different estimates, we express our confidence in the precision of our particular samples' results as 95 percent confidence intervals (e.g., from x to y percent). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. As a result, we are 95 percent confident that each of the confidence intervals based on our surveys includes the true value in the study population. Throughout this report, the confidence intervals surrounding our estimates are no more than plus or minus 10 percentage points, unless otherwise noted. In addition to sampling error, questionnaire surveys are subject to other potential errors: failure to include all eligible members in the listing of the population, measurement errors when administering the questions, nonresponse error from failing to collect information on some or all questions from those sampled, and data processing error. We took steps to limit each type of error. The DEA CSA Master File database we used to create our listings of the populations was assessed as reliable and is likely the most comprehensive listing of DEA registrants. Our manual screening and presurvey contacts with the original oversamples mitigated this potential source of error. Our survey design, testing, and evaluation steps were intended to reduce measurement error. Because response rates for practitioners and individual pharmacies fell below 80 percent, a level below which the risk of bias due to missing data is generally considered to increase, we performed nonresponse bias analyses to determine whether those not responding would have answered key questions in a fundamentally different way. Based on the information available to us to compare respondents to nonrespondents, we found no evidence of a difference on a characteristic that might reasonably be expected to determine the propensity or nature of response. Finally, all data processing and analysis programming was verified by a separate data analyst, and sample and response tracking datasets were independently reviewed. We analyzed survey responses and compared them to federal internal control standards related to information and communication and the standards in DEA's Office of Diversion Control Customer Service Plan for Registrants. To further address our objectives, we interviewed government officials at 16 agencies in four states (California, Florida, Kentucky, and New York) and officials at 26 national associations to obtain information about interactions with DEA, their perspectives about those interactions, and their views about the effects of DEA enforcement actions on abuse and diversion and access to legitimate prescription medication.
We selected these four states based on the following criteria: (1) they had varied drug overdose death rates per 100,000 people based on 2010 CDC data, (2) they received federal grants for their prescription drug monitoring programs in 2012 and 2013 from the Department of Justice's Bureau of Justice Assistance and the Department of Health and Human Services' Substance Abuse and Mental Health Services Administration, (3) they represented different geographic regions of the country (as represented by DEA domestic field divisions), and (4) they were among states that were mentioned by national associations during our interviews as having unique or innovative initiatives to address prescription drug abuse and diversion. In each of the four states, we interviewed officials who represented the state's Controlled Substances Authority, pharmacy board, medical board, law enforcement agency, and the agency that oversees the state's prescription drug monitoring program, for a total of 16 state agencies. The 26 national associations represented patients, practitioners, pharmacies and pharmacists, distributors, state regulatory authorities, state and local law enforcement, and drug manufacturers, among other relevant stakeholder types. Although the perspectives we obtained during the interviews with state agencies and national associations are not generalizable, the interviews provided insights regarding how these types of entities interact with DEA and indicated common areas of concern. We also obtained documents from and interviewed DEA Office of Diversion Control officials who have oversight responsibility for DEA registrants and are engaged in addressing prescription drug abuse and diversion to learn about how DEA interacts with its registrants and other nonfederal stakeholders, and to obtain DEA's perspectives on information we obtained from our survey results and interviews with nonfederal stakeholders. In addition, we interviewed officials in DEA field offices in each of the four states in our study, such as supervisors overseeing both diversion investigators and special agents, to obtain their views about engaging with state agencies on efforts related to reducing prescription drug abuse and diversion. We interviewed officials in the following four DEA field offices: the Miami Division, the San Francisco Division, the Kentucky District Office, and the New York Division. We compared DEA's responses regarding its interactions with registrants and nonfederal stakeholders to federal internal control standards related to information and communication and the standards in DEA's Office of Diversion Control Customer Service Plan for Registrants. To further address our third objective, we reviewed data on DEA investigations and enforcement actions taken from fiscal year 2009 through fiscal year 2013 against the DEA registrant categories that we included in our survey. We examined the data to determine if there were any trends over a recent time period. Investigations included regulatory investigations (i.e., scheduled investigations or inspections conducted every 2, 3, or 5 years), complaint investigations, and criminal investigations.
Enforcement actions included administrative actions (e.g., formal administrative hearings, letters of admonition to advise registrants of any violations, and orders to show cause to initiate revocation or suspension of a registration), civil actions, where penalties generally include monetary fines, and criminal actions, where penalties generally include incarceration and fines. We determined that the data were sufficiently reliable for purposes of our report. We conducted this performance audit from August 2013 to June 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Tables 12 through 30 contain selected data from our surveys of DEA registrants. Between July 30, 2014 and October 14, 2014, we surveyed generalizable random samples of distributors, individual pharmacies, and practitioners, and we surveyed all of the corporate offices of the 38 chain pharmacies we identified using a DEA database. Percentages that are cited are weighted to represent the population. Generally, actual numbers of responses are cited when the number of responses for any registrant type in a particular table fell below 100. Tables 31 through 38 show data on DEA investigations and enforcement actions from fiscal year 2009 through fiscal year 2013, focusing in particular on data related to DEA-registered distributors, pharmacies, and practitioners (including mid-level practitioners). As of September 2013, there were nearly 1.5 million registered distributors, pharmacies, and practitioners. DEA conducts investigations of its registrants as part of the registrant monitoring process and to ensure compliance with the Controlled Substances Act (CSA) and its implementing regulations. Following an investigation, DEA can initiate a variety of enforcement actions for violations of the CSA or its implementing regulations. In addition to the contacts above, Karen Doran, Assistant Director; Kristy Love, Assistant Director; Amy Andresen; Willie Commons III; Christine Davis; Justin S. Fisher; Sally Gilley; Cathleen Hamann; Catherine Hurley; Eileen Larence; Kirsten Lauber; Lisa A. Lusk; Carl M. Ramirez; Christina Ritchie; and Monica Savoy made key contributions to this report.
The DEA administers and enforces the CSA as it pertains to ensuring the availability of controlled substances, including certain prescription drugs, for legitimate use while limiting their availability for abuse and diversion. The CSA requires those handling controlled substances to register with DEA. GAO was asked to review registrants' and others' interactions with DEA. This report examines (1) to what extent registrants interact with DEA about their CSA responsibilities, and registrants' perspectives on those interactions, (2) how state agencies and national associations interact with DEA, and their perspectives on those interactions, and (3) stakeholders' perspectives on how DEA enforcement actions have affected prescription drug abuse and diversion and access to those drugs for legitimate needs. GAO administered nationally representative web-based surveys to DEA-registered distributors, individual pharmacies, chain pharmacy corporate offices, and practitioners. GAO also interviewed officials from DEA, 26 national associations and other nonprofits, and 16 government agencies in four states representing varying geographic regions and overdose death rates. GAO's four nationally representative surveys of Drug Enforcement Administration (DEA) registrants showed that these registrants vary in the extent of their interaction with DEA related to their roles and responsibilities for preventing prescription drug abuse and diversion under the Controlled Substances Act (CSA). Specifically, GAO found that distributors and chain pharmacy corporate offices interacted with DEA more often than individual pharmacies or health care practitioners. The surveys also showed that many registrants are not aware of various DEA resources. For example, GAO estimates that 70 percent of practitioners are not aware of DEA's Practitioner's Manual. Of those registrants that have interacted with DEA, most were generally satisfied with those interactions. For example, 92 percent of distributors that communicated with DEA field office staff found them “very” or “moderately” helpful. However, some distributors, individual pharmacies, and chain pharmacy corporate offices want improved guidance from, and additional communication with, DEA about their CSA roles and responsibilities. For example, 36 of 55 distributors commented that more communication or information from, or interactions with, DEA would be helpful. DEA officials indicated that they do not believe there is a need for more registrant guidance or communication. Federal internal control standards call for adequate communication with stakeholders. Without more registrant awareness of DEA resources and adequate guidance and communication from DEA, registrants may not fully understand or meet their CSA roles and responsibilities. Officials GAO interviewed from 14 of 16 state government agencies and 24 of 26 national associations said that they interact with DEA through various methods. Thirteen of 14 state agencies and 10 of 17 national associations that commented about their satisfaction with DEA interactions said that they were generally satisfied; however, some associations wanted improved DEA communication. Because the additional communication that four associations want relates to their members' CSA roles and responsibilities, improved DEA communication with and guidance for registrants may address some of the associations' concerns. 
Among those offering a perspective, between 31 and 38 percent of registrants GAO surveyed and 13 of 17 state agencies and national associations GAO interviewed believe that DEA enforcement actions have helped decrease prescription drug abuse and diversion. GAO's survey results also showed that over half of DEA registrants have changed certain business practices as a result of DEA enforcement actions or the business climate these actions may have created. For example, GAO estimates that over half of distributors placed stricter limits on the quantities of controlled substances that their customers (e.g., pharmacies) could order, and that most of these distributors (84 percent) were influenced to a “great” or “moderate extent” by DEA's enforcement actions. Many individual pharmacies (52 of 84) and chain pharmacy corporate offices (18 of 29) reported that these stricter limits have limited, to a “great” or “moderate extent,” their ability to supply drugs to those with legitimate needs. While DEA officials said they generally did not believe that enforcement actions have negatively affected access, better communication and guidance from DEA could help registrants make business decisions that balance ensuring access for patients with legitimate needs with controlling abuse and diversion. GAO recommends that DEA take three actions to improve communication with and guidance for registrants about their CSA roles and responsibilities. DEA described actions that it planned to take to implement GAO's recommendations; however, GAO identified additional actions DEA should take to fully implement the recommendations.
Against this backdrop of long-standing concerns with DOD's internal controls over its payment processes, I would like to briefly outline the specifics of the two recent fraud cases. The first case involved fraudulent activity between October 1992 and February 1993 related to two Bolling Air Force Base (AFB) office automation contracts, resulting in an embezzlement of over $500,000. The Bolling AFB contracting officer's technical representative (COTR) had authority to authorize, approve, verify, and process contract and payment documentation and receive and accept goods and services. In addition, this person was not adequately supervised. The COTR's supervisor told investigators and us that she allowed the COTR to perform these duties independently without close supervision. The COTR was able to embezzle over $500,000 by creating fictitious invoices and receiving reports. He accomplished this scheme without detection by Air Force officials because he took advantage of his broad authority and the lack of adequate supervision. In addition, at the time of this incident, the address on the invoice was used as the remittance address, which is a control weakness. Therefore, directing the payments to himself was simply a matter of listing his post office box as the contractor address on the false invoices. Authorities were only alerted to the COTR's embezzlement when he attempted to withdraw a large portion of the funds, and suspicious bank officials put a hold on the accounts and notified the U.S. Secret Service. After coming under suspicion, the COTR prepared a letter stating that overbilling errors had been made and returned the funds to the government. Following an investigation by the Air Force Office of Special Investigation, the COTR pleaded guilty and was sentenced to 3 years probation and ordered to pay $495. Further details on the COTR's schemes can be found in GAO/OSI-98-15. We also were unable to determine whether the Air Force received the goods and services paid for under the two Air Force contracts associated with the Bolling AFB fraud because, in addition to missing records—another indicator of a weak internal control environment—a number of improper procedures were followed for receipt and control of equipment and services paid for under the contracts. For example, the COTR had also directed the contractor to falsify invoices and receiving reports by changing the type and quantity of items received under a delivery order. The second case covered fraudulent activities of a Staff Sergeant between October 1994 and June 1997 at two locations, resulting in a $435,000 embezzlement and attempted theft of over $500,000. The first known location where fraudulent payments were made was Castle AFB, California, between October 1994 and May 1995. The Staff Sergeant, who was Chief of Material in the Accounting Branch, had broad access to the automated vendor payment system, which allowed him to enter contract information, including contract numbers, delivery orders, modifications, and obligations, as well as invoice and receiving report information and remittance addresses. The Staff Sergeant used this broad access to process invoices and receiving report documentation that resulted in eight identified fraudulent payments totaling $50,770. The invoices prepared by the Staff Sergeant designated the name of a relative as the payee and his own mailing address as the remittance address, although any address, including a post office box, could have been used.
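Both schemes depended on the payment system accepting whatever remittance address appeared on the invoice. A basic compensating control is to validate each invoice's remittance address against the address in the awarded contract before payment. The sketch below is a minimal illustration of that idea, not DFAS's actual process; the record layouts, contract number, and addresses are hypothetical.

```python
# Hypothetical contract and invoice records; layouts are illustrative only.
contracts = {
    "F49642-93-D-0004": {"vendor": "Acme Office Systems",
                         "address": "1200 Industrial Pkwy, Dayton, OH"},
}
invoices = [
    {"contract": "F49642-93-D-0004", "remit_to": "1200 Industrial Pkwy, Dayton, OH", "amount": 18_500},
    {"contract": "F49642-93-D-0004", "remit_to": "P.O. Box 4417, Washington, DC", "amount": 96_200},
]

def validate_remittance(invoices, contracts):
    """Flag invoices whose remittance address departs from the contract.

    Paying only to the address in the contract (or in a registry that the
    vendor alone controls) removes the easiest way to misdirect payments.
    """
    for inv in invoices:
        contract = contracts.get(inv["contract"])
        if contract is None:
            yield inv, "no matching contract on file"
        elif inv["remit_to"] != contract["address"]:
            yield inv, f"remittance address differs from contract: {inv['remit_to']}"

for inv, problem in validate_remittance(invoices, contracts):
    print(f"${inv['amount']:,} invoice -> {problem}")
```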
Castle AFB closed in September 1995, and the Staff Sergeant was transferred to the Defense Finance and Accounting Service (DFAS) operating location at Dayton, Ohio. At DFAS Dayton, the Staff Sergeant was assigned as the Vendor Pay Data Entry Branch Chief in the Vendor Pay Division. As Vendor Pay Chief, the Staff Sergeant was allowed a level of access to the vendor payment system similar to the access he previously held at Castle AFB. Between November 1995 and January 1997, the Staff Sergeant prepared false invoices and receiving reports that resulted in nine fraudulent payments totaling $385,916. By designating the remittance address on the false invoices, the Staff Sergeant was able to direct fraudulent payments to an accomplice. In February 1997, the Staff Sergeant was reassigned to DFAS Dayton's Accounting Branch and his access to the vendor payment system was removed. However, while assigned to the Accounting Branch, the Staff Sergeant created two false invoices totaling $501,851 and submitted them for payment in June 1997, using the computer password of another DFAS employee who had a level of access comparable to that previously held by the Staff Sergeant. The Staff Sergeant's fraudulent activities were detected when, for an invoice totaling $210,000, an employee performing a reconciliation identified a discrepancy between the contract number associated with the invoice in the vendor payment system and the contract number in the accounting system. These two numbers should always agree. For this invoice, the Staff Sergeant failed to ensure that the contract cited was the same in both systems. Further research determined that the contract was not valid and the payment was fraudulent. A second fraudulent invoice for $291,851, the $50,770 in fraudulent payments at Castle AFB, and the $385,916 in fraudulent payments at DFAS Dayton were detected during the subsequent investigation of the DFAS Dayton fraud. The Staff Sergeant was convicted of embezzling over $435,000 and attempted theft of over $500,000. He was also convicted of altering invoices and falsifying information in the vendor payment system—a violation of 18 U.S.C. 1001—to avoid interest on late payments and improve reported performance for on-time payments, which is discussed later in this testimony. In July 1998, the Staff Sergeant was sentenced to 12 years imprisonment. The Dayton case also involved the altering of invoices to improve reported payment performance, thereby depriving government contractors of interest payments. Now, Mr. Chairman, I would like to turn our attention to the current control environment at the locations where these incidents occurred. Our work shows that similar internal control and system weaknesses continue to leave the Air Force vulnerable to fraudulent or improper vendor payments. For example, as of mid-June 1998, over 1,800 DFAS and Air Force employees had a level of access to the vendor payment system that allowed them to enter contract information, including the contract number, delivery orders, modifications, and obligations, as well as invoice and receiving report information and remittance addresses. In addition, the automated vendor payment system is vulnerable to penetration by unauthorized users due to weaknesses in computer security, including inadequate password controls. Finally, controls over remittance addresses remain a weakness.
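The reconciliation that exposed the Dayton fraud amounts to matching each payment's contract number across the two systems, a check that can be run routinely rather than performed ad hoc. The sketch below is a minimal illustration of that kind of check; the record layouts, field names, invoice numbers, and contract numbers are hypothetical, not those of the actual DFAS systems.

```python
# Hypothetical extracts keyed by invoice number; field names are illustrative.
vendor_pay = {
    "INV-83321": {"contract": "F33600-96-D-0021", "amount": 210_000},
    "INV-83347": {"contract": "F33600-96-D-0007", "amount": 12_400},
}
accounting = {
    "INV-83321": {"contract": "F33600-97-D-0105", "amount": 210_000},
    "INV-83347": {"contract": "F33600-96-D-0007", "amount": 12_400},
}

def reconcile(vendor_pay, accounting):
    """Flag invoices whose contract numbers disagree between systems.

    The contract number cited in the vendor payment system and the one
    in the accounting system should always agree; a mismatch warrants
    research into whether the contract is valid and the payment proper.
    """
    for inv, vp in vendor_pay.items():
        acct = accounting.get(inv)
        if acct is None:
            yield inv, "missing from the accounting system"
        elif vp["contract"] != acct["contract"]:
            yield inv, f"contract mismatch: {vp['contract']} vs. {acct['contract']}"

for inv, problem in reconcile(vendor_pay, accounting):
    print(inv, "->", problem)
```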
An August 1996 Air Force Audit Report disclosed that DFAS personnel did not properly control access to the vendor payment system and recommended that DFAS review and reduce vendor payment system access levels where appropriate. Our review of vendor payment system access levels as of mid-June 1998 showed that across DFAS and Air Force installations, individual users could enter contract data, including obligations, and invoice and receiving report information, and change remittance addresses for vendor payments. Currently, there are four access levels to the vendor payment system: inquiry, clerk, subsupervisor, and supervisor. Inquiry is read-only access. Clerk access allows the user to enter data other than remittance addresses. Subsupervisor access allows the user to input or change contract data; information on obligations, invoices, and receiving reports; and remittance addresses. Supervisor access allows the user to perform all subsupervisor functions as well as assign or remove access. The Staff Sergeant who committed the DFAS Dayton fraud had supervisor access. Proper and effective internal controls would preclude allowing any individual user to have the ability to record an obligation, create and change invoices and receiving reports, and enter remittance addresses. Our review of the vendor payment process at DFAS Dayton and DFAS Denver's Directorate of Finance and Accounting Operations confirmed that employees with supervisor and subsupervisor access to the vendor payment system could make fraudulent payments without detection by entering contract information and obligations, invoice and receiving report data, and changing or creating a remittance address. If the data on a false invoice and receiving report match the information on the voucher, certifying officers are not likely to detect a fraudulent payment through their certification process, a key prevention control. Second, problems with the lack of segregated access within the payment system application are compounded by the excessive and widespread access to the system throughout DFAS and the Air Force. Our review of vendor payment system access levels as of mid-June 1998 showed that 1,867 users across DFAS and Air Force installations had supervisor or subsupervisor access. Further, 94 of these users had not accessed the system since 1997, indicating that they may no longer be assigned to vendor payment operations. In addition, 171 users had not accessed the system at all, possibly indicating that access is not required as a regular part of their duties. DFAS officials told us they were unaware that such a large number of employees had broad access to the vendor payment system. After we briefed the DFAS Denver Center Director about our concerns, he told us that the current operational review program would be revised to place a greater focus on internal controls, including the review of vendor payment system access levels. DFAS officials told us that for Air Force employees outside the operating locations who had supervisor or subsupervisor access, but only needed status reports, they have initiated action to reduce the level of access to inquiry only. They also told us that they would consider modifying the supervisor and subsupervisor access levels across DFAS locations to provide for greater segregation of duties within the vendor payment application for employees responsible for processing payments.
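Findings like the dormant and never-used accounts described above can be surfaced with a simple periodic audit of the access list. The sketch below is a minimal illustration under assumed record formats; the user IDs, dates, and field names are made up, and the actual system's access data would differ.

```python
from datetime import date

# Hypothetical access records; levels mirror the four described above.
BROAD_LEVELS = {"supervisor", "subsupervisor"}   # can enter all key payment data
users = [
    {"id": "u1001", "level": "supervisor",    "last_access": date(1998, 6, 10)},
    {"id": "u1002", "level": "subsupervisor", "last_access": date(1997, 3, 2)},
    {"id": "u1003", "level": "subsupervisor", "last_access": None},  # never used
    {"id": "u1004", "level": "clerk",         "last_access": date(1998, 6, 1)},
]

def audit_access(users, as_of):
    """Flag broad-access accounts that are dormant or never used.

    Any account that can enter obligations, invoices and receiving
    reports, and remittance addresses concentrates duties that should
    be segregated, so each account flagged here deserves review.
    """
    for u in users:
        if u["level"] not in BROAD_LEVELS:
            continue
        if u["last_access"] is None:
            yield u["id"], "broad access never used; remove it"
        elif (as_of - u["last_access"]).days > 365:
            yield u["id"], "broad access dormant over a year; remove or justify"

for uid, finding in audit_access(users, as_of=date(1998, 6, 15)):
    print(uid, "->", finding)
```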
Finally, with respect to access controls, there are significant weaknesses in the mainframe operating system security and the vendor payment system application that would allow unauthorized users to make fraudulent or improper payments. A recently completed review by the Defense Information Systems Agency (DISA), performed at our request, identified the following problems with the mainframe operating system on which DFAS Denver's Directorate of Finance and Accounting Operations vendor payment system runs. Excessive access to powerful system utilities was permitted. These utilities enable a user to access and manipulate any data within the mainframe computer and vendor payment system. Routine system monitoring and oversight were not performed to identify and follow up on user noncompliance with security standards. This allowed serious security weaknesses, which are commonly exploited by hackers, to exist. For example, default passwords, which are commonly known, were not disabled. Further, passwords and user IDs were not managed according to DISA policies. For example, 12 users, including a security administrator, had passwords that were set to never expire, exceeding the 90-day DISA policy. In addition, our tests of the local network and communication links to the DFAS Denver Directorate of Finance and Accounting Operations and the DFAS Dayton vendor payment systems showed that these systems are vulnerable to penetration by unauthorized internal DFAS and Air Force users. For example, because vendor payment system passwords and user IDs are transmitted across the local network and communication links in clear text, readily available software would permit any user to read vendor payment system passwords and user IDs. The ability to misdirect payments to a personal post office box or to an accomplice's address was a major factor in the two fraud cases. Again, we found that weaknesses in controls over remittance addresses remain. Although DFAS changed its policy in April 1997 to require that the contractor address listed in the contract be used as the remittance address, it still permits the use of the invoice address if the invoice states that payment must be made to a specified address. This continues to afford a mechanism to misdirect payments for fraudulent purposes. This problem is compounded by the widespread access to the vendor payment system, just discussed, that allows users to enter changes to the remittance address. The Defense Logistics Agency has an initiative under way intended to validate remittance addresses. Under the Central Contractor Registry, contractors awarded a contract on or after June 1, 1998, are required to be registered in order to do business with the government. While DFAS Denver Center officials did not have a target date for full implementation of the Registry, they expect that 80 percent of the eligible contracts will be included in the Registry by mid-1999. The Registry, which is accessed through the Internet using a password or manually updated using a standard form, is intended to ensure that the contractor providing payment data, including the remittance address, is the only one authorized to change these data. However, this process, while an improvement, still has vulnerabilities related to control over remittance address changes. First, as previously discussed, DOD's computer systems are particularly susceptible to attack through connections on the Internet.
In addition, once the addresses are downloaded from the Registry to the vendor payment system, they will be vulnerable to fraudulent or improper changes due to the access control weaknesses previously discussed. Therefore, Registry controls over the remittance addresses will only be effective to the extent that access to remittance addresses currently held by DFAS and Air Force employees is eliminated or compensating controls are implemented. As I stated before, the Dayton case also involved the altering of invoices—a violation of 18 U.S.C. 1001—to improve reported payment performance, thereby depriving government contractors of interest payments. Again, we found that although some improvements have been made, today's control environment would still permit such activity at most DFAS locations. Specifically, DFAS lacks procedures to ensure that the date that invoices were received for payment and the date that goods and services were received were properly documented. These are critical dates for ensuring proper vendor payments and compliance with the Prompt Payment Act, which requires that payments made after the due date include interest. The falsification of payment documentation to improve reported performance for on-time payments undermined DFAS Dayton's internal controls over payments and impaired its ability to detect or prevent fraud. This was done by (1) altering dates on invoices received from contractors, (2) replacing contractor invoices with invoices created using an invoice template that resided on DFAS Dayton personal computers used by vendor payment employees, and (3) throwing away numerous other invoices. According to DFAS internal review and Air Force investigative reports, during 1996, DFAS Dayton also altered faxed invoices to change invoice receipt dates to avoid late payment interest required by the Prompt Payment Act. Not only did this practice undermine late payment controls, but an environment in which altered documents were commonplace also made it more difficult to detect other fraudulent activity, such as the false invoices generated for personal financial gain. Our review of selected fiscal year 1997 DFAS Dayton and DFAS Denver's Directorate of Finance and Accounting Operations vendor payment transactions identified a number of problems, including inadequate documentation. These issues affect not only Prompt Payment Act compliance but also the ability to determine whether payments were proper or whether the government received the goods and services paid for under Air Force contracts. We also found that neither DFAS Dayton nor DFAS Denver's Directorate of Finance and Accounting Operations tracks invoices, whether mailed or faxed, from the time they are received until they are entered into the vendor payment system. For DFAS Dayton, we tested 27 vendor payment disbursement transactions made during fiscal year 1997 as part of our audit of the governmentwide consolidated financial statements. Our tests disclosed that 9 of 27 disbursement transactions were not supported by proper payment documentation, which includes a signed contract, approved voucher, invoice, and receiving report. Of the remaining 18 disbursement transactions, receiving report documentation for 12 transactions did not properly document the date that goods and services were received. Instead, the receiving report documentation showed the date that the document was signed.
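Why these dates matter can be shown with a short calculation. The sketch below is a simplified illustration, not the statutory computation: it assumes a 30-day payment term running from the later of invoice receipt and acceptance of the goods, and an illustrative annual interest rate standing in for the Treasury rate the act prescribes. Recording a later invoice-receipt date makes the same late payment appear timely, eliminating the interest owed.

```python
from datetime import date, timedelta

def prompt_payment_interest(invoice_received, goods_accepted, paid, amount,
                            term_days=30, annual_rate=0.06):
    """Simplified late-payment interest check.

    Assumes payment is due term_days after the later of invoice receipt
    and acceptance of the goods; simple interest at annual_rate, for
    illustration only.
    """
    due = max(invoice_received, goods_accepted) + timedelta(days=term_days)
    late_days = (paid - due).days
    return 0.0 if late_days <= 0 else round(amount * annual_rate * late_days / 365, 2)

# Actual receipt date: the payment is 18 days late and owes interest.
print(prompt_payment_interest(date(1996, 3, 1), date(1996, 3, 3), date(1996, 4, 20), 50_000.0))   # 147.95
# Receipt date altered to March 25: the same payment looks on time.
print(prompt_payment_interest(date(1996, 3, 25), date(1996, 3, 3), date(1996, 4, 20), 50_000.0))  # 0.0
```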
At your request, we reviewed 77 vouchers for Bolling AFB contracts paid by DFAS Denver's Directorate of Finance and Accounting Operations in 1997 and 1998 that were obtained by your staff during their review of the DFAS Denver Directorate's vendor payment operations in March 1998. All 77 of the payment vouchers had deficiencies, ranging from incomplete information to identify the individual receiving the goods and services to a missing receiving report. For example, 13 of the 77 DFAS Denver Directorate's payment vouchers were replacement invoices that were marked "duplicate original" or "reprint," possibly indicating that the original invoices had been lost or misdirected before being entered in the vendor payment system. In addition, 31 of the 77 vouchers contained receiving report documentation that omitted the date that goods and services were received. On March 25, 1998, in response to concerns regarding these 31 vouchers, the DFAS Denver Directorate revised its receiving report requirements to help ensure proper documentation of this date. However, at the end of our review in mid-August 1998, we were told that this problem had not yet been corrected at DFAS Dayton or the other vendor payment operating locations. Our review also showed that 2 of the 77 vouchers had discrepancies similar to those identified as part of the DFAS Dayton investigation. Specifically, one voucher had been voided and resubmitted later without the appropriate interest calculation. The other voucher included an invoice that appeared to have been created by a DFAS Denver Directorate employee because, according to the contract, the contractor lacked invoicing capability. The practice of creating invoices for contractors provides an opportunity for DFAS and Air Force employees to create false invoices. In the absence of computerized invoicing, contractors can submit billing letters that identify quantities, items billed, and costs. Thus, there appears to be no valid reason for DFAS or Air Force employees to create invoices. In closing, Mr. Chairman, internal control weaknesses that contributed to past fraud in the Air Force's vendor payment process continue. Our report on these issues, released today, details a number of recommendations to help improve the controls over Air Force vendor payments. For example, we recommend that the DFAS Director strengthen payment processing controls by establishing separate organizational responsibility for entering (1) obligations and contract information, (2) invoice and receiving report information, and (3) changes in remittance addresses. We also recommend that the vendor payment system access levels be revised to correspond with the segregation of organizational responsibility and that the number of employees with vendor payment system access be reduced. Until DFAS and the Air Force complete the actions to address control weaknesses in vendor payment systems and processes and maintain accountability over goods and services received, Air Force funds will continue to be vulnerable to fraudulent and improper payments. Mr. Chairman, this concludes my statement. I will be pleased to answer any questions you or other Members of the Subcommittee may have at this time.
Pursuant to a congressional request, GAO discussed the current status of internal controls over the process for Air Force vendor payments, focusing on two recent cases of payment fraud. GAO noted that: (1) the two cases of Air Force vendor payment fraud resulted from a weak internal control environment; (2) the lack of segregation of duties and other control weaknesses created an environment where employees were given broad authority and the capability, without compensating controls, to perform functions that should have been performed by separate individuals under proper supervision; (3) similar internal control weaknesses continue to leave Air Force funds vulnerable to fraudulent and improper vendor payments; (4) for example, as of mid-June 1998, over 1,800 Defense Finance and Accounting Service (DFAS) and Air Force employees had a level of access to the vendor payment system that allowed them to enter contract information, including the contract number, delivery orders, modifications, and obligations, as well as invoice and receiving report information and remittance addresses; (5) no one individual should control all key aspects of a transaction or event without appropriate compensating controls; (6) this level of access allows these employees to submit all the information necessary to create fraudulent or improper payments; (7) in addition, the automated vendor payment system is vulnerable to penetration by unauthorized users due to weaknesses in computer security, including inadequate password controls; (8) further, DFAS lacked procedures to ensure that the date that invoices were received for payment and the date that goods and services were received were properly documented; (9) these are critical dates for ensuring proper vendor payments and compliance with the Prompt Payment Act, which requires that payments made after the due date include interest; and (10) until DFAS and the Air Force complete the actions to address control weaknesses in vendor payment systems and processes and maintain accountability over goods and services received, Air Force funds will continue to be vulnerable to fraudulent and improper vendor payments.
In December 2007, the OMB Office of Federal Procurement Policy issued guidance to chief acquisition officers and senior procurement executives to review and update their acquisition policies on the appropriate use of incentive fee contracts, which include award fee contracts. The guidance highlighted preferred practices including: (1) linking award fees to acquisition outcomes, such as cost, schedule, and performance results; (2) limiting the use of rollover to exceptional circumstances defined by agency policies; (3) designing evaluation factors that motivate excellent contractor performance by making clear distinctions between satisfactory and excellent performance; and (4) prohibiting payments for contractor performance that is judged to be unsatisfactory or does not meet the basic requirements of the contract. Further, OMB asked agencies to obtain and share practices in using award fees through an existing Web-based resource. The OMB guidance was developed based on award fee problems that had been identified by GAO and which DOD and NASA had begun to address. The following shows how OMB’s guidance is reflected in guidance provided by each agency: In response to GAO recommendations in 2005 and subsequent legislation, DOD issued guidance in 2006 and 2007 that states it is imperative that award fees are linked to desired outcomes, that the practice of rolling over unearned award fees should be limited to exceptional circumstances, that award fees must be commensurate with contractor performance, and that performance that is unsatisfactory is not entitled to any award fee. It also states that satisfactory performance should earn considerably less than excellent performance; otherwise, the motivation to achieve excellence is negated. While NASA’s Award Fee Guide already addressed the four issues, our previous work found that NASA did not consistently implement key aspects of its guidance on major award fee contracts. In response to our findings, a June 2007 NASA policy update reemphasized these policies to contracting staff and added a requirement that contracting officers include documented cost-benefit analysis when using an award fee contract. DOE has supplemental guidance to the Federal Acquisition Regulation (FAR) that outlines how award fees should be considered and in September 2008 created implementing guidance specific to management and operations contracts that links award fees to acquisition outcomes and limits the use of rollover. However, DOE’s departmental guidance does not clearly define the standards of performance for each rating category or prevent payment of fees for unsatisfactory performance. Divisions of DOE have developed their own standards and methods of evaluation which vary in their consistency with the OMB guidance. DHS provides guidance on award fees in its acquisition manual, but does not fully address the issues in the OMB guidance. The DHS guidance requires award fee plans to include criteria related (at a minimum) to cost, schedule, and performance and establishes that award fees are to be earned for successful outcomes and that no award fee may be earned against criteria that are ranked below “successful” or “satisfactory.” However, the manual does not describe standards or definitions for determining various levels of performance or include any limitation on the use of rollover. HHS officials did not have guidance specific to the use of award fees and were not aware of any such guidance at their operational divisions. 
Officials told us that they relied on the FAR for guidance on using award fees. However, contracting officials at HHS operational divisions noted a need for better guidance and told us that the FAR did not provide the level of detail needed to execute an award fee contract. As a result, contracting officers at these operational divisions have developed approaches to award fee contracts which vary in their degree of consistency with OMB's guidance. The National Defense Authorization Act for Fiscal Year 2009 directed that the FAR be amended by the middle of October 2009 to expand the requirements placed on DOD in 2007 to all executive agencies. A working group including representatives from these agencies is reviewing and updating the FAR. DOD officials also told us that they are developing supplemental guidance on award fees but will wait until the FAR working group completes its work before finalizing the guidance. By implementing the revised guidance, some DOD components reduced costs and improved management of award fee contracts. Potential changes at NASA—such as documented cost-benefit analyses—are too recent for their full effects to be judged. At DOE, DHS, and HHS, individual contracting offices have developed their own approaches to executing award fee contracts which are not always consistent with the principles in the OMB guidance or between offices within these departments. Use of Rollover: Guidance from DOD, DOE, and OMB states that allowing contractors a second chance at unearned fees should be limited to exceptional circumstances and should require high-level approval. NASA guidance does not allow rollover. Allowing contractors an opportunity to obtain previously unearned fees reduces the motivation of the incentive in the original award fee period. In almost all of the 50 DOD contracts we reviewed, rollover is now the exception and not the rule. While in 2005 we found that 52 percent of all DOD programs rolled over unearned fees, only 4 percent of the programs in our sample continue this practice. We reviewed active contracts from our 2005 sample and found that eliminating rollover will save DOD more than an estimated $450 million on 8 programs from April 2006 through October 2010. However, with the exception of NASA, where rollover is not allowed, we found instances at each agency in which rollover was allowed, at times for 100 percent of the unearned fee. Linking Fees to Outcomes: OMB's guidance indicates that award fees should be used to achieve specific performance objectives established prior to contract award, such as delivering products and services on time, within cost, and with promised performance, and must be tied to demonstrated results, as opposed to effort. Contracting officers and program managers across all five agencies said award fee contracts could benefit from objective targets that equate to a specific amount of the fee. While the combination of award fee contracts, which evaluate subjective criteria, and incentive contracts, which evaluate objective targets, was the preferred approach of several officials, there is no guidance on how to balance or combine these contract types. The effective use of subjective criteria requires that they be accompanied by definitions and measurements of their own to ensure they are linked to outcomes rather than processes or efforts. DOD's Joint Strike Fighter is one program that has incorporated more discrete criteria.
Since the application of these criteria, the contractor has consistently scored lower in the performance areas than it did in earlier periods, when less defined criteria were applied. We estimate that the more accurate assessment of contractor performance has saved almost $29 million in the less than 2 years since the policy change. However, contracts do not always use criteria that are linked to outcomes. For example, an HHS contract for call center services awarded a portion of the fees based on results, such as response times, but also included criteria based more on efforts, such as requiring the contractor to ensure that staffing levels were appropriate for forecasted volumes during hours of operation, rather than measuring results. Using Evaluation Factors to Motivate Excellent Performance: The amount of the fee established for satisfactory performance or meeting contract requirements generally rewards the contractor for providing the minimum effort acceptable to the government. Programs used a broad range in setting the amount of the fee available for satisfactory performance, but many left little to motivate excellent performance. For example, DOE's Office of Science uses a model that sets the amount of the fee able to be earned for meeting expectations at 91 percent, thus leaving 9 percent to motivate performance that exceeds expectations. In contrast, in an HHS contract for management, operation, professional, technical, and support services, the contractor earns 35 percent of the award fee for satisfactory performance, leaving 65 percent of the fee to motivate excellent performance. DOD and NASA are the only agencies we reviewed that provide guidance on the amount of the fee to be paid for satisfactory performance, up to 50 percent and 70 percent, respectively. However, not all DOD programs have followed this guidance. For example, a DOD Missile Defense Agency (MDA) contract signed in December 2007 awards the contractor up to 84 percent of the award fee pool for satisfactory performance, which the agency defines as meeting most of the requirements of the contract. This leaves only 16 percent of the award fee pool to motivate performance that fully meets contract requirements or is considered above satisfactory. Payments for Unsatisfactory Performance: DOD, NASA, and OMB have stated that performance not meeting contract requirements or judged to be unsatisfactory merits no award fee. However, while the median award fee scores indicate satisfaction with the results of the contract, programs we reviewed continue to use evaluation tools that could allow contractors to earn award fees without performing at a level that is acceptable to the government under the terms of the contract. For example, an HHS contract for Medicare claims processing rates contractor performance on a point scale, from 0 to 100, where the contractor can receive up to 49 percent of the fee for unsatisfactory performance and up to 79 percent for satisfactory performance (defined as meeting contract requirements). The National Nuclear Security Administration, a separate agency within DOE, uses a tool that prohibits payments for unsatisfactory performance, while the evaluation method used by DOE's Office of Science allows a contractor to earn up to 84 percent of the award fee for performance that is defined as not meeting expectations.
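To illustrate how an evaluation schedule can permit fees for unsatisfactory work, here is a minimal sketch of a score-to-fee lookup. The score bands are assumptions chosen for illustration; the 49 and 79 percent caps mirror the HHS Medicare claims-processing example above, while the contrasting schedule is a hypothetical one consistent with the OMB guidance.

```python
# Minimal sketch of a score-to-fee lookup in an award fee plan. The score
# bands are illustrative assumptions; the 49 and 79 percent caps mirror
# the HHS Medicare claims-processing example, and the second schedule is
# a hypothetical one consistent with the OMB guidance (no fee below
# satisfactory, a wide spread to motivate excellence).

def fee_fraction(score, schedule):
    """Return the maximum fraction of the award fee pool payable for a
    0-100 performance score, given (minimum_score, fraction) bands
    ordered from highest band to lowest."""
    for min_score, fraction in schedule:
        if score >= min_score:
            return fraction
    return 0.0

permissive = [(80, 1.00), (50, 0.79), (0, 0.49)]       # pays for poor work
omb_consistent = [(90, 1.00), (50, 0.50), (0, 0.00)]   # hypothetical contrast

pool = 1_000_000  # illustrative award fee pool
for score in (40, 60, 95):
    print(score,
          f"permissive ${pool * fee_fraction(score, permissive):,.0f}",
          f"OMB-consistent ${pool * fee_fraction(score, omb_consistent):,.0f}")
```

Under the permissive schedule, a score of 40, well below the satisfactory threshold, still pays $490,000 of a $1 million pool; the OMB-consistent schedule pays nothing for the same score and reserves half the pool for above-satisfactory performance.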
Further, current award fee plans for some programs using the Office of Science lab appraisal process allow for an award fee to be earned at the “C” level, which guidance defines as performance in which “a number of expectations ... are not met and/or a number of other deficiencies are identified” with potentially negative impacts to the lab and mission. According to Office of Science guidance, as much as 38 percent of the fee can be earned for objectives that fall in this category. While programs have paid more than $6 billion in award fees for the 100 contracts we reviewed, none of the five agencies has developed methods for evaluating the effectiveness of an award fee as a tool for improving contractor performance. Instead, program officials noted that the effectiveness of a contract is evident in the contractor’s ability to meet the overall goals of the program and respond to the priorities established for a particular award fee period. However, officials were not able to identify the extent to which successful outcomes were attributable to incentives provided by award fees versus external factors such as a contractor’s interest in maintaining a good reputation. When asked how they would respond to a requirement to evaluate the effectiveness of an award fee, officials told us that they would have difficulty developing performance measures that would be comparable across programs. Of the five agencies we reviewed, only DOD collects data on award fee contracts. In 2006, legislation required DOD to develop guidance on the use of award fees that included ensuring that the department collects relevant data on award and incentive fees paid to contractors and that it has mechanisms in place to evaluate such data on a regular basis. DOD has collected and analyzed data and provided that analysis to Congress and the Senior Procurement Executives of the military services and other DOD agencies. However, DOD does not have performance measures to evaluate the effectiveness of award fees as a tool for improving contractor performance and achieving desired program outcomes. DOD’s data collected on objective efficiencies include cost and schedule measures but do not reflect any consideration of the circumstances that affected performance, a critical element in determining award fees. While DOD has established an award fee community of practice through its Defense Acquisition University, most information regarding successful strategies for using award fees is shared through informal networks. Contracting officers at DOD, DOE, DHS, and HHS were unaware of any formal networks or resources for sharing best practices, lessons learned, or other strategies for using award fee contracts, and said they rely on informal networks or existing guidance from other agencies. However, within agencies, procurement executives are beginning to review award fee criteria across programs for consistency and successful strategies. Award fee contracts can motivate contractor performance when certain principles are applied. Linking fees to acquisition outcomes ensures that the fee being paid is directly related to the quality, timeliness, and cost of what the government is receiving. Limiting the opportunity for contractors to have a second chance at earning a previously unearned fee maximizes the incentive during an award fee period. Additionally, the amount of the fee earned should be commensurate with contractor performance based on evaluation factors designed to motivate excellent performance. 
Further, no fee should be paid for performance that is judged to be unsatisfactory or does not meet contract requirements. While DOD has realized benefits from applying these principles to some contracts, these principles have not been established fully in guidance at DOE, DHS, and HHS. Having guidance is not enough, however, unless it is consistently implemented. Further, the lack of methods to evaluate effectiveness and promote information sharing among and within agencies has created an atmosphere in which agencies are unaware of whether these contracts are being used effectively and one in which poor practices can go unnoticed and positive practices can be isolated. In our report, we recommended that DOE, HHS, and DHS update or develop implementing guidance on using award fees. This guidance should provide instructions and definitions on developing criteria to link award fees to acquisition outcomes, using an award fee in combination with incentive fees, rolling over unearned fees, establishing evaluation factors to motivate contractors toward excellent performance, and prohibiting payments of award fees for unsatisfactory performance. To expand upon improvements made, we recommended that DOD promote consistent application of existing guidance, including reviewing contracts awarded before the guidance was in effect for opportunities to apply it, and provide guidance on using an award fee in combination with incentive fees to maximize the effectiveness of subjective and objective criteria. We also recommended that the five agencies establish an interagency working group to (1) identify how best to evaluate the effectiveness of award fees as a tool for improving contractor performance and achieving desired program outcomes and (2) develop methods for sharing information on successful strategies. The agencies concurred with our recommendations and noted that both the FAR working group and an interagency working group could be potential mechanisms for implementing our recommendations. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions you or other Members of the Subcommittee may have. For questions regarding this statement, please contact John P. Hutton at (202) 512-4841 or at huttonj@gao.gov. Individuals making contributions to this testimony include Thomas Denomme, Assistant Director, Kevin Heinz, John Krump, and Robert Swierczek. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
From fiscal year 2004 through fiscal year 2008, agencies spent over $300 billion on contracts which include award fees. While many agencies use award fee contracts, over 95 percent of the government's spending using this contract type in fiscal year 2008 occurred at five: the departments of Defense (DOD), Energy (DOE), Health and Human Services (HHS), and Homeland Security (DHS) and the National Aeronautics and Space Administration (NASA). In December 2007, the Office of Management and Budget's (OMB) Office of Federal Procurement Policy issued guidance to chief acquisition officers and procurement executives across the government that echoed several recommendations we made in 2005 on the use of award fees and emphasized positive practices to be implemented by all agencies. GAO's statement today is based on our May 29, 2009, report, Federal Contracting: Guidance on Award Fees Has Led to Better Practices But is Not Consistently Applied (GAO-09-630). Like the report, this statement addresses how agencies are implementing OMB's guidance. Specifically, we (1) identified the actions agencies have taken to revise or develop policies and guidance to reflect OMB guidance on using award fees, (2) determined the extent to which current practices for using award fee contracts are consistent with the new guidance, and (3) identified the extent to which agencies collect and analyze information on award fees to evaluate their use and share that information within their agencies. Award fee contracts can motivate contractor performance when certain principles are applied. Linking fees to acquisition outcomes ensures that the fee being paid is directly related to the quality, timeliness, and cost of what the government is receiving. Limiting the opportunity for contractors to have a second chance at earning a previously unearned fee maximizes the incentive during an award fee period. Additionally, the amount of the fee earned should be commensurate with contractor performance based on evaluation factors designed to motivate excellent performance. Further, no fee should be paid for performance that is judged to be unsatisfactory or does not meet contract requirements. While DOD has realized benefits from applying these principles to some contracts, these principles have not been established fully in guidance at DOE, DHS, and HHS. Having guidance is not enough, however, unless it is consistently implemented. Further, the lack of methods to evaluate effectiveness and promote information sharing among and within agencies has created an atmosphere in which agencies are unaware of whether these contracts are being used effectively and one in which poor practices can go unnoticed and positive practices can be isolated.
I would like to begin my testimony by briefly describing the development of the FHLBank System, significant statutory changes to the System, and its operations. Then, I will describe FHFB's structure and activities. Congress passed the Federal Home Loan Bank Act (FHLBank Act) in 1932 and established the FHLBank System to facilitate the extension of mortgage credit and support the housing finance market, which had been severely affected by the Great Depression. The FHLBank Act required all federally chartered thrifts to become members of the FHLBank located in their districts (see fig. 1) and invest capital in the FHLBanks. The System acted as a central credit facility that made advances to thrifts which, in turn, were expected to make additional mortgage credit available to homebuyers and thereby revive the housing finance market. The act also established safeguards to help ensure the financial soundness of the FHLBanks. In particular, thrifts had to pledge high-quality assets in excess of the value of their advances as collateral. In addition, the act created the Federal Home Loan Bank Board (Bank Board) to oversee the safety and soundness regulation of the FHLBanks as well as the thrift industry. However, between 1985 and 1989, the Bank Board delegated its oversight responsibility for the thrift industry to each of the FHLBanks. The business of the FHLBank System and its members essentially remained unchanged from the 1930s until the 1980s. The System is generally credited with serving as a relatively low-cost funding source for thrifts during that period and helping to overcome regional shortages in housing credit. However, due to regional downturns, sharply rising interest rates, and poor management, hundreds of FHLBank member thrifts failed during the 1980s, causing a contraction in FHLBank System business. As a result, Congress appropriated billions of dollars to cover the costs associated with ensuring the payment of insured thrift deposits. In addition, the regulatory structure for the thrift industry—where FHLBanks supervised thrifts on behalf of the Bank Board—involved significant conflicts of interest. These conflicts (such as FHLBanks regulating institutions to which they made advances) compromised the safety and soundness oversight of the thrift industry. In response to these issues, Congress enacted FIRREA, which made substantial changes to the FHLBank System's membership, regulation, and mission requirements, as summarized below:
FIRREA opened FHLBank System membership to commercial banks and credit unions that engaged in mortgage activities. These voluntary members were required to invest capital in their FHLBank but could normally withdraw such capital on 6-months notice. However, FIRREA still required thrifts to be members of their FHLBank and did not allow them to withdraw their capital contributions.
FIRREA required the System to capitalize the Resolution Funding Corporation (REFCORP) to help pay for the deposit insurance fund losses resulting from thrift failures. Furthermore, the System had to pay up to $300 million per year of annual earnings to contribute towards interest payments on bonds issued by REFCORP to pay for thrift losses.
FIRREA abolished the Bank Board and established FHFB to regulate the 12 FHLBanks. FIRREA also transferred the Bank Board's previous supervisory and regulatory responsibilities for thrift institutions and their holding companies to the newly created Office of Thrift Supervision.
FIRREA also directed each FHLBank to establish or maintain two low- and moderate-income housing programs—the Community Investment Program (CIP) and the Affordable Housing Program (AHP). As part of CIP, each FHLBank makes advances to finance the purchase or rehabilitation of housing for eligible households and to finance other projects benefiting residents of low- and moderate-income neighborhoods. AHP requires each FHLBank to subsidize the financing of eligible low- and moderate-income housing, and FIRREA sets priorities for the use of these advances among eligible projects. Although FIRREA is credited with helping to restore the financial condition and supervision of the thrift industry during the 1990s, the capital structure of the FHLBank System and the financial obligations that the act imposed on the System and its members subsequently raised concerns. In particular, the fact that voluntary members, such as commercial banks, had the option of removing their capital from the System with 6-months notice appeared to increase financial risks to the System. Additionally, since the FHLBanks' earnings had been weakened by the declining profitability of the thrift industry, the $300 million REFCORP obligation posed a challenge. Consequently, FHLBanks looked for new sources of revenue and increased investments in mortgage-backed securities that offered potentially higher returns but exposed them to increased risks. After years of attempting to resolve these issues, Congress passed the Gramm-Leach-Bliley Act of 1999 (GLBA), which contained provisions that:
Eliminated the requirement that thrift institutions be members of the FHLBank System and made membership voluntary for all members. Additionally, GLBA established new capital requirements for FHLBank members that were intended to make the System's capital more permanent. I will describe the capital provisions of GLBA in more detail later in my testimony;
Revised the System's REFCORP obligations, moving from a fixed annual payment of about $300 million to a specified percentage (20 percent) of the System's annual earnings after AHP expenses. This change minimized the financial obligation on the System during periods of relatively low profitability but increased the total payment when profits increased; and
Expanded the amounts and types of collateral that FHLBanks could accept for advances from small members known as community financial institutions (CFI). GLBA permitted CFIs to pledge small business and agricultural loans as collateral for FHLBank advances.
Each of the 12 FHLBanks has a board of directors of at least 14 persons, with 8 elected by members and at least 6 appointed by FHFB. The FHFB-appointed directors are commonly referred to as public interest directors. Additionally, each FHLBank board appoints a president who is responsible for overseeing the institution's staff (see table 1). The president and staff are responsible for such activities as the FHLBank's asset and liability management, AHP and other community development activities, and compliance with laws and regulations. The FHLBank System raises funds in the capital markets through its Office of Finance (OF), which has a board of directors consisting of three individuals who serve 3-year terms. FHFB appoints the OF chair and selects two FHLBank presidents to serve as the other OF board members.
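A hypothetical worked example may make the REFCORP revision described above more concrete. The $300 million fixed payment and the 20 percent rate come from the provisions just discussed; the earnings figures below are invented for illustration.

```python
# Hypothetical illustration of GLBA's REFCORP revision: a fixed payment
# of about $300 million per year became 20 percent of annual earnings
# after AHP expenses. Earnings figures are invented for illustration.

FIXED_PAYMENT = 300e6   # approximate pre-GLBA annual obligation
RATE = 0.20             # post-GLBA share of earnings after AHP expenses

for earnings_after_ahp in (1.0e9, 1.5e9, 2.5e9):
    post_glba = RATE * earnings_after_ahp
    print(f"earnings after AHP ${earnings_after_ahp / 1e9:.1f} billion: "
          f"fixed ${FIXED_PAYMENT / 1e6:.0f}M vs. 20% rule ${post_glba / 1e6:.0f}M")
```

At $1.0 billion of earnings the 20 percent rule yields $200 million, less than the old fixed payment, easing the burden in lean years; at $2.5 billion it yields $500 million, more than the fixed payment, which is exactly the trade-off the testimony describes.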
As I discussed earlier, the FHLBank System can issue debt, generally referred to as consolidated obligations, at a relatively low cost due to its GSE status, which may allow members to fund mortgages at lower rates. Consolidated obligations are the "joint and several" obligations of the FHLBanks. That is, if an FHLBank defaults on its repayment obligations, all the other FHLBanks may have to cover its obligations. Although the federal government does not explicitly guarantee that it would provide financial assistance to the FHLBank System in a financial emergency, investors perceive an implied guarantee because of the ties between the government and the System. For example, each FHLBank has a federal charter and consolidated obligations are exempt from federal, state, and local taxes. Moreover, the federal government did provide financial assistance to other GSEs, such as Fannie Mae and the Farm Credit System, when they experienced financial difficulties during the 1980s. In addition to providing advances to its members, the FHLBank System provides member institutions other benefits and services. For example, FHLBanks generally pay dividends to their member financial institutions. Other services FHLBanks may offer members include providing discounts on advances for large transactions, funding the AHP and CIP programs to help members finance affordable housing and community development activities, and offering mortgage purchase programs as discussed next. Although reportedly no FHLBank has ever suffered a credit loss on an advance, the business activities of the FHLBanks have become increasingly complex and potentially risky in recent years largely due to the implementation of the mortgage purchase programs. All of the FHLBanks are authorized to purchase mortgages from members through programs such as the Mortgage Partnership Finance (MPF) program and the Mortgage Purchase Program (MPP). Through these mortgage purchase programs, FHLBanks purchase conventional or government-guaranteed mortgages directly from their members. The FHLBanks hold the mortgages on their books and bear the interest-rate risks associated with them. To manage the interest-rate risks, FHLBanks must employ sophisticated risk-management techniques including the use of financial derivatives. Although such strategies are appropriate for risk management, they require specialized expertise, sophisticated information systems, and an understanding and application of sometimes complex accounting rules. As I discuss later, some FHLBanks recently have encountered financial problems in managing the interest rate risks associated with their mortgage portfolios. FHFB is responsible for regulating the FHLBank System's safety and soundness as well as its mission achievement. The agency has a five-member board, with the President of the United States appointing four board members, subject to Senate approval. Each appointee serves a 7-year term. The fifth board member is the Secretary of the Department of Housing and Urban Development, or the secretary's designee. The President also designates one of the four appointed board members as the chair, subject to Senate approval. FHFB is located in Washington, D.C. and has a staff of about 124 individuals, including 17 examiners in eight cities where FHLBanks are located. FHFB supervises the FHLBanks by conducting annual on-site examinations and off-site monitoring to ensure that the Banks satisfy capitalization requirements and maintain their ability to raise funds in the capital markets.
On-site examinations are focused on particular risk areas (interest-rate risk, credit risk, and operational risk) and compliance with mission requirements such as the AHP program. Examiners set the scope for the examinations based on potential issues identified at previous examinations, as well as through quarterly monitoring. Off-site monitoring involves FHFB headquarters staff reviewing financial data on the FHLBanks on a continual basis. FHFB also conducts systemwide reviews of significant FHLBank operational, governance, and other practices and uses advisory bulletins and regulatory interpretations to convey guidance that addresses supervisory issues with systemwide implications. Under the FHLBank Act, FHFB is authorized to promulgate and enforce such regulations and orders as it deems necessary to carry out its responsibilities. The following summarizes several of FHFB's key authorities:
FHFB has the authority to issue cease-and-desist orders and other enforcement actions to address unsafe FHLBank practices. FHFB also has the authority to remove FHLBank officials and prohibit actions by FHLBank officers and directors;
FHFB does not have specific statutory authority to establish a prompt corrective action (PCA) mechanism, as do other federal banking regulators such as the Office of the Comptroller of the Currency and the Federal Deposit Insurance Corporation. Under such PCA authorities, bank regulators are required to take specific supervisory actions when bank capital levels fall below specified levels and may take other actions when specified unsafe and unsound actions occur. Although FHFB does not have PCA authority, FHFB officials believe they have all the necessary authorities to carry out their responsibilities; and
FHFB has the statutory authority to liquidate or reorganize a critically undercapitalized FHLBank "whenever [FHFB] finds that the efficient and economical accomplishment of the purposes of the [FHLBank Act] will be aided by such action."
Until 1989, the FHLBank System consisted of the 12 FHLBanks, OF, and thrifts, which were required to join. However, FIRREA and GLBA made substantial changes to this traditional structure. In this section, I will describe in more detail how FHLBank System membership, asset composition, and capital structure have changed over the past 15 years after the implementation of these statutes and FHFB regulations. Between 1990 and 2004, FHLBank System membership nearly tripled from 2,855 to 8,131 institutions (see fig. 2). As shown in the figure, the vast majority of the membership increase can be attributed to commercial banks, whereas in the same period, thrift membership declined markedly. In 1990, thrifts accounted for 98 percent of all System members but only 16 percent in 2004. In contrast, commercial banks accounted for 2 percent of all members in 1990 and 73 percent in 2004. A variety of factors may account for the surge in commercial bank membership. First, FHLBanks actively recruited commercial banks as members after FIRREA. Commercial banks may also have been attracted by the fact that FHLBank advances represent a stable and relatively low-cost source of funding. Additionally, in an attempt to offset declining membership resulting from the failure of many thrifts, FHLBanks modified their services and products to attract new members. For example, FHLBanks made changes in advance pricing and terms in response to market pressures.
Not only do commercial banks represent a large percentage of FHLBank System members, they also hold a large percentage of System capital and advances. As shown in figure 3, commercial banks now account for almost half of the System's capital and advances. However, member thrifts still account for 43 percent of all System capital and 50 percent of all advances. Although thrifts account for a relatively small percentage of FHLBank members, they have remained significant customers of the FHLBank System due to their focus on mortgage financing. FHLBank System assets, which consist of advances, investments, and mortgages, increased from $165 billion in 1990 to $934 billion in 2004. As shown in figure 4, the mix of the three asset types has fluctuated in this time period, although advances remained the largest category of assets. However, I note that the System did not hold mortgage assets until 1997. In 1990, advances represented about 70 percent of all the System's assets but declined to about 50 percent between 1991 and 1996. In contrast, FHLBank System investments—such as holdings of mortgage-backed securities (MBS) issued by Fannie Mae and Freddie Mac—increased from 27 percent of all assets in 1990 to 43 percent in 1996. During that period, as I have discussed, the number of thrifts declined, resulting in a loss of System advance customers, and concerns were raised that REFCORP obligations imposed significant financial burdens on the FHLBanks. Investments in MBS were viewed within the System as more profitable than traditional advances and a potential means to comply with the REFCORP obligations. Through rulemaking, FHFB facilitated the FHLBanks' ability to invest in MBS. In 1991, FHFB increased the limit on the amount of MBS that the FHLBanks could hold from 50 percent of capital to 200 percent of capital and, in 1993, FHFB raised the MBS-to-capital ratio to 300 percent. I note that investments as a percentage of all the System's assets began to decline in 1995 while advances began to increase, perhaps due, in part, to the increase in commercial bank members joining the System and taking advances. As I have discussed, FHFB also authorized the FHLBanks to begin purchasing mortgage assets through mortgage purchase programs in 1997. By 2003, such mortgage assets grew to almost 14 percent of all the System's assets (about $113 billion in total System mortgage assets at year-end 2003). The growth in mortgages generally occurred relative to investments, which declined from 40 percent of all assets in 1997 to 23 percent in 2003. However, in 2004, the System's mortgage assets leveled off, partly because of difficulties identified at some FHLBanks in managing such assets. I discuss this issue in the next section. As I discussed earlier, prior to 1999 voluntary FHLBank members, such as commercial banks, could withdraw their capital on 6-months notice, which raised questions about the stability of the FHLBanks' capital structure. To address this concern, GLBA established that FHLBank membership was voluntary but required that financial institutions that choose to become members invest more permanent stock in their FHLBank. Under the new capital structure, FHLBanks can issue class A stock, which can be redeemed with 6-months notice, and class B stock, which can be redeemed with 5-years notice, or both.
To help ensure that capital does not dissipate due to redemption in times of stress, GLBA does not allow an FHLBank to redeem or repurchase capital if following the redemption the FHLBank would fail to satisfy any minimum capital requirement. Under GLBA, the FHLBanks are also subject to both a leverage requirement (minimum capital-to-assets ratio) and a risk-based capital calculation. Under the leverage requirements, each FHLBank must comply with two minimum capital ratios. First, permanent capital (equal to amounts paid in for class B stock plus retained earnings) plus class A stock is to be at least 4 percent of assets. Second, class A stock plus 1.5 times permanent capital is to be at least 5 percent of assets. The risk-based capital standards account for credit risk, interest-rate risk, and operations risk. For credit risk, a FHFB regulation specifies capital requirements according to the mix of activities (advances, mortgages, etc.) in which the individual FHLBank is engaging. For interest-rate risk, each of the FHLBanks must have a FHFB-approved interest-rate risk model that provides an estimate of the market value of the FHLBank’s portfolio during periods of market stress. The capital requirement for operations risk is generally 30 percent of the total capital charge for credit and interest-rate risk. GLBA required each FHLBank to submit a capital plan to FHFB for review and approval. FHFB approved all 12 FHLBank capital plans by 2002 and 11 of the 12 FHLBanks have implemented their capital plans. According to FHFB, 10 of the 12 capital plans rely entirely on class B stock and two of the capital plans include class A stock. As part of the capital plan implementation process, FHFB has required the FHLBanks to submit plans for modeling interest-rate risk and related procedures for managing these risks. Finally, I would like to discuss some important challenges and questions affecting the FHLBank System. They include risk-management practices, the securitization of mortgage assets, the extent to which the FHLBank System is meeting its mission requirements, and the alleged impacts that large financial institutions are having on the System’s traditional cooperative structure and AHP program. Over the past year, FHFB identified risk-management deficiencies at two FHLBanks—Chicago and Seattle—primarily related to their management of the interest-rate risks associated with mortgage purchases. FHFB identified weaknesses at these FHLBanks in such areas as corporate governance, financial recordkeeping, audit, and financial performance. Additionally, FHFB entered into written enforcement agreements with both FHLBanks that require improvements in accounting practices and internal controls. The Chicago and Seattle FHLBanks were both required to submit 3-year business and capital plans to FHFB and hire outside management consultants to review the banks’ management and their board’s oversight of the banks. The Chicago FHLBank has submitted its plan to FHFB, which accepted it. The Seattle FHLBank received an extension and submitted its plan to FHFB on April 5, 2005. The financial problems identified at the Chicago and Seattle FHLBanks have had significant effects on their operations and business practices. For example, FHFB required the Chicago FHLBank to restate its financial results for 2003 and placed limits on the growth of the institution’s mortgage purchases until its risk-management practices improve. 
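As a recap of the two GLBA leverage requirements described earlier, the sketch below checks a hypothetical balance sheet against both ratios. All figures are invented, and the risk-based standards are reduced to the single stated rule that the operations-risk charge is generally 30 percent of the combined credit and interest-rate risk charges.

```python
# Minimal check of the two GLBA leverage ratios described in the testimony.
# Balance-sheet figures are hypothetical.

def meets_leverage_requirements(assets, class_a, class_b, retained_earnings):
    # Permanent capital equals amounts paid in for class B stock plus
    # retained earnings.
    permanent = class_b + retained_earnings
    ratio1 = (permanent + class_a) >= 0.04 * assets          # 4 percent test
    ratio2 = (class_a + 1.5 * permanent) >= 0.05 * assets    # 5 percent test
    return ratio1 and ratio2

def operations_risk_charge(credit_charge, interest_rate_charge):
    # Generally 30 percent of the combined credit and interest-rate charges.
    return 0.30 * (credit_charge + interest_rate_charge)

# A hypothetical all-class-B FHLBank with $50 billion in assets and
# $2.2 billion in permanent capital passes both tests.
print(meets_leverage_requirements(assets=50e9, class_a=0.0,
                                  class_b=1.9e9, retained_earnings=0.3e9))
```

Note that for the 10 capital plans that rely entirely on class B stock, the first test is the binding one: with no class A stock, permanent capital alone must reach 4 percent of assets, while the second test is satisfied once permanent capital reaches about 3.3 percent.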
Before these restrictions were imposed, the Chicago FHLBank had been the primary engine of growth for the mortgage purchase programs within the FHLBank System. The Seattle FHLBank has decided to exit from the MPP program and thereby stop purchasing mortgages from its members. FHFB officials told us they continue to monitor risk-management practices within the FHLBank System, particularly the management of interest-rate risks. FHFB officials said that their examinations continue to identify deficiencies in these areas and that they are working with the FHLBanks to correct them. In recent years, proposals have been made to permit the FHLBanks to securitize mortgage assets to provide for the continued growth of the mortgage purchase programs. Without securitization, which would permit FHLBanks to remove mortgage assets from their balance sheets, the System's ability to increase its mortgage purchases may be constrained by capital requirements. That is, since the FHLBanks must comply with capital requirements for assets such as mortgages held on their balance sheets, they would not be able to expand these programs without obtaining additional capital from their members, which may prove difficult. According to FHFB's chair, the agency should defer to Congress on the question of whether FHLBanks should be permitted to securitize their mortgage assets. Securitization offers potential benefits to the FHLBank System, but it raises questions as well. One potential benefit of securitization is that it would provide the FHLBanks with an additional tool to manage the interest-rate risks associated with mortgage purchases. Authorizing the FHLBanks to securitize mortgage assets has also been advocated as a means to increase competition in the secondary mortgage market, which could benefit lenders and homebuyers. However, questions exist about whether the FHLBanks would be able to develop the necessary infrastructure, including hiring staff with specialized expertise, to effectively manage securitization programs. Some FHLBank System members have also commented that securitization would further alter the System's traditional focus on providing advances to member institutions and would therefore be undesirable. Although anecdotal information exists on the benefits of the FHLBank System, limited quantitative analysis exists on the extent to which the FHLBanks' activities benefit homebuyers, mortgage finance, and community development. We recognize that conducting such research is challenging. First, isolating the FHLBanks' effects on mortgage markets is a complex and technical undertaking. Second, with the addition of mortgage purchase programs, the financial activities of the FHLBanks have become more sophisticated, thus complicating any analyses of benefits and costs. Nevertheless, assessing the outcomes of the FHLBank System's activities is important for Congress and others to determine whether the risks associated with the System are offset by the potential benefits. I would now like to highlight information and data limitations that hamper any assessment of mission achievement:
We are not aware of any studies on the extent to which FHLBank advances, mortgage purchases, and other activities directly benefit homebuyers through lower mortgage costs. In contrast, several studies have estimated the potential savings to homebuyers associated with the mortgage purchase activities of Fannie Mae and Freddie Mac.
Some of these studies estimate a substantial savings to homebuyers, while others conclude that the savings are small and that Fannie Mae and Freddie Mac as well as their shareholders are the primary beneficiaries. Although the studies' findings may differ, they provide an empirical basis for discussing the costs and benefits of Fannie Mae and Freddie Mac's activities.
Similarly, there is minimal empirical evidence on the extent to which the FHLBank System's advance business encourages lenders to expand their mortgage business. We have identified one existing study on this subject: a 2002 report by the Federal Reserve Bank of Cleveland. The study found that there is a significant positive relationship between an FHLBank member's use of advances and its mortgage finance activity. However, we believe the report has certain methodological limitations and should be interpreted with caution. For example, the report does not demonstrate that members increased their mortgage assets as a result of joining the System; it only shows that System members have relatively high mortgage assets compared to non-System members. (We are also aware that the FHLBank Council recently released two reports on this subject, but we have not had time to analyze them in preparation for this testimony.)
There is limited information as to why the placement of small business and agricultural collateral by small community financial institutions (CFI) to secure FHLBank advances has been minimal. GLBA expanded the types of collateral (including small business and agricultural collateral) that CFIs could pledge to secure FHLBank advances in the expectation that doing so would allow the System to better meet the needs of small institutions. However, FHFB data indicate that such collateral represents less than 1 percent of all collateral pledged to secure System advances. On the one hand, it may be the case that very few institutions are willing to pledge such collateral to secure advances. On the other hand, the potential exists that FHLBanks have established such strict underwriting standards—for example, by applying significant haircuts to the collateral—that CFIs have been discouraged from pledging it. We understand that FHFB is planning a conference later this year to gather additional information on the use of CFI collateral.
In recent years, questions have been raised about how holding companies with mortgage subsidiaries that are members of two or more FHLBank districts may affect the FHLBank System structure, which traditionally involved each financial institution belonging to one FHLBank. In a 2003 report, we noted that there were about 100 holding companies that had subsidiaries that were members of two or more FHLBank districts. Some observers have expressed concerns that such large financial institutions could pressure the FHLBanks to compete with one another on advance pricing terms—such as interest rates and collateral requirements—and that this competition could impair the overall safety and soundness of the FHLBanks. Our report noted significant differences in advance term pricing among the 12 FHLBanks and that the opportunity existed for holding companies to obtain advances from the FHLBank that offered the most favorable advance terms. Some FHLBank officials also said that holding companies seek to play one FHLBank off another, creating competition within the System. However, we also found that FHFB had not identified any material safety and soundness issues related to FHLBanks' advance-term pricing.
I would reiterate a statement in our 2003 report that FHFB has a continued responsibility to monitor the FHLBanks to help ensure that any competition within the System does not result in unsafe and unsound practices. Concerns have also been raised that the activities of large financial institutions such as holding companies are having negative effects on the AHP program in certain FHLBank districts. Under FIRREA, FHLBanks must contribute 10 percent of their previous year's earnings to subsidize housing finance for targeted groups. In some cases, financial institutions located in one FHLBank district have purchased banks or thrifts in other FHLBank districts. As such financial institutions grow through out-of-area acquisitions, they may be able to increase their business relations with their local FHLBank, thereby increasing its profitability. For example, such financial institutions may take out additional advances or sell additional mortgages to the FHLBank. With potentially increased profits from doing business with a larger member, the FHLBank would have additional funds to devote to the AHP program. In contrast, FHLBanks whose members were acquired potentially lose net income and AHP funding dollars. According to one FHLBank president, such acquisitions have hurt AHP funding in his bank's district. However, according to FHFB officials, recent research they conducted shows that mergers may have a short-term impact on AHP funding, but these effects seem to balance out over time. For example, financial institutions in an FHLBank district that lost members through acquisitions may purchase financial institutions in other FHLBank districts, thereby recapturing AHP funds. FHFB officials also said that they may develop a regulation to address any concerns associated with the effects of mergers on AHP funding. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions that you or other members of the committee may have at this time. For further information regarding this testimony, please contact me at 202-512-8678 or mccoolt@gao.gov or William B. Shear, Director, at 202-512-8678 or shearw@gao.gov, or Wesley M. Phillips, Assistant Director, at 202-512-5660 or phillipsw@gao.gov. Individuals making contributions to this testimony include Rachel DeMarcus, Austin Kelly, Jill M. Naamane, Andy Pauline, Mitchell B. Rachlis, and Barbara Roesmann. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The FHLBank System (FHLBank System or System) is a government-sponsored enterprise (GSE) that consists of 12 Federal Home Loan Banks (FHLBanks) and is cooperatively owned by member financial institutions, typically commercial banks and thrifts. The primary mission of the FHLBank System is to promote housing and community development generally by making loans, also known as advances, to member financial institutions. To minimize the potential for significant financial problems, the Federal Housing Finance Board (FHFB) regulates the FHLBank System's safety and soundness. Over time, a number of developments have affected the System's safety and soundness and have created pressures on its traditional cooperative structure. To assist the committee in understanding the important issues surrounding the FHLBank System and its regulation, this testimony provides information on the development of the System; two legislative changes and FHFB rulemaking that led to changes in membership, asset composition, and capital structure; and important challenges and questions the FHLBank System currently faces. Established in 1932 to facilitate the extension of mortgage credit, the FHLBank System has undergone significant statutory changes in the last 15 years. Between the 1930s and the 1980s, the System consisted primarily of thrift members that accepted advances from the FHLBanks. However, during the 1980s, hundreds of FHLBank member thrifts failed, forcing Congress to fundamentally reform the System through the Financial Institutions Reform, Recovery, and Enforcement Act of 1989 (FIRREA). For example, FIRREA permitted commercial banks to join the System. Although FIRREA is credited with strengthening the thrift industry and the System, concerns were raised during the 1990s about the System's capital structure. In particular, commercial bank members could remove stock from their FHLBank on 6-months notice, which raised concerns about the System's financial stability. Among other provisions, the Gramm-Leach-Bliley Act (GLBA) of 1999 created a more permanent and risk-based capital structure for the System. Due to these statutes and FHFB rulemaking, the FHLBank System has evolved substantially since 1990. For example, commercial banks now account for more than 70 percent of all System members. The composition of FHLBank System assets has also fluctuated considerably over the years. For example, FHFB authorized the FHLBanks to purchase mortgages directly from their members in the 1990s. The System's mortgage assets grew to about $113 billion at year-end 2003, representing about 14 percent of total assets. However, the rapid growth in System mortgage assets leveled off in 2004 as two FHLBanks experienced problems managing the interest-rate risks associated with holding mortgages on their books. As provided by GLBA, System capital is now more permanent as members generally must invest capital for a period of 5 years and the FHLBanks are subject to new leverage and risk-based capital requirements. The FHLBank System faces important challenges and questions going forward. For example, FHFB has called the FHLBanks' risk-management practices into question, particularly those related to mortgage purchase programs. Further, proposals to permit the FHLBanks to issue mortgage-backed securities (securitization) could help ensure the growth of the mortgage purchase business and improve risk management; however, these proposals raise questions regarding the FHLBanks' capacity to manage the related risks.
Additionally, there is limited empirical information available regarding the extent to which the System is fulfilling its housing and community development mission. Finally, questions have been raised regarding the potential negative effects that large financial institutions may have on the traditional cooperative structure of the FHLBank System and its programs designed to benefit targeted groups.
RWTMA includes provisions related to unobligated balances, client-level data, and ADAPs. Its unobligated balance provisions encourage grantees to obligate their grant funds in the year in which they were awarded. RWTMA provides that Part A and Part B grant funds are available for obligation for a one-year period beginning on the date funds first become available (referred to as the grant year for the award). RWTMA requires HRSA to cancel the unobligated balance of grant awards at the end of a grant year and to require grantees to return any amounts from such balances that have been disbursed to them. However, in the case of base grants, a grantee may submit a request to carry over the unobligated balance prior to the end of the grant year. If HRSA approves the request, the unobligated balance that is approved for carryover (carryover funds) is available to the grantee for expenditure for a one-year period beginning upon the expiration of the grant year (referred to as a carryover year). Under the RWTMA unobligated balance provisions, HRSA is required to cancel any unexpended balance of carryover funds at the end of the carryover year. HRSA must make the canceled balances from the grant awards (that is, funds that were not eligible or approved for carryover and carryover funds that remain after the carryover year) available as supplemental grants for the first fiscal year beginning after the fiscal year in which HRSA obtains the information necessary for determining the balance available. Part A grantees with greater than 2 percent of their base grant awards unobligated at the end of the grant year and Part B grantees with greater than 2 percent of their Part B and ADAP base awards unobligated at the end of the grant year incur a penalty. RWTMA requires HRSA to reduce the amount of those grants by the same amount as the unobligated balance for the first fiscal year beginning after the fiscal year in which HRSA obtains the information necessary for determining the unobligated balance. The grant funds that become available as a result of these reductions are also to be made available as supplemental grants. RWTMA's authorization of appropriations for base and supplemental grants under Parts A and B provided that amounts appropriated for a fiscal year would be available for obligation until the end of the second succeeding fiscal year. Further, under appropriations acts enacted since RWTMA, funds for grants under Parts A and B, to which the unobligated balance provisions apply, are available for obligation for a 3-year period. In fiscal year 2007, for example, funds were made available for obligation until September 30, 2009—the end of the 2009 federal fiscal year. Thus, as HRSA recognized in its guidance regarding the unobligated balance provisions, the initial obligation of funds, cancellation of unobligated balances, return of amounts disbursed to grantees, and the recompetition and redistribution of supplemental grants would need to occur within the 3-year window. In order to implement the RWTMA unobligated balance provisions, HRSA created a multistep process and issued a policy notice to grantees explaining it. HRSA's process for implementing the unobligated balance provisions in grant year 2007 included five steps. First, a grantee wishing to carry over funds was required to submit a carryover request to HRSA with an estimated unobligated balance of base grant funds 60 days prior to the end of the grant year.
In addition to the estimated unobligated balance, the initial carryover request also had to contain a viable plan and detailed budget for the use of the funds and a description of the grantee's capacity to use the funds within one grant year. Part A grantees had to submit their initial carryover requests to HRSA by January 1, 2008; Part B grantees, by February 1, 2008.

The second step of the 2007 grant year process was the evaluation of the initial carryover requests. HRSA authorized grantees that obtained approval before the end of the 2007 grant year to carry over 50 percent of the amount they requested in this initial carryover request. To authorize the use of the carryover funds, HRSA issued these grantees a notice of grant award explaining that HRSA had effectively transferred the carryover funds from their grant year 2007 account into their grant year 2008 account, though the balances remained, in effect, available to the grantees for obligation until the end of grant year 2007. HRSA officials explained that they did not authorize the full amount of the initial carryover request because they believed it was possible that the grantees that requested waivers would incur obligations greater than anticipated in the 60-day estimate. HRSA officials stated that they wanted to authorize the carryover of a portion of the unobligated balance so that grantees with approved carryover requests would have a longer period of time to obligate the carried-over funds.

For step three of HRSA's 2007 grant year unobligated balance process, HRSA required grantees to submit a Financial Status Report (FSR) 90 days after the end of the grant year. The FSR contains, among other things, a grantee's actual unobligated balance. For Part A grantees, FSRs were due on June 1, 2008; for Part B grantees, June 30, 2008. HRSA can extend grantees' FSR submission deadlines and granted extensions of 30 to 180 days.

For step four of the process, although not required by HRSA for grant year 2007, grantees could submit a final carryover request based on their actual unobligated balances. Grantees whose initial carryover requests had been approved, and that had been authorized by HRSA to carry over 50 percent of their estimated unobligated balances at that time, could apply for the remaining funds (the difference between the 50 percent they had already been authorized to carry over and their actual unobligated balance). HRSA then authorized the use of the additional carryover funds by issuing a notice of grant award.

For step five of this process, grantees with unobligated balances of greater than 2 percent of their grant year 2007 Part A, Part B, and ADAP base grants were assessed a penalty: a corresponding reduction in grant year 2009 funds. In addition, Part A and B grantees with unobligated balances of greater than 2 percent for grant year 2007 were ineligible to receive supplemental grants in grant year 2009: Part A grantees were ineligible for grant year 2009 Part A supplemental grants, and Part B base grantees were ineligible for grant year 2009 Part B supplemental grants. For Part B ADAP grantees, however, an unobligated balance of greater than 2 percent does not result in ineligibility for ADAP supplemental grants.
Instead, ineligibility for the ADAP supplemental grant occurs when a grantee has not obligated at least 75 percent of its ADAP grant award within 120 days of the award. Figure 1 shows a timeline for Part A and B grant distribution and the unobligated balance provisions.

HRSA canceled and recovered $13,764,295 in combined grant year 2007 Part B base and supplemental unobligated balances from 16 Part B grantees with unobligated balances of greater than 2 percent. In addition, these 16 grantees' grant year 2009 awards were reduced by a total of $19,677,483 as a penalty for incurring an unobligated balance of greater than 2 percent in grant year 2007. Of this, $4,441,865 was from Part B base grants and $15,235,618 was from ADAP base grants. (A simplified illustration of how the penalty rules operate appears at the end of this background discussion.)

Prior to RWTMA, HRSA used the Ryan White HIV/AIDS Program Data Report (RDR) to collect information on CARE Act services from grantees and their service providers. However, RDR was unable to collect client-level data with unique identifying information. Consequently, there was no way of knowing if the clients counted as being served by one provider were also included in the counts of those being served by other providers. Therefore, totaling the number of clients receiving services across providers could result in clients being counted more than once. Additionally, the lack of client-level data meant that HRSA was unable to assess the quality of care given to clients or sufficiently account for the use of CARE Act funds.

HRSA now collects client-level data to help ensure accountability of CARE Act funds. A client-level data collection and reporting system contains information unique to each client receiving CARE Act-funded services, such as socio-demographic characteristics, the services provided, and each client's current health status. Because the system collects client-specific information rather than only aggregate-level data, HRSA can obtain a more accurate measure of the number of clients being served than was available using RDR.

Each ADAP is given broad authority under the CARE Act to design its own program. The scope of an ADAP's coverage—who and what is covered—is determined by each ADAP's program design, which includes criteria such as the number and types of drugs it will provide to its clients and the income levels required to qualify for services. However, RWTMA required that each grantee establish an ADAP formulary that covers all core classes of antiretroviral medications.

ADAP grants totaled approximately $821 million in fiscal year 2009. Of this amount, $780 million was provided to grantees as ADAP base grants, which are awarded by formula based on a grantee's share of living HIV/AIDS cases. The remaining $41 million was distributed to grantees as ADAP supplemental grants, which go to ADAPs that demonstrate a severe need to increase the availability of HIV/AIDS drugs.

ADAPs must balance client need with available resources. In previous years, many ADAPs have had to institute waiting lists and other cost containment measures because of insufficient funds to provide services to all individuals who qualify. In our 2006 report, we found that in fiscal year 2004, 14 ADAPs had waiting lists of individuals they had determined were eligible for assistance but were unable to serve. According to NASTAD and the Henry J. Kaiser Family Foundation, since 2002 a total of 20 different ADAPs have had waiting lists at some point.
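To make the unobligated balance mechanics described above concrete, the sketch below walks through the 2 percent rule with hypothetical figures. It is a minimal illustration in Python of the rule as summarized in this report, not HRSA's actual accounting system; the function name and dollar amounts are invented for the example.

```python
# Illustrative sketch of the RWTMA 2 percent unobligated balance rule,
# using hypothetical figures. Not HRSA's actual system or data.

def unobligated_balance_penalty(base_award: float, unobligated: float,
                                threshold: float = 0.02) -> float:
    """Return the reduction applied to a later-year award.

    As described above, a grantee whose unobligated balance exceeds
    2 percent of its base award has a later award reduced by the full
    unobligated balance, not just the portion above the threshold.
    """
    if unobligated > threshold * base_award:
        return unobligated
    return 0.0

# Hypothetical grantee: $10 million base award, $500,000 left unobligated.
base_award = 10_000_000.0
unobligated = 500_000.0
penalty = unobligated_balance_penalty(base_award, unobligated)

print(f"Unobligated share: {unobligated / base_award:.1%}")  # 5.0%
print(f"Later-year award reduced by: ${penalty:,.0f}")       # $500,000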
The largest number of individuals on waiting lists across all grantees at any time was 1,629 in May 2004. However, NASTAD and the Henry J. Kaiser Family Foundation reported that there were no individuals on waiting lists as of September 2007. NASTAD, the Henry J. Kaiser Family Foundation, and others have cited several factors that contributed to the elimination of waiting lists as of that date, including HRSA's awarding of $39.5 million in ADAP supplemental grants in September 2007, states' increasing their contributions to ADAPs, and the continued implementation of Medicare Part D prescription drug coverage.

The lack of timely and accurate information has delayed HRSA's distribution of unobligated balances as supplemental grants and places at risk HRSA's ability to obligate these funds. HRSA attempted to develop timely information on grantee obligations but was unsuccessful in doing so for grant year 2007. The lack of timely and accurate information in grantees' FSRs regarding grant year 2007 unobligated balances has delayed HRSA's distribution of Part B supplemental grants and places at risk HRSA's ability to redistribute these funds by September 30, 2009, after which it will no longer have the authority to do so. Because of late FSR submissions, as of September 14, 2009, HRSA had not yet redistributed funds that it canceled and recovered from grantees' 2007 unobligated balances. However, as HRSA recognized in its guidance regarding the unobligated balance provisions, the entire process for canceling and recovering grant funds and making the corresponding awards of supplemental grants must occur within the 3-year period of availability of those Part B funds.

For HRSA's grant year 2007 process, Part A grantees were required to submit their FSRs by June 1, 2008, and Part B grantees by June 30, 2008. The FSR contains, among other information, a grantee's actual unobligated balance. HRSA uses the actual unobligated balances reported on grantees' FSRs to determine the total amount of unobligated balance funds available for redistribution through supplemental grants. Without complete, accurate, and timely information from grantees about their unobligated balances, HRSA is unable to redistribute unobligated balance funding through the Part A and Part B supplemental grants.

Many Part A and B grantees submitted their FSRs late, and some submitted their FSRs more than 120 days after the deadline. Of the 56 Part A grantees, 21 submitted FSRs after the June 1, 2008, deadline; of the 59 Part B grantees, 24 submitted FSRs after the June 30, 2008, deadline. Table 1 shows the number of days after the deadline that Part A and Part B grantees submitted their FSRs.

HRSA officials stated that grantees were often delayed in submitting their FSRs because of their end-of-year workload, which includes submitting grant applications and multiple reports for their formula and supplemental funding. HRSA officials stated that grantees normally request extensions for submitting their FSRs and that 60-day extensions are typically granted. HRSA officials stated that in grant year 2007, because of the new process HRSA implemented to address the unobligated balance provisions, grantees had to track separately the expenditure of current grant year base grant and supplemental funds and the expenditure of carryover funding from previous years, and that grantees had difficulty implementing this separate tracking.
HRSA officials stated that due to grantees' difficulty tracking funds separately, some grantees' FSRs reported inaccurate unobligated balances, which required HRSA staff to correspond with grantees and request revised information, creating additional delays. According to HRSA officials, in addition to experiencing difficulty tracking funds, grantees were dealing with other factors, including late receipt of final invoices from contractors, delays in receipt of ADAP rebates, and staff vacancies. While HRSA has typically approved grantees' requests for extensions in submitting their FSRs, the tardiness of grantees' FSR submissions and HRSA's need to correspond with grantees to address inaccuracies delayed HRSA's ability to determine the amount of unobligated balances available for redistribution through Part B supplemental grants. In April 2009, HRSA officials stated that they planned to distribute Part B supplemental grants in May 2009. However, as of September 14, 2009, HRSA had not distributed the 2009 Part B supplemental grants. As a consequence, HRSA had not yet fully implemented the unobligated balance provisions for the first time.

HRSA officials stated that they plan to implement changes to improve the timeliness of their process. For example, HRSA officials stated that beginning in grant year 2009 they will no longer approve grantees' requests for extensions for their FSR submissions. Additionally, beginning in grant year 2009, FSRs will be due 30 days after the end of the grant year instead of the grant year 2007 deadline of 90 days after the end of the grant year.

In its 2007 process, HRSA tried to develop timely information on grantees' unobligated balances, but these efforts were unsuccessful. For grant year 2007, in order to gain information on grantees' unobligated balances so that it could begin to determine how much funding would be available for distribution as supplemental grants, and so that it could provide grantees with a full year to obligate carryover funds, HRSA requested that grantees submit estimates of their unobligated balances 60 days before the end of the 2007 grant year. Because unobligated balance funds that grantees decide not to carry over, as well as unobligated balance funds from carryover requests that are not approved by HRSA, are available for redistribution through supplemental grants, HRSA officials needed to complete processing of the carryover requests before they could determine the amount of funding that could be made available as supplemental grants.

Many grantees' estimates of their unobligated balances in advance of the end of the grant year differed from their actual unobligated balances at the end of the grant year. In accordance with HRSA's requirements, many Part A and Part B grantees submitted estimates of their unobligated balances with requests to carry over these funds 60 days before the end of the grant year, but their estimates proved to be substantially different from the actual unobligated balances reported on their FSRs. Of the 29 Part A grantees that submitted initial carryover requests, when compared with the actual unobligated balances reported on their FSRs, 25 overestimated their unobligated balances, 2 underestimated them, and 2 estimated them correctly.
Nine of the 25 Part A grantees that overestimated their balances were ultimately able to obligate all of their funding by the end of the grant year and therefore did not need to carry over any funds. Of the 24 Part B grantees that completed initial carryover requests, 18 overestimated their unobligated balances relative to the actual balances reported on their FSRs, and 6 underestimated them. Nine of the 18 grantees that overestimated were ultimately able to obligate all of their funds by the end of the grant year and did not need to carry over any unobligated balances. Two of the grantees that overestimated, New York and New Jersey, overestimated their unobligated balances by more than the amount they received from HRSA based on their initial carryover requests and had to request that HRSA return the grant year 2007 carryover funds that the grantees had previously requested be transferred into their grant year 2008 accounts.

The process of approving grantees' initial carryover requests sometimes extended into the 2008 grant year. As a result, grantees were not authorized to use carryover funds at the expiration of the 2007 grant year, as provided for by RWTMA. HRSA officials stated that the implementation of procedures to process, approve, and authorize carryover funding required significant staff time from the HRSA project officer, grants management staff, and program managers. The HRSA process called for staff to review these initial carryover requests, approve them, and authorize carryover funding to be transferred from the grantees' 2007 accounts into their 2008 accounts. HRSA officials stated that the multiple grantee submissions, which often included revised proposals, resulted in processing delays and confusion for HRSA staff. On average, it took HRSA staff 3 months to complete processing of Part A grantees' initial unobligated balance carryover requests and 4 months for Part B grantees'. Because grantees were given only until the end of grant year 2008 to expend carryover funds, grantees that received authorization to carry over funds after the start of the grant year did not have the entire grant year to expend these funds. In light of HRSA's difficulty implementing procedures related to the submission of initial carryover requests and the differences between grantees' estimated and actual unobligated balances, HRSA has decided to discontinue its process of approving initial carryover waiver requests based on estimated unobligated balances.

HRSA has taken actions to collect client-level data by implementing a new data collection and reporting system. It has also provided financial and technical assistance to grantees and service providers implementing their own client-level data collection and reporting systems. In addition, HRSA developed a timeline for the submission of reports covering the initial reporting period using client-level data, but some grantees did not submit the initial reports by the deadline.

HRSA has taken actions to collect client-level data from CARE Act grantees and service providers. Beginning in December 2007, after the initial design and development of a client-level data collection and reporting demonstration project, HRSA held meetings with CARE Act grantees, national organizations, and federal agencies to discuss collecting and reporting client-level data.
Topics discussed included data collection and reporting barriers, data elements to be collected, how the data would be used, and the technical assistance that would be available from HRSA. Using information from these sessions, HRSA finalized the Ryan White HIV/AIDS Program Services Report (RSR), its client-level data collection and reporting system. RSR consists of three reports: the Grantee Report, the Service Provider Report, and the Client Report. HRSA submitted RSR to the Office of Management and Budget for approval in November 2008, and that office granted approval in March 2009 for HRSA to collect data from grantees and service providers using RSR.

HRSA stated that RSR will improve information on the clients served, the services provided to clients, and the outcomes of the services provided. RSR is designed to provide HRSA with a more accurate measure of the number of unique clients receiving CARE Act-funded services by assigning each individual an encrypted Unique Client Identifier, thereby allowing the tracking of individuals who receive services from multiple providers. (A brief illustrative sketch of such unduplicated counting appears later in this section.) Because RSR will contain client-specific data, HRSA will be able to determine the services each client received and the outcomes of these services.

RSR is part of a process through which HRSA plans to collect information, including client-level data, from grantees and service providers funded under CARE Act Parts A, B, C, D, and F. First, the grantees and service providers collect data using their own data collection systems. Second, the grantees and service providers report the data to HRSA in specified reports using RSR. HRSA has stated that it intends to use the data collected through RSR to generate reports on the use of CARE Act funds and the providers that receive them. HRSA reports are expected to provide client-level information on the characteristics of the clients served, the types of services they received from the provider, and their current health status. Additionally, HRSA has stated that it intends to conduct detailed analyses of national and regional information about clients and services.

HRSA provided financial assistance to CARE Act grantees to develop or adapt their client-level data collection and reporting systems so that they could submit the required information to RSR. Some grantees had to develop new systems, while other grantees' existing systems required modification to generate data compatible with the requirements of RSR. HRSA administered a Special Projects of National Significance (SPNS) initiative in fiscal year 2008 and another in fiscal year 2009 to provide funds to support CARE Act grantees in developing client-level data systems that could be used to report information to RSR. Under the fiscal year 2008 SPNS initiative, HRSA awarded 17 grants ranging from $87,000 to $200,000 to all 17 CARE Act Parts A and B grantees that applied for funding. Under the fiscal year 2009 SPNS initiative, HRSA awarded a total of approximately $4 million to all 57 Parts C and D grantees that applied for funding. Officials from 4 of the 17 health departments we interviewed stated that they received financial assistance from HRSA to develop and implement a client-level data collection and reporting system. Two of these health departments received $200,000 each.
One of these health departments used the funding to help build its own new system, while the other used the funding to adapt its current system to be compatible with CAREWare, a free data collection system available through HRSA's Web site. In addition to the SPNS funds, HRSA has made other funding available for infrastructure development. In 2008, HRSA provided a total of more than $1 million to 15 CARE Act Part C grantees that included funds for them to develop their client-level data systems. As of April 2009, HRSA was reviewing 72 applications for infrastructure development grants.

HRSA also provided technical assistance to CARE Act grantees and service providers to develop client-level data collection and reporting systems. HRSA established the Technical Assistance Resources, Guidance, Education & Training Web site to provide information and resources, such as help desk support. HRSA conducted training sessions and webcasts to provide information on issues relating to implementing a client-level data system. Additionally, HRSA established the RSR Triage Committee to monitor and address the technical assistance needs of grantees. The committee meets weekly to discuss grantees' technical assistance concerns and monitors contractors charged with addressing technical concerns on behalf of HRSA. Officials from 7 of the 17 health departments we interviewed told us that they received technical assistance from HRSA to develop and implement a client-level data collection and reporting system. For example, one state grantee told us that HRSA provided a 2-day training session on CAREWare in November 2008; the HRSA official returned in March 2009 to provide assistance in implementing the CAREWare system.

The state and local health departments that we interviewed have taken steps to implement a client-level data collection and reporting system that can report client-level data to RSR. Officials from all 17 health departments we spoke with stated that they either already had a client-level data system or were implementing such a system. Officials from 6 health departments indicated that they either currently use or plan to use CAREWare; the other 11 said they use or plan to use a customized or vendor-distributed client-level data system. Officials from 8 of the 17 departments stated that they had a system to collect client-level data before HRSA's requirement to implement such a system.

Officials from 10 of the 17 health departments we interviewed had concerns or challenges with implementing a client-level data collection and reporting system and reporting client-level data to HRSA. For example, officials from three health departments stated they were concerned about how to train service providers and other partners to collect client-level data. An official from 1 of these 3 health departments mentioned that it had been a challenge for his state to train the 100 case managers in the state to report client-level data in a consistent manner. Additionally, officials from three departments stated that they were concerned about potential breaches in the confidentiality of client information when data are entered into the RSR system.

HRSA developed a timeline for grantees to submit their initial reports to RSR, but some grantees did not submit initial reports. The initial RSR reporting period covered January 1, 2009, through June 30, 2009; however, the deadlines varied for the different reports.
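Before turning to the report schedule, the unduplicated counting that RSR's encrypted Unique Client Identifier enables (referenced earlier in this section) can be sketched briefly. The Python fragment below is illustrative only: the hashed-identifier scheme, the fields chosen, and the client records are assumptions made for the example, not HRSA's actual algorithm or data.

```python
# Illustrative only: derive a one-way client identifier and count unique
# clients across providers. NOT HRSA's actual Unique Client Identifier
# algorithm; fields and hashing here are assumptions for the example.
import hashlib

def client_id(first_name: str, last_name: str, birth_date: str) -> str:
    """Return a one-way (hashed) identifier derived from client attributes."""
    raw = f"{first_name.lower()}|{last_name.lower()}|{birth_date}"
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()[:16]

# Two providers report overlapping clients (hypothetical records).
provider_a = [("Ana", "Diaz", "1970-01-02"), ("Bo", "Lee", "1985-05-09")]
provider_b = [("Ana", "Diaz", "1970-01-02"), ("Cy", "Oh", "1962-11-30")]

# Aggregate counts, as under the old RDR, double-count Ana: 2 + 2 = 4.
aggregate_count = len(provider_a) + len(provider_b)

# Client-level identifiers allow an unduplicated count: 3 unique clients.
unique_count = len({client_id(*c) for c in provider_a + provider_b})

print(aggregate_count, unique_count)  # prints: 4 3
```

The one-way property of such an identifier lets a central system recognize a repeat client across providers without storing names directly, which also bears on the confidentiality concerns some health departments raised.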
Table 2 provides a description of the reports to be submitted to RSR by grantees and service providers and the deadline for the initial reporting period for each report. While most grantees submitted a Grantee Report to HRSA by the July 31, 2009, deadline, some did not do so. For the initial RSR reporting period, 538 of 638 (about 84 percent) CARE Act grantees submitted Grantee Reports to HRSA by the deadline. According to HRSA officials, as of August 13, 2009, of the 100 grantees and service providers that had not submitted their required reports, 50 had started the submission process and 50 had not begun. HRSA officials told us that they are contacting the grantees to determine the cause of the reporting delays. HRSA officials also stated that they are aware that some grantees have had problems generating data in the RSR-required format.

The number of individuals on ADAP waiting lists increased during grant year 2008 and has continued to increase in 2009. In the first quarter of grant year 2008 (April 1, 2008, through June 30, 2008), 2 ADAPs had waiting lists with a total of 55 people on those lists. In the fourth quarter of grant year 2008 (January 1, 2009, through March 31, 2009), there were 3 ADAPs with waiting lists, and the number of individuals on the lists had increased to 112. By August 10, 2009, the most recent date for which data were available at the time of our analysis, these numbers had grown to 136 individuals on 4 ADAP waiting lists. Overall, this represents an increase of 147 percent (from 55 to 136) in the number of individuals on waiting lists from the first quarter of grant year 2008 to August 2009. Kentucky, Montana, Nebraska, and Wyoming all had waiting lists in August 2009; Nebraska had the largest ADAP waiting list, with 71 individuals, while Wyoming had the smallest, with 5. Five ADAPs had waiting lists at some point during the period we examined: Montana had a waiting list at all three points in time, Indiana and Nebraska at two points, and Kentucky and Wyoming at one. Table 3 lists the grantees with ADAP waiting lists and the number of individuals on those lists.

We also found that the total number of individuals enrolled in ADAPs increased during grant year 2008. In the first quarter of grant year 2008, 164,849 individuals were enrolled in ADAPs; by the fourth quarter, enrollment had grown to 177,746, an increase of 7.8 percent. Similarly, the number of individuals receiving at least one medication from an ADAP increased, from 121,075 in the first quarter of grant year 2008 to 134,019 in the fourth quarter, an increase of 10.7 percent.

The increase in the number and size of ADAP waiting lists, as well as the increase in the number of individuals enrolled in and receiving medications through ADAPs, indicates increased financial pressure on ADAPs as they balance client needs with available resources. HRSA officials told us that because of financial pressures they are closely monitoring five ADAPs—Arizona, Arkansas, California, Iowa, and Kentucky—for the initiation or expansion of waiting lists or other cost-control measures. For example, Arkansas is considering establishing an ADAP waiting list, while Kentucky projects that additional individuals will be added to its waiting list.
Arizona's ADAP reduced the number of drugs on its formulary effective July 1, 2009, because of a budgetary shortfall. Additionally, Arizona's ADAP still anticipates a budgetary shortfall this grant year even with the reduced formulary and is considering additional cost-control measures. ADAP officials we interviewed also indicated that ADAPs were under increasing financial pressure. For example, Hawaii officials expressed concern that they will have to establish a waiting list; they stated that they are facing higher drug prices and an increasing number of people enrolled in their ADAP. Washington state officials noted that they are facing ADAP budget constraints, and an advisory committee has developed a number of possible cost-control measures to stay within budget, including reducing the number of drugs on the ADAP formulary and reducing payments to pharmacies and medical laboratories.

HRSA has been working to implement the unobligated balance provisions of RWTMA since the law's enactment in December 2006. As a result of the requirement to cancel unobligated balances and, in some cases, penalize grantees, HRSA implemented complex processes that have been difficult for grantees to comply with, thus delaying HRSA's first implementation of the requirement. To implement the unobligated balance provisions, HRSA has required information on the amount of unobligated balances at the end of the grant year that some grantees either did not provide in a timely manner or that was inaccurate, or both. Three years after enactment of RWTMA, HRSA was continuing to develop its process for implementing the provisions and making adjustments based on some grantees' continued inability to comply with the process that HRSA established. In addition, at least one key provision, the use of Part B supplemental grants to redistribute unobligated funds, has yet to be implemented for the first time. Because funds for these grants are only available until September 30, 2009, HRSA is at risk of losing the authority to make these grants.

HRSA officials told us that, for grant years 2008 and 2009, they have changed their process for implementing the unobligated balance provisions in order to alleviate the burden on staff and to ensure that HRSA has the information it needs to implement the provisions in a timely manner. However, even with a changed process, HRSA will continue to depend upon grantees to provide useful information on their unobligated balances in a timely manner, which will not happen if grantees continue to submit information after the required deadlines. HRSA must have complete, accurate, and timely information from grantees to complete the entire process of redistributing unobligated balances as supplemental grants within the period given for obligation of funds under Parts A and B of the CARE Act.

To help ensure that HRSA is able to implement the unobligated balance provisions in a timely manner, we recommend that the Secretary of HHS instruct the Administrator of HRSA to take the following two actions to obtain timely and accurate information on grantees' unobligated balances: (1) identify the causes of grantees' difficulties in providing a timely and accurate accounting of their unobligated balances, and (2) ensure that grantees adhere to deadlines for submitting their unobligated balances by developing steps to assist them in overcoming the identified causes of those difficulties.
HHS reviewed a draft of the report but did not comment on our conclusions and recommendations. HHS' comments are reprinted in appendix I. We incorporated HHS' technical comments as appropriate.

We are sending copies of this report to the Secretary of Health and Human Services. The report is also available at no charge on GAO's Web site at http://www.gao.gov. If you or your staffs have any questions, please contact me at (202) 512-7114 or crossem@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other staff who made major contributions to this report are listed in appendix II.

In addition to the contact named above, Thomas Conahan, Assistant Director; Robert Copeland, Assistant Director; Leonard Brown; Romonda McKinney Bumpus; Cathleen Hamann; Sarah Resavy; Rachel Svoboda; and Jennifer Whitworth made key contributions to this report.
Under the CARE Act, funds are made available to assist over 530,000 individuals affected by HIV/AIDS. Grantees directly provide services to individuals (clients) or arrange with service providers to do so. The Department of Health and Human Services' (HHS) Health Resources and Services Administration (HRSA), which administers CARE Act programs, is required to cancel balances of grants that are unobligated after one year and redistribute amounts to grantees in need. HRSA began to collect client-level data in 2009. Under the CARE Act, states and territories receive grants for AIDS Drug Assistance Programs (ADAP), which provide HIV/AIDS drugs. GAO was asked to examine elements of the CARE Act. In this report, we review (1) HRSA's implementation of the unobligated balance provisions, (2) HRSA's actions to collect client-level data, and (3) the status of ADAP waiting lists. GAO reviewed reports and agency documents and interviewed federal officials, officials from 13 state and 5 local health departments chosen based on location and number of cases, and other individuals knowledgeable about HIV/AIDS.

The lack of timely and accurate information reporting by grantees has delayed HRSA's distribution of certain grants and has placed at risk HRSA's ability to obligate these funds. The late submission of actual unobligated balances for the 2007 grant year delayed HRSA's ability to determine grantees' unobligated balances and redistribute these funds to other grantees. A number of grantees were late in their submissions; for example, 21 of the 56 metropolitan areas submitted their information after the date initially set by HRSA. Additionally, some grantees reported inaccurate unobligated balances, which required HRSA staff to correspond with grantees and request revised information, creating additional delays. HRSA is authorized to obligate fiscal year 2007 funds for a 3-year period and is at risk of losing the authority to make grants from these funds. HRSA officials said they have made changes to how they implement the unobligated balance provisions in an effort to avoid these issues in the future.

HRSA has taken actions to collect client-level data by implementing a new data collection and reporting system. However, some grantees and service providers did not submit the initial reports by HRSA's deadline. HRSA set a July 31, 2009, submission deadline for grantees' initial reports, but 100 of 638 grantees did not meet this deadline. Client-level data include information such as the dates clients were served, the types of services provided, and the clients' health status. HRSA has implemented a system to collect data on the number of unique clients from grantees and service providers that will allow HRSA to determine the services each client received and the outcomes of these services. In order for HRSA to collect this information, grantees and service providers must first collect the data using their own systems, and HRSA has provided technical and financial assistance so that they can develop these systems. For example, under a project initiated in 2009, HRSA awarded approximately $4 million to CARE Act grantees for the development of their own client-level data collection systems.

The number of ADAPs with waiting lists and the number of individuals on those lists are increasing.
In the first quarter of grant year 2008 (April 1, 2008, through June 30, 2008), 2 ADAPs had waiting lists with a total of 55 people on those lists; this grew to 3 ADAPs and a total of 112 people in the fourth quarter of the year, and increased to 4 ADAPs and 136 individuals in August 2009. Kentucky, Montana, Nebraska, and Wyoming were each maintaining a waiting list for ADAP services in August 2009; Nebraska had the largest number of individuals (71), and Wyoming had the smallest number (5). ADAP officials expressed concern that they will have to establish or expand waiting lists or implement other cost-control measures, such as limiting the number of drugs they make available.
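The percentage changes cited in this summary and in the body of the report follow directly from the reported counts. As a consistency check only, the short Python sketch below reproduces them; the helper function is illustrative, and the figures are taken from the report.

```python
# Reproduce the percentage changes cited above from the reported counts.
def pct_change(old: int, new: int) -> float:
    """Percent change from old to new."""
    return (new - old) / old * 100

print(f"{pct_change(55, 136):.0f}%")           # individuals on waiting lists: 147%
print(f"{pct_change(164_849, 177_746):.1f}%")  # ADAP enrollment, grant year 2008: 7.8%
print(f"{pct_change(121_075, 134_019):.1f}%")  # clients receiving medication: 10.7%
```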
The GOES satellite system, which has been operational since 1975, plays a critical role in weather forecasting. The continuous availability of GOES data is vital to the success of the National Weather Service's (NWS) approximately $4.5 billion systems modernization program. GOES is one of two weather satellite systems operated by NOAA; the other is a system of polar-orbiting satellites. Unlike the polar satellites, geostationary weather satellites are placed into a special orbit that allows them to continuously maintain the same view of the earth's surface. Thus, they are uniquely positioned to observe the development of hazardous weather, such as hurricanes and severe thunderstorms, and track its movement and intensity so that major losses of property and life can be reduced or avoided. Further, the unique ability of geostationary satellites to provide broad, continuously updated coverage of atmospheric conditions over land as well as oceans is very important to NOAA's weather forecasting operations.

NOAA's operating strategy calls for two GOES satellites to be active at all times—one satellite to observe the Atlantic Ocean and the eastern half of the U.S., and the other to observe the Pacific Ocean and the western part of the country. Figure 1.1 shows the coverage provided by two GOES satellites.

GOES satellites have two primary instruments for collecting weather data: an imager and a sounder. The imager is akin to a camera; it collects data in the form of digital images of the earth or some part of it, based on radiation that is sensed at five different spectral wavelengths or "channels," including four in the infrared range and one that corresponds to visible light. Forecasters use animated sequences of imager data to track the development of various weather phenomena. The sounder is mechanically similar to the imager but receives data much more slowly and is sensitive to a broader range of spectral wavelengths. The sounder's sensitivity to 19 different channels allows it to collect data on a number of natural variables, such as temperature and humidity, and attribute those measurements to specific levels of the earth's atmosphere. The data from both the imager and sounder are relayed to a ground station at Wallops Island, Virginia, which processes the data to make them usable by weather forecasters. The data are then retransmitted back up to the GOES satellites, which broadcast them to the weather forecasting community.

NOAA has never been directly responsible for the design and development of any of its meteorological satellites. Instead, the agency has relied on NASA's expertise in spacecraft design and development. After NOAA defines user requirements for its satellite systems, it turns them over to NASA to contract with industry to design and develop satellites that meet NOAA's needs. NASA launches and tests the satellites, which are subsequently turned over to NOAA to operate. Beginning in the 1970s, NASA had a formal ongoing program, called the Operational Satellite Improvement Program (OSIP), to develop and demonstrate experimental versions of advanced meteorological satellites and instruments. Successful designs from the OSIP program were often incorporated into NOAA's operational satellite systems. OSIP was terminated in 1981 due to budgetary constraints at NASA; however, NASA continues to act as the procurement agent for NOAA's weather satellites.

Even though GOES satellites have been operational for over 20 years, only one major design change has been implemented.
The first generation design was developed and operated experimentally by NASA in the 1960s and early 1970s and subsequently became the basis for the first operational satellites, GOES-1 through GOES-7. Figure 1.2 is an illustration of the first generation design. This series of satellites was "spin-stabilized," meaning that the satellites slowly spun while in orbit to maintain a stable position with respect to the earth. While these satellites operated effectively, they had technical limitations that NOAA wished to eventually overcome. The imager and the sounder on these satellites shared the same telescopic viewing apparatus and could not collect data at the same time. Further, because the satellite was spinning, it had to collect data very slowly, capturing one narrow band of data each time its field of view swung past the earth. A complete set of sounding data, for example, took 2 to 3 hours to collect.

In 1982, NWS sponsored a review of what new technologies were available and what additional missions could be performed by a new generation of geostationary satellites. The review was supported by NOAA's National Environmental Satellite, Data and Information Service (NESDIS) as well as by NASA's Goddard Space Flight Center and industry representatives. Based on input from these sources, requirements for a new generation spacecraft were developed.

The new spacecraft design, called GOES-Next, was a significant departure from the first generation GOES. For example, GOES-Next was to be "body-stabilized." This meant that the satellite would hold a fixed position in orbit relative to the earth, allowing for continuous meteorological observations. Instead of maintaining stability by spinning, the satellite would preserve its fixed position by continuously making small adjustments in the rotation of internal momentum wheels or by firing small thrusters to compensate for drift. Further, the imager and sounder would be completely separate, so that they could function simultaneously and independently. These and other enhancements meant that the GOES-Next satellites would be able to collect significantly better quality data more quickly than the older series of satellites. However, the improvements would come at the expense of a heavier and more complex spacecraft. Figure 1.3 is an illustration of the GOES-Next design.

Although GOES-Next represented a complete redesign of NOAA's geostationary satellite system, satellite industry observers told us that the technical risks involved in developing GOES-Next appeared in the early 1980s to be manageable. Polar-orbiting meteorological spacecraft had already evolved from spin-stabilized to body-stabilized designs, and the GOES-Next builder, Ford Aerospace, had already built a body-stabilized geostationary meteorological satellite for India. Furthermore, the instrument manufacturer, ITT Corporation, had proposed designs that were closely based on successful imagers and sounders it was building for NOAA's polar-orbiting satellites. On this basis, NOAA did not authorize, and NASA did not require, engineering analysis prior to GOES-Next development work.

Despite the spacecraft and instrument design heritage, the GOES-Next program experienced severe technical problems, massive cost overruns, and dangerous schedule delays. Technical issues that had seemed straightforward when the spacecraft design was being conceptualized proved to be substantially more difficult to implement.
For example, the original design did not sufficiently take into consideration the harshness of geostationary orbit, which is subject to large daily temperature variations that can stress and warp ordinary materials. Accordingly, the scan mirrors on the instruments had to be completely redesigned using other materials. It was also discovered that it would be very difficult to establish the fine pointing necessary to meet requirements for accurately mapping the satellite's detailed images to their exact position on earth.

These and other problems led to an increase of over 200 percent in NOAA's estimate of the overall development cost of the GOES-Next program—from $640 million in 1986 to $2.0 billion in 1996. Also, the first launch of a GOES-Next satellite, which had been planned for July 1989, did not occur until April 1994. This nearly 5-year schedule slip left NOAA in real danger of temporarily losing geostationary satellite data coverage. Fortunately, due to the exceptional robustness of the last remaining first-generation satellite, GOES-7, as well as the use of a borrowed European satellite, NOAA was able to avoid a gap in coverage. GAO reported in 1991 that design complexity, inadequate management of the program by NASA and NOAA, and poor contractor performance all contributed to the cost, schedule, and technical problems experienced by the GOES-Next program. Although some technical problems remain, the first two of these satellites, GOES-8 and GOES-9, are now producing useful, high-quality weather data daily.

The GOES-Next contract with Space Systems/Loral (the successor to Ford Aerospace's space business) is for five spacecraft, designated GOES-I through GOES-M. Once the first two in the series, GOES-I and GOES-J, were successfully launched and placed in orbit, they were redesignated GOES-8 and GOES-9, respectively. The other three spacecraft in the GOES-Next series, GOES-K, GOES-L, and GOES-M, are in various stages of production. The GOES-K spacecraft has been completed and is scheduled for launch in April 1997. If GOES-8 and GOES-9 are still operational then, GOES-K will be stored at a central location in orbit and activated when either of its two predecessors fails. GOES-M and GOES-L are planned to be launched in 2000 and 2002, respectively. GOES-M, which has a stronger frame than the other satellites in the series, will be launched ahead of GOES-L in order to accommodate a new and heavier secondary instrument for measuring the space environment, called the Solar X-ray Imager.

In February 1996, the House Committee on Science, Subcommittee on Energy and Environment, requested that we review NOAA's management of the GOES Program. On the basis of subsequent discussions with subcommittee staff, our specific objectives were to assess (1) the agency's strategy for procuring continuation series satellites, (2) what steps the agency should be taking now to prepare for the next generation series of satellites, and (3) whether the potential exists for improving the system and reducing costs in the long term.

To meet our objectives, we reviewed NOAA and NASA documents regarding GOES historical background, current status, mission operations, spacecraft and instrument improvements, ground systems, future procurement strategies, and proposed technology infusion, as well as NOAA cost and budget documents and NASA Program Operating Plans.
In addition to discussing these issues with agency officials from NOAA and NASA, we met with a broad range of representatives from academia and industry. Staff also attended a 3-day conference on "GOES-8 and Beyond," sponsored by the International Society for Optical Engineering.

Specifically, with regard to the continuation series procurement strategy, we obtained and analyzed information from NOAA and NASA satellite acquisition officials. We discussed our analysis and obtained additional information from industry representatives of Hughes Space and Communications Company, El Segundo, California; Lockheed Martin Corporation, Sunnyvale, California; and Space Systems/Loral, Palo Alto, California.

Regarding what steps the agency should be taking now to prepare for the next generation series of satellites, we obtained information from researchers and other officials at a range of NOAA and NASA facilities, including: NOAA System Acquisition Office, Silver Spring, Maryland; NOAA NESDIS GOES Program Office, Suitland, Maryland; NOAA NESDIS Cooperative Institute for Meteorological Satellite Studies, Madison, Wisconsin; NOAA NESDIS Cooperative Institute for Research in the Atmosphere, Ft. Collins, Colorado; NOAA NWS Headquarters, Silver Spring, Maryland; NOAA NWS Weather Forecast Offices in Sullivan, Wisconsin; Denver, Colorado; and Pueblo, Colorado; NOAA Forecast Systems Laboratory, Boulder, Colorado; NWS Cooperative Program for Operational Meteorology, Education, and Training, Boulder, Colorado; and NASA GOES Project Office, Goddard Space Flight Center, Greenbelt, Maryland.

Regarding the potential for improving the GOES system while reducing costs in the long run, we began by obtaining information from NOAA and NASA officials at the sites listed above. We analyzed this information and sought additional input from representatives of industry and academia, including: Aerospace Corporation, El Segundo, California; Applied Physics Laboratory, Johns Hopkins University, Laurel, Maryland; Ball Aerospace & Technologies Corporation, Boulder, Colorado; Hughes Space and Communications Company, El Segundo, California; Lockheed Martin Corporation, Sunnyvale, California; MITRE Corporation, McLean, Virginia; National Research Council, Washington, D.C.; Northrop Grumman Corporation, Baltimore, Maryland; Space Systems/Loral, Palo Alto, California; TRW Space and Electronics Group, Redondo Beach, California; and University Corporation for Atmospheric Research, Boulder, Colorado.

We were unable to perform a detailed audit of the cost of the continuation series and next generation satellites because cost information was unavailable. A budget figure of $2.2 billion for a program to build four spacecraft had been estimated within NOAA for the fiscal year 1997 budget. However, during our audit, NOAA restructured the program and its procurement strategy on two different occasions, each of which resulted in different cost estimates. At the time we concluded our review, NOAA's System Acquisition Office, which will manage the continuation series procurement, did not have an official estimate for the overall cost of the program.

We conducted our review from March 1996 through February 1997, in accordance with generally accepted government auditing standards. We requested written comments on a draft of this report from the Secretary of Commerce. The Secretary provided us with written comments that are discussed in chapters 2 and 3 and are reprinted in appendix I.
Based on the best available analysis, the potential for a gap in geostationary satellite weather coverage will be significant in the early years of the next century if procurement of new satellites does not begin soon. Although three satellites in the current series are still in production and scheduled for launch over the next 5 years, designing and producing an entirely new spacecraft would take much longer—approximately 10 years, according to aerospace experts. Accordingly, NOAA plans to procure at least two "continuation series" spacecraft that will carry the same meteorological instruments as the current spacecraft and incorporate only limited technical improvements. NOAA expects this approach to allow for development of the new spacecraft within 5 years.

Calculating the quantity and need dates for the continuation series is a complex process involving factors that cannot be precisely defined. Although NOAA has determined that it will need the first continuation series satellite in 2002, the actual date that a replacement satellite is launched may be different. According to NOAA officials, a major risk for any satellite program is the chance that a spacecraft launch will fail, necessitating that future planned launches be moved up to try to compensate for the lost spacecraft. Unexpected component failures on operational satellites—such as GOES-8 and GOES-9 have recently experienced—can also advance the need dates for future satellites. Conversely, a string of successful launches and robust, long-lived satellites can significantly delay the need for new satellites. Once a change in needs is identified, scheduling a new launch may be constrained by the unavailability of flight-ready replacement spacecraft, launch vehicles and facilities, or funding to support a launch. Given these risks and uncertainties, NOAA's procurement strategy, which calls for two continuation series spacecraft to be built but includes separate options to build two additional spacecraft, provides a reasonable degree of flexibility to cope with unexpected schedule changes.

We identified several shortcomings in NOAA's spacecraft planning process that, if remedied, could lead to better planning in the future. First, the need for the continuation series arose because planning for a follow-on series has been repeatedly deferred since it was first attempted in 1989. Second, NOAA's official policy for replacing satellites that experience partial failures is unclear, increasing the uncertainty about when replacements will be needed. Third, NOAA does not have a consistent policy for providing backup in the event of a launch failure. More consistent policies for replacing partially failed spacecraft and backing up launches would provide better assurance of meeting future needs with minimal risk.

In order to procure continuation series spacecraft quickly, NOAA plans to minimize design changes from the current series. The same meteorological instruments as the current series will be used, and the spacecraft itself (called the spacecraft "bus") will be very similar. According to government and industry officials, limiting the amount of new design work should make an accelerated procurement feasible. NOAA, working through NASA, its procurement agent, has already negotiated a contract with the instrument manufacturer, ITT Corporation, to deliver up to four additional sets of GOES imagers and sounders to be flown on the continuation series satellites.
NOAA and NASA also plan to issue a Request for Proposals soon for two to four spacecraft buses and expect several manufacturers to submit bids. In most cases, bids are likely to be based on modified versions of standard spacecraft buses that manufacturers have developed to satisfy commercial needs for geostationary communications satellites. NOAA and NASA plan to negotiate a firm fixed-price contract with the winner of the spacecraft bus competition.

Although the instruments on the continuation series spacecraft will be identical to those currently in use, the spacecraft buses will not. The current spacecraft bus, which was designed by Space Systems/Loral in the mid-1980s, has never been able to fully meet NOAA's original GOES-Next specifications for spacecraft pointing. Designing the spacecraft to point very precisely at the earth and maintain that precise orientation is important because it allows the data collected by the instruments, especially the imager, to be mapped very accurately to their exact location on the surface of the earth. Because the GOES-Next spacecraft has been unable to achieve the originally required precision, extra work routinely needs to be done by spacecraft operators to correct for errors in mapping GOES data to their proper position over the earth's surface. According to NASA and NOAA officials, improvements in pointing accuracy made in commercial spacecraft buses since the time that the GOES-Next design was finalized will better meet original GOES-Next specifications and are expected to be incorporated into the continuation series spacecraft. Other, relatively minor improvements are expected in the spacecraft buses as well. For example, an improved power system, based on more recent battery technology, should reduce certain brief observation gaps that occur periodically with the current design.

NOAA considered several other approaches before arriving at its current procurement strategy. Originally, NOAA intended to procure four or five additional "clones" of the current spacecraft from Loral on a sole-source basis. The clones would have been largely identical to the current spacecraft, using new parts only in cases where original parts were no longer available. However, NASA and NOAA officials jointly concluded that the government would not be justified in avoiding a competitive procurement, and this strategy was dropped. NOAA then considered buying just one or two clones from Loral, to be followed by a competitive procurement for a continuation series. In September 1996, we reported that significant cost savings were not expected from the sole-source clone procurement and that requirements for a follow-on system had not been determined. Because of concerns raised by us and others, NOAA eventually abandoned this second strategy as well.

NOAA's current strategy has advantages over earlier approaches that involved buying clones of the GOES-Next spacecraft. As discussed above, procuring a new spacecraft bus will allow NOAA to take advantage of technical improvements that have already been developed for commercial customers, such as greater pointing accuracy and a more capable power subsystem. In addition, use of a competitively awarded, firm fixed-price contract can be expected to help control or reduce costs.
While moving to a fully competitive procurement approach for the continuation series, NOAA is also planning to reserve the option to obtain an additional satellite in the current series in the event that one is needed before the first satellite in the continuation series can be completed. To do this, NOAA and NASA are negotiating a "warranty option" as an extension to the current contract with Space Systems/Loral. Under this arrangement, NASA will contract with Loral to procure necessary long-lead-time parts so that it is ready to build an extra spacecraft of the current type, if such a spacecraft is needed due to (1) the premature failure of either GOES-8 or GOES-9, which were designed to last 5 years each, or (2) a launch failure of the GOES-K spacecraft in April 1997. Should either of these occur, NOAA plans to advance the launches of GOES-L and GOES-M and subsequently launch the warranty spacecraft to ensure continuity until the first continuation series spacecraft is available. NOAA and NASA will determine by mid-1998 whether to exercise this warranty option and complete construction of the additional spacecraft.

NOAA does not yet know what the continuation series will cost. A budget figure of $2.2 billion for a program to build four spacecraft had been estimated within NOAA for the fiscal year 1997 budget. However, as discussed above, NOAA restructured the program and its procurement strategy on two different occasions, each of which resulted in different cost estimates. At the time we concluded our review, NOAA's System Acquisition Office, which will manage the continuation series procurement, did not have an official estimate for the overall cost of the program.

Calculating the quantity and need dates for the continuation series satellites is a complex process involving factors that cannot be precisely defined. Although NOAA has determined that it will need the first one in 2002, the actual date that a replacement satellite is launched may be different. Figure 2.1 shows NOAA's planned GOES launch schedule. A major risk for any satellite program is the chance that a spacecraft launch will fail, necessitating that future planned launches be moved up to try to compensate for the lost spacecraft. Unexpected component failures on operational satellites—such as GOES-8 and GOES-9 have recently experienced—can also advance the need dates for future satellites. Conversely, a string of successful launches and robust, long-lived satellites can significantly delay the need for new satellites. Once a change in needs is identified, scheduling a new launch may be constrained by the unavailability of flight-ready replacement spacecraft, launch vehicles and facilities, or funding to support a launch. Given these risks and uncertainties, NOAA's procurement strategy, which calls for two spacecraft to be built but includes separate options to build two additional spacecraft, provides a reasonable degree of flexibility to cope with unexpected schedule changes.

The risk of launch failure is significant in any spacecraft program. NOAA and NASA officials have told us that a failure rate of one in five launches is a reasonable estimate for the GOES program; a simple illustration of what this rate implies appears at the end of this discussion. NOAA has factored this risk into its launch schedule by designating the GOES-L launch in 2002 as a "planned failure." GOES-L will be the fifth and last in the current (GOES-Next) series.
Because NOAA assumes for planning purposes that the GOES-L launch will fail, it is planning to have the next spacecraft (the first in the continuation series) ready for launch at the same time. NOAA officials told us that it is especially important to plan for the next spacecraft to be available when GOES-L is launched because it will be the first in a new series and may be vulnerable to schedule delays caused by development problems. Conservatively scheduling its launch at the same time as GOES-L is one way to try to compensate for the risk of development delays. However, the success of other launches, especially the launch of GOES-K in April 1997, will also be of critical importance. If the GOES-K launch were to fail, NOAA could risk a gap in coverage between 1998 and 2000. NOAA GOES program officials told us that if this situation were to occur, they would attempt to move up the GOES-L or GOES-M launches to reduce the length of the coverage gap.

Unexpected component failures are another source of risk to the launch schedule. GOES-8 and GOES-9, for example, are now expected to operate for only 3 years because of several technical problems that were unforeseen when they were launched. The two satellites were launched in April 1994 and May 1995, respectively, and had been designed to last 5 years each. The most serious of the technical problems is a tendency of the motor windings within the satellites' meteorological instruments to break due to thermal stress. Each of a satellite's two instruments has a primary and a backup motor winding. If both windings fail, the instrument cannot operate. The 3-year lifetime for GOES-8 and GOES-9 was determined in mid-1996, after one winding (out of a total of four) had already failed on each spacecraft. If the revised predictions for the lifetimes of GOES-8 and GOES-9 are accurate, NOAA runs the risk of having only one operational satellite (GOES-K, assuming it is successfully launched in April 1997) between 1998 and 2000. As described above for launch failures, if this situation were to occur, NOAA officials would attempt to move up the GOES-L or GOES-M launches to reduce the length of the coverage gap. They would also likely exercise the warranty option on the GOES-Next contract to ensure continuity until the first continuation series satellite were available.

Although it is possible to move up scheduled launches, NOAA officials say that it is difficult to do so for several reasons. First, the spacecraft itself must be ready for launch at the earlier date, which may not be practical if integration and ground testing have not been completed well in advance of the previously anticipated launch date. Second, only a limited number of commercial launch opportunities (usually six) are available each year for the Atlas launch vehicle that GOES spacecraft are designed to use. Most, if not all, of those launch opportunities are reserved far in advance. To move a launch forward, NOAA officials need to find another scheduled launch that can be deferred and replaced by the GOES spacecraft. Third, it may be difficult to move a launch forward from one fiscal year to another because funding may not be available to support a launch. NOAA officials told us that a GOES launch costs approximately $25 million (not including the cost of the Atlas IIA launch vehicle itself, which is approximately $80 million to $90 million).
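The interaction of launch failures and shortened satellite lifetimes is difficult to assess analytically but lends itself to simple simulation. The following sketch is purely illustrative (not NOAA's planning model): it assumes the one-in-five launch failure rate cited by NOAA and NASA officials, independent launch outcomes, and approximate launch dates and lifetimes drawn from the schedule discussed above; every specific figure in the code is an assumption.

```python
import random

# Illustrative Monte Carlo sketch of geostationary coverage risk.
# All figures are assumptions for illustration, not NOAA planning data:
# a 1-in-5 launch failure rate, the revised 3-year lifetimes for
# GOES-8/9, 5-year design lives for later spacecraft, and approximate
# launch dates taken from the schedule described in the text.
P_LAUNCH_FAILURE = 0.20

# (approximate launch year, assumed lifetime in years)
PLANNED_LAUNCHES = [
    (1994.3, 3),  # GOES-8, revised lifetime estimate
    (1995.4, 3),  # GOES-9, revised lifetime estimate
    (1997.3, 5),  # GOES-K
    (2002.0, 5),  # GOES-L ("planned failure" slot)
    (2002.0, 5),  # first continuation spacecraft, assumed ready concurrently
]

def simulate(start=1997.5, end=2003.0, trials=20_000, step=0.25):
    """Estimate how often fewer than two satellites are operating."""
    trials_with_gap, total_gap_years = 0, 0.0
    checks = int((end - start) / step)
    for _ in range(trials):
        # Keep only the launches that succeed in this trial.
        active = [(t0, t0 + life) for t0, life in PLANNED_LAUNCHES
                  if random.random() >= P_LAUNCH_FAILURE]
        # Accumulate time during which fewer than 2 satellites operate.
        gap = sum(step for i in range(checks)
                  if sum(1 for a, b in active
                         if a <= start + i * step < b) < 2)
        trials_with_gap += gap > 0
        total_gap_years += gap
    return trials_with_gap / trials, total_gap_years / trials

p_any_gap, mean_gap = simulate()
print(f"Chance of any period below two-satellite coverage: {p_any_gap:.0%}")
print(f"Average years below two-satellite coverage:        {mean_gap:.2f}")
```

Even a toy model of this kind makes the planning difficulty concrete: the estimated shortfall swings widely with a handful of probabilities that cannot be pinned down in advance.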
Because of the many uncertainties in its planned launch schedule, NOAA has not made a final determination of how many satellites in the continuation series it will procure. The possibility of exercising the warranty option on the current GOES-Next contract, the chance that the existing satellites will last longer than 3 years, and the chance that none of the planned launches will fail are all factors that could delay the need date for the first continuation series spacecraft, either singly or in combination. Conversely, NOAA's current predictions for satellite lifetimes and launch failures could hold true, in which case the first continuation series spacecraft would be needed in 2002.

The number of continuation series satellites needed also depends on when the potential for a coverage gap ends. The potential gap will end whenever the first of a new, follow-on series of satellites is available for deployment. As stated earlier, government and industry aerospace experts agree that it takes approximately 10 years to develop a new spacecraft system. If work were begun in 1998, the first spacecraft in a new GOES series would, therefore, be ready in about 2008 and could be launched as the GOES-Q spacecraft. (See figure 2.1.) Under this scenario, three continuation series satellites would be needed (GOES-N, -O, and -P). If satellites in the current series last longer than NOAA expects, or the expected launch failure does not occur, NOAA's schedule could easily slip by one or two years for the later launches. In that situation, only two continuation series satellites might be needed. NOAA's planned continuation series contract will be for two spacecraft with two separate options for one additional spacecraft each. Thus, as few as two or as many as four spacecraft may be procured through this contract. Given the uncertainties in the launch schedule, NOAA's flexible procurement strategy is reasonable.

We identified several shortcomings in NOAA's spacecraft planning process that, if remedied, could lead to better planning in the future. First, the need for the continuation series exists now only because planning for a follow-on series has been repeatedly deferred since it was first attempted in 1989. Second, NOAA's official policy for replacing satellites that experience partial failures is unclear, increasing the uncertainty about when replacements will be needed. Third, NOAA does not have a consistent policy for providing backup in the event of a launch failure. Timely initiation of follow-on planning, combined with clearer, more consistent policies for replacing partially failed spacecraft and backing up launches, would provide better assurance of meeting future needs with minimal risk.

NOAA officials have recognized for many years that a follow-on program to GOES-Next would have to be started early in order to avoid facing a potential gap in coverage. In 1989, NOAA commissioned a working group to identify requirements for a follow-on system. A list of requirements was developed and turned over to NASA in May 1989 for an assessment of architectural options for a follow-on GOES program. Specifically, NOAA asked that NASA examine options for modifying the GOES-Next system to improve efficiency, reduce costs, and satisfy the new requirements. In response, NASA examined a range of three architectural options and presented its results in October 1990.
NASA's final report indicated that the study had been very limited, both by resources and by the restriction of looking only at modifications to the GOES-Next architecture. NASA recommended that a more thorough study be conducted and that development work begin immediately on the more challenging technical features of its design options. However, no further resources were committed to this line of effort.

Since 1990, NOAA officials involved in the GOES program have made several attempts to initiate a follow-on program but have not received agency approval to move forward. An internal presentation delivered in March 1993 proposed studying a number of alternative approaches to the current GOES architecture, including flying low-cost weather cameras as secondary payloads on non-NOAA geostationary satellites. The presentation stressed the need to begin a formal study phase in fiscal year 1996 in order to have sufficient time to develop and implement a new architecture by 2008. Another presentation, made in April 1995, also urged that engineering studies be conducted early in order to meet tight time frames. Both the 1993 and 1995 presentations assumed that several additional spacecraft in the GOES-Next series would be procured before the first follow-on satellite would be ready in 2008. Program officials told us that, faced with budget constraints, NOAA did not act on any of the recommendations of these studies.

NOAA's official policy for replacing partially failed satellites is unclear. The stated policy has been to launch and activate a replacement satellite if either of the two primary meteorological instruments (the imager or the sounder) fails on either of the two operational spacecraft. However, according to NASA and NOAA officials, it is not certain that a replacement would actually be launched in the event of a sounder failure, since sounder data are less critical than imager data. (Use of sounder data is discussed at greater length in chapter 3.) Also, no official criteria exist for launching a replacement satellite if other partial failures were to occur. For instance, a detector failure in a satellite's imager could reduce the number of channels that it uses to collect data. Such a reduction may or may not be cause to replace the satellite. NOAA officials told us that they prefer to exercise judgment on a case-by-case basis as specific failures occur. However, the lack of explicit criteria for replacement makes it more difficult to forecast how soon new satellites are most likely to be needed.

As discussed above, all spacecraft programs have to address the risk of launch failure. However, NOAA's approach of designating certain launches as "planned failures" and providing backup spacecraft for only those launches is arbitrary, because NOAA does not know in advance which launches will actually fail. In other words, the risk of a launch failure is no greater for the "planned failure" than for any of the other launches, which do not have specifically designated backups. Although NOAA's approach is effective in putting an extra spacecraft into the production stream to compensate for a launch failure, it is ineffective in providing backup for each launch. An alternative approach would be to schedule each launch to be backed up by the next spacecraft in the production stream.
Such an approach would not require procurement of any additional spacecraft or launch vehicles and would enhance NOAA's ability to compensate for launch failures by planning to have spacecraft always available for backup launches.

According to NOAA satellite acquisition officials, the GOES program originally included the concept of maintaining an on-orbit spare in addition to the two operational satellites. The spare would be maintained in a central position and then moved either east or west to replace the first operational satellite that failed. As soon as possible after the on-orbit spare was activated, a new spare would be launched. If both GOES-8 and GOES-9 are still operating in April 1997 when GOES-K is launched, GOES-K will be put into on-orbit storage in the central location for up to 2 years. However, aside from this particular case, NOAA has not decided to move to this method of backup.

Among aerospace experts, on-orbit storage of satellites is controversial. Although the practice can reduce the risk of a break in satellite coverage, other risks are incurred in the process of storing a spacecraft in orbit that could reduce its capabilities once it is activated. For example, a satellite stored in orbit would be susceptible to radiation damage that it would not face if it were stored on the ground. In our opinion, further analysis of this strategy is necessary before it is adopted on an ongoing basis.

Given the importance of maintaining continuous geostationary weather coverage, NOAA's decision to immediately begin procuring two to four continuation series spacecraft through a competitively bid, firm fixed-price contract is reasonable. The planned procurement has been designed to be flexible enough to deal with the uncertainties of determining exactly how many satellites to buy and when they need to be available. However, the continuation series became necessary because a follow-on program had been repeatedly deferred since 1989. Such a program must be initiated soon if the number of continuation series satellites is to be kept to a minimum. Clarifying official policies for replacing partially failed spacecraft and backing up planned launches could improve program planning for the future.

We recommend that the NOAA Administrator ensure that the National Environmental Satellite, Data, and Information Service (NESDIS) (1) clarifies official criteria for activating replacement spacecraft in the event of a failure of an operational GOES satellite or any of its instruments or subsystems and (2) reexamines the agency's strategy for anticipating possible launch failures and considers scheduling backups for all future launches.

The Secretary of Commerce concurred with the recommendations that appear in this chapter but objected to our use of the term "gap filler" to refer to the GOES-N, O, P, and Q satellites in the draft report. Accordingly, we have used the term "continuation series" to refer to these satellites in the final report.

In addition to procuring satellites to prevent a gap in coverage, NOAA needs to begin planning for a follow-on program of GOES satellites if it is to avoid continuing to procure additional continuation series spacecraft in the future. Although several preliminary efforts have been made to study the feasibility of making incremental enhancements to the current meteorological instrument designs, NOAA has no formal program underway to develop a follow-on series.
Based on the President's fiscal year 1998 budget, NOAA does not plan to begin a follow-on GOES program until fiscal year 2003 at the earliest.

Current usage of GOES data by weather forecasters suggests that a reexamination of the GOES satellite architecture is warranted. Although requirements have not been formally updated since the GOES-Next satellite series was developed, usage of GOES data has continued to evolve. The current satellite design hosts two meteorological instruments that are devoted to a range of capabilities, some of which are increasing in importance to weather forecasters and others of which remain largely experimental. According to NOAA, limited experience with GOES-Next data makes it difficult to precisely determine which capabilities will be of most value to users in the future. Before a decision can be made about what kind of follow-on satellite system to build, an updated analysis of user needs is necessary.

Once user needs are determined and requirements established, a full range of potential architectural solutions needs to be identified and evaluated. Several new approaches and technologies for geostationary satellite meteorology have been suggested in recent years by government, academic, and industry experts. Some of these options may offer the potential for reducing system costs and improving performance in the long term. Examples include moving to an architecture of smaller satellites as well as incorporating various spacecraft and instrument technologies that were not available for the previous spacecraft generation. NOAA officials involved in GOES acquisition and development agree that these options need to be considered, given that the follow-on GOES program will be subject to cost constraints. Identifying and evaluating options will require thorough engineering analysis. In addition, past NOAA experience shows that developing new technologies is done most efficiently as a separate line of effort, outside of the operational satellite program. Such an effort would benefit from greater collaboration with NASA, whose expertise and support have, in the past, significantly contributed to the development of NOAA's weather satellite systems. NOAA and NASA are both likely to find it difficult to fund extensive engineering analysis or technology demonstration projects.

Agency officials told us that, lacking a formal follow-on program, NOAA's primary ongoing efforts related to future planning for the GOES system are described in the GOES I-M Product Assurance Plan. Most of the plan addresses efforts to assess and improve the utilization of data from the current GOES satellites in order to maximize the return on the investment made in developing GOES-Next. The plan also discusses goals and potential capabilities for a follow-on system, concentrating on proposed incremental improvements to the current system, including enhancements to both the imager and sounder. The plan also suggests the need for additional instruments. However, none of these possible improvements has yet been funded for production. In accordance with the plan, NOAA funded some research at the Massachusetts Institute of Technology's (MIT) Lincoln Laboratory and at ITT, the current manufacturer of the imager and sounder, to test potential incremental enhancements to both instruments.
One possible enhancement would change the way the GOES sounder processes the radiance signal it receives from the earth, allowing that signal to be divided into a much greater number of discrete spectral bands. The larger number of bands would allow extraction of more information about the temperature, humidity, and pressure of the atmosphere over a given spot on the earth's surface. The device that would perform this spectral separation, called an interferometer, was originally designed and demonstrated on aircraft flights in the mid-1980s. Although NOAA spent several million dollars for engineering studies of the interferometer at MIT's Lincoln Laboratory and at ITT, it recently decided not to continue development of the device.

The second potential enhancement would change the configuration of the imager to speed up its operation. However, a faster imager would produce a larger data stream than the current space-to-ground communications system can handle. Because it would necessitate changes in other systems, this enhancement has also not been approved by NOAA.

The GOES I-M Product Assurance Plan also suggests the possible need for two new instruments, a lightning mapper and a microwave sounder, in the next-generation system. The lightning mapper could improve severe weather monitoring, while the microwave sounder would allow sounder data to be collected through cloud cover, which the current sounder cannot do. No engineering analysis has yet been done on the lightning mapper. NOAA commissioned a preliminary engineering study of the microwave sounder from MIT's Lincoln Laboratory, which is due in March 1997.

NOAA is not yet in a position to make decisions about what kind of follow-on satellite system to build because its future needs are not yet well understood. NOAA has not conducted a formal revision or update of user requirements since 1989. However, recent positive experience with GOES-8 and GOES-9 has led to increasing demands for imager data. Data from the GOES sounders, on the other hand, are in less demand because they have seen little operational use. Changing the follow-on GOES architecture to facilitate greater collection of imager data and deemphasize sounder data might better serve user needs.

Current GOES user requirements were established in 1983 and have not been formally revised since 1989. In 1994, just after the launch of the first of the GOES-Next satellites, an NWS draft document identified potential requirements for a next-generation GOES system. However, this document was never finalized because NOAA officials wanted to wait for the chance to evaluate the utility of the enhanced data from the GOES-Next satellites before specifying requirements for future systems. To this end, an assessment group was formed and a strategy for evaluating GOES-Next data was developed. Although assessment results for the first year have now been collected from users, NWS officials estimate that it will take 2 to 3 more years to complete the study because of delays in the implementation of the NWS' new Advanced Weather Interactive Processing System, which forecasters need to properly display GOES-Next data, and because many forecasters have not yet been trained in how to make best use of the enhanced data.

NOAA has undertaken several other activities that could help in defining requirements for a follow-on series. For example, in developing the GOES I-M Product Assurance Plan, NOAA researchers suggested possible needs for future spacecraft capabilities.
Also, a 2-day conference held in 1994 brought together experts from NOAA's research and operations community to consider future requirements for GOES. However, because NOAA has neither given formal programmatic endorsement to establishing future GOES requirements nor set aside resources to conduct this activity, requirements for the follow-on series remain undefined.

Although the full range of GOES-Next capabilities is still not available to all local weather forecasters, many have access to at least some enhanced GOES-Next products, processed from data collected by the imager. Several significant new uses of GOES imager data have already been developed. For example, imager data have been used in combination with Doppler radar data to enhance winter snowstorm forecasting in the Great Lakes region, allowing local forecast offices to closely monitor the development, orientation, and movement of "lake effect" snow bands, formed when relatively cold air sweeps across the warmer Great Lakes. Forecasters have also discovered that combining data from two of the imager's infrared channels allows them to detect fog at night, a new capability that had not been planned when the imager was designed. This capability has helped forecasters in the West give advance warning to airports of the likelihood of early morning fog that could affect the startup of flight operations.

According to NOAA and NASA officials, many forecasters would also like to see increased availability of "rapid scan" images of severe weather activity, such as thunderstorms and hurricanes. Rapid scan images are collected at short time intervals—every few minutes—so that a rapidly evolving storm can be carefully monitored and its direction and severity predicted. Since accurate prediction of severe weather is a critical activity for the NWS, there is high demand for rapid scan data when severe weather develops. However, GOES imagers cannot simultaneously produce rapidly updated imagery of storm activity within the continental United States and also collect a full set of data from the rest of the western hemisphere, which is important for routine weather forecasting. The conflicting demands for close-up (or "mesoscale") views of severe storms and broad (or "synoptic") views of hemispheric weather patterns are difficult to resolve. As a result, NOAA researchers see a coming need for significantly more data than the current GOES-Next imager can produce.

In contrast, usage of GOES-Next sounder data has not progressed as rapidly and remains largely experimental. Although sounder data from polar satellites are routinely used in preparing near-term weather forecasts, geostationary sounder data were not used on a daily basis in the numerical prediction models that provide the basic guidance to NWS forecasters until very recently. The sounder on GOES-4 through GOES-7 was very slow and could not be used at the same time as the imager. As a result, sounder data were used only for special experiments. With the advent of GOES-8 in 1994, continuous geostationary sounder data have been available for the first time. However, as stated above, these data are available mainly to researchers. Most weather forecasters have had no direct exposure to GOES sounder data. NOAA researchers are investigating a number of promising uses for GOES sounder data.
For example, studies performed at the University of Wisconsin have shown that precipitation forecasts and hurricane landfall predictions can be improved by using temperature and moisture data from the sounder in conjunction with the imager data that are traditionally used for such predictions. Although key NOAA officials believe sounder data will grow in importance in the future, the degree of added value that the sounder could contribute to NWS' prediction models has been difficult to determine. Some researchers believe the data could significantly improve forecasts, while others believe the improvement would be only marginal. Meteorologists at the National Centers for Environmental Prediction, which run the prediction models that guide NWS forecasters, had been hesitant to put the sounder data into operational use until they completed their own evaluations. However, they now plan to begin incorporating GOES sounder data into their standard prediction models by the middle of 1997.

Given that experience with these data has been limited, it is difficult to determine how valuable sounder data may be in the future. In contrast, the well-defined utility of imager data for critical forecasting activities and the need for additional imager data suggest that the mix of instruments to be flown on future GOES satellites should be reexamined. An architecture that would facilitate greater collection of imager data and deemphasize sounder data might better serve user needs. A formal update of user requirements is needed before the potential advantages of alternative architectures can be fully assessed.

According to GOES program officials, current GOES satellites are more expensive to launch and operate than the earlier generation of satellites. When NOAA developed the current generation, it moved from a relatively small and easy-to-operate spacecraft to one that is larger and much more complex. The newer satellites require a more expensive launch vehicle because they are larger and heavier than the first-generation satellites. Furthermore, more extensive ground support is required to keep the spacecraft operating. These factors contribute to increased costs.

Aerospace experts in industry and academia have identified a variety of options for attempting to reduce the costs of weather satellite systems such as GOES. For example, a number of studies have been done of alternative architectures based on smaller satellites carrying fewer instruments, which would have the potential to reduce launch and production costs. In the case of GOES, an architecture based on smaller satellites carrying one critical meteorological instrument instead of two could be considered. According to a recent study supported by NASA and the Department of Defense, cost reduction occurs predominantly, although not entirely, in small spacecraft, which tend to be inherently simpler and cost less than large spacecraft. Further, a smaller spacecraft would not need as large a launch vehicle as the current GOES system uses. Currently, GOES satellites are launched on Atlas IIA vehicles, which cost $80 million to $90 million each. Smaller satellites could be designed to use Delta vehicles, for example, which currently cost $45 million to $50 million apiece, or perhaps an even smaller vehicle. While the actual cost of launching a smaller GOES satellite 10 or more years from now cannot be determined, it is likely to remain lower than the launch cost for a large satellite.
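The launch cost trade can be illustrated with back-of-the-envelope arithmetic using the vehicle prices cited above. The spacecraft counts are hypothetical: the sketch simply assumes that four single-instrument satellites would replace two dual-instrument ones, so the output should be read as an illustration of the trade-off, not an estimate.

```python
# Back-of-the-envelope launch cost comparison using the vehicle prices
# cited in the text (Atlas IIA: $80M-$90M; Delta: $45M-$50M). The
# assumption that four single-instrument small satellites replace two
# dual-instrument large ones is purely illustrative.
atlas_cost = (80, 90)   # $ millions per Atlas IIA launch vehicle
delta_cost = (45, 50)   # $ millions per Delta launch vehicle

large_sat_launches = 2  # one imager + one sounder per spacecraft
small_sat_launches = 4  # one instrument per spacecraft

large_total = tuple(c * large_sat_launches for c in atlas_cost)
small_total = tuple(c * small_sat_launches for c in delta_cost)

print(f"2 large satellites on Atlas IIA: ${large_total[0]}M-${large_total[1]}M")
print(f"4 small satellites on Delta:     ${small_total[0]}M-${small_total[1]}M")
# Output: $160M-$180M vs. $180M-$200M. Per-vehicle savings can be
# offset by the larger number of launches, which is why launch workload
# must be weighed against vehicle price in any architecture decision.
```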
A recent study by the Applied Physics Laboratory of Johns Hopkins University shows that a small spacecraft architecture can increase the flexibility of the system to respond to failures and, in doing so, potentially reduce costs relative to an architecture based on larger satellites. For example, in the GOES system, failure of an instrument or a critical subsystem on one of the current spacecraft would likely necessitate the launch of a replacement, even though the original spacecraft might still retain some capabilities. If a smaller satellite architecture were used, in which each spacecraft would have only one primary meteorological instrument, the failure of an instrument would not affect the operations of the instruments flying on other spacecraft. Similarly, the failure of a critical subsystem, such as the communications or power subsystem, would affect only one instrument instead of two. Thus, the overall robustness of the system would be enhanced.

Based on discussions with NOAA, NASA, and academic experts, it appears that a smaller satellite architecture could also provide greater flexibility in the deployment of meteorological instruments. Currently, imagers and sounders are always deployed in pairs (one set per satellite) so that an operational constellation of a pair of instruments in both the east and west locations can be maintained. Flying the instruments on separate spacecraft would allow greater flexibility to position individual instruments in orbital locations where they are most needed and to change the locations of specific instruments in the event of a spacecraft failure or other emergency. It could also allow deployment of differing numbers of imagers and sounders to meet changing user needs.

Making a decision about this or any other alternative architecture is not a simple task. Clearly, there are drawbacks to the small satellite architecture as well as advantages. Using such an architecture could require significantly more spacecraft launches, for example, even though the launch vehicles used would be smaller. The increased launch workload would have to be manageable by available launch facilities and ground crews. Ground operations, though possibly simplified for each spacecraft, would have to handle a larger total number of spacecraft. Also, the secondary instruments currently flown on GOES satellites would have to be accommodated, either within the new architecture or on other satellite systems. In reaching a decision on an architecture for a follow-on system, NOAA will need to carefully weigh these factors against the potential benefits of moving to small satellites.

Technological advances made in recent years strongly suggest that more efficient and effective instruments and spacecraft could be designed today to replace the current GOES series, which was designed in the early 1980s and uses 1970s technology in its meteorological instruments. While the planned continuation series satellites will incorporate some improvements to the design of the spacecraft bus to improve pointing and power management, further improvements could be made with a new spacecraft design. In a recent evaluation of the state of spacecraft technology, the National Research Council identified a number of new technologies that could contribute to smaller spacecraft that are cheaper to build and operate.
For example, greater operational autonomy could be built into the spacecraft's control systems, allowing them to carry out orbit determination and station-keeping with less intensive involvement of ground controllers. High-density computers and memory devices combined with advanced software techniques could enable extensive on-board data processing and screening, reducing the amount of data to be transmitted to earth. Such data processing advances could be of critical importance in compensating for the increased data volumes that would likely be produced by more advanced meteorological instruments.

According to NASA and aerospace industry experts, significant advances have been made in sensor technology which, if incorporated, could result in faster meteorological instruments that produce significantly higher resolution data. Specifically, technological advances now allow for placing a much larger array of more sensitive optoelectronic detectors inside the instruments, thus producing higher resolution data more quickly. In 1996, NASA's Goddard Space Flight Center proposed developing and flying an experimental satellite, to be called the Geostationary Advanced Technology Environmental System (GATES), that would demonstrate this technology, known as focal plane arrays. Other proposals for advanced geostationary weather imagers based on focal plane array technology have also been made in recent years. For example, the MITRE Corporation prepared a report in 1993 that assessed the development of an advanced focal plane array imager that could fly as a secondary payload on a commercial communications satellite. MITRE concluded in its study that such an imager would be feasible and would offer improved resolution and radiometric performance. MIT's Lincoln Laboratory also completed a conceptual design study of an advanced imager. The study found that it would be feasible to exploit advanced technologies, such as focal plane arrays, to resolve the conflict between forecasters' needs for simultaneous close-up and broad views.

A focused effort would be needed to develop focal plane array technology for possible use in the GOES system. According to an analysis by the Aerospace Corporation, although focal plane arrays are now considered the state of the art in infrared sensor technology, they are generally designed for highly specialized purposes and can be expensive to produce. A necessary enabling technology for focal plane array sensors is active cooling, which has advanced to the point that it is being considered for use in operational systems, according to aerospace experts. However, further development and testing are still needed to demonstrate that active coolers can remain reliable over long lifetimes.

As another example, work underway by the University Corporation for Atmospheric Research shows that small, low-earth-orbiting satellites equipped with special receivers can use Global Positioning System (GPS) signals to measure temperature and humidity in the atmosphere. Preliminary results indicate that this system, called GPS/MET (Meteorology), may provide superior vertical resolution in the lower atmosphere compared with the GOES sounder. Further development and expansion of this system could reduce the need for potentially expensive improvements to the GOES sounder to improve its accuracy.

NOAA officials involved in GOES acquisition and development agree that new approaches and technologies need to be considered, given that the follow-on GOES program will be subject to cost constraints.
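Before turning to NOAA's position on these options, the performance leverage of focal plane arrays discussed above is worth making concrete. A scanning imager with a single detector per channel must dwell on each scene element in turn; an array of N detectors covers N elements per dwell. The numbers below are invented for illustration and do not describe the actual GOES imager.

```python
# Rough, hypothetical sizing of focal-plane-array speedup. None of
# these numbers describe the actual GOES imager; they only show why a
# large detector array images a scene faster at equal dwell time.
scene_elements = 10_000_000   # picture elements in a full-disk scan (assumed)
dwell_time_s = 50e-6          # integration time per element (assumed)

for detectors in (1, 8, 1024):            # single detector vs. arrays
    scan_time_s = scene_elements * dwell_time_s / detectors
    print(f"{detectors:5d} detectors -> {scan_time_s:8.1f} s per scan")
# A 1,024-element array turns an ~8-minute scan into under a second of
# detector time (in practice, readout, calibration, and scan-mirror
# motion dominate), which suggests how an advanced imager might provide
# both rapid-scan and full-hemisphere coverage.
```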
In public presentations, NOAA officials have stressed the importance of looking at new ways of doing the GOES mission, including flying smaller GOES satellites or constellations of small satellites carrying different instruments. However, NOAA has not yet conducted any in-depth analysis of alternative approaches. If revised user requirements suggest that a new GOES architecture may be needed, thorough engineering analysis of a range of design options will then be necessary.

Past experience in developing NASA spacecraft, such as the Hubble Space Telescope and the Gamma Ray Observatory, shows a clear correlation between the amount of resources focused on the early phases of a project, which include concept definition and engineering trade studies, and the ability of that project to meet its cost and schedule commitments. NASA has a standard project model that it generally uses for planning spacecraft development. The NASA model calls for a six-phase life cycle, the first three phases of which are dedicated to ensuring that the proposed project is well defined, feasible, and likely to meet requirements. The first phase, called the Pre-Phase A or Advanced Studies phase, is intended to produce a broad spectrum of ideas and alternatives from which new projects can be selected. Possible system architectures are defined in this phase, and initial cost, schedule, and risk estimates are developed. The second phase, Phase A or Preliminary Analysis, determines the feasibility and desirability of a suggested new system by demonstrating that a credible, feasible design exists, after alternative design concepts have been considered through feasibility and risk studies. The third phase, Phase B or Definition, aims to define the project in enough detail to establish an initial baseline capable of meeting mission needs. During this phase, system functional and performance requirements, along with architectures and designs, become firm as engineers conduct trade studies of design options for the various systems and subsystems that make up the spacecraft. These trade studies are conducted iteratively in an effort to seek out cost-effective designs.

According to NASA, it is generally accepted that cost overruns in the later development phases of a spacecraft project are caused by inadequate attention to the early phases of mission design. This principle was borne out in the GOES-Next development experience, which suffered a cost increase of over 200 percent and serious schedule slippages. Because development risks were thought to be well understood and manageable, NOAA did not authorize, and NASA did not require, that engineering analysis be done prior to GOES-Next development work. However, as discussed in chapter 1, a number of technical problems arose that were expensive and time-consuming to fix. In addition, some of NOAA's performance requirements for the spacecraft, such as the pointing requirement mentioned in chapter 2, had to be relaxed because the planned spacecraft could not meet them. If a more thorough engineering analysis of the proposed design had been conducted early on, these problems likely could have been identified and resolved more cheaply and expeditiously.

NOAA faces several significant obstacles in developing a new architecture for its geostationary satellite system. Most significantly, as numerous industry and government aerospace experts told us, it is difficult and expensive to develop new satellite capabilities within the constraints of an operational program such as NOAA's.
Research and development are more effectively conducted separately, with proven results incorporated into the operational program afterwards. Originally, all of NOAA's satellites and meteorological instruments were developed experimentally by NASA and subsequently adopted for operational use by NOAA. However, NASA canceled its formal weather satellite research program in 1981 and is now reluctant to fund technology demonstration projects that will primarily benefit NOAA.

NASA originally developed prototypes of both the GOES system and NOAA's polar-orbiting weather satellite system, using its own funding. The first experimental satellite dedicated to meteorological observations, called the Television and Infrared Observational Satellite 1 (TIROS 1), was launched by NASA in 1960. Nine more experimental TIROS satellites were launched between 1960 and 1965. These experimental satellites gave NASA the opportunity to test a number of significant technological features that have since become standard on meteorological satellites, such as a transmitter that allows weather stations around the world to receive data from the satellite when it is overhead. These early satellites also gave the U.S. weather forecasting community the opportunity to experiment with the data transmitted back from the satellites to determine their best uses. The first geostationary meteorological observations were made by NASA's Applications Technology Satellites (ATS 1 through 3), launched in 1966 and 1967. As with the early TIROS polar satellites, the ATS satellites gave NASA and NOAA the opportunity to gain experience in operating meteorological satellites in geostationary orbits and analyzing their observations on an experimental basis.

In 1973, NASA and NOAA formalized their successful ongoing relationship by establishing the Operational Satellite Improvement Program (OSIP) at NASA. Through the OSIP program, NASA continued to fund the development of the Nimbus series of experimental polar-orbiting weather satellites. Derivatives of many of the meteorological instruments developed for the Nimbus program are now being operated on NOAA's polar-orbiting satellites. For example, the High Resolution Infrared Radiometer, which flew on Nimbus 1 in 1964, was a progenitor of the Advanced Very High Resolution Radiometer (AVHRR) that currently flies on NOAA polar-orbiting satellites. The AVHRR, in turn, was the basis for the design of the current GOES imager.

Despite the success of OSIP, NASA canceled the program in 1981 because of budgetary pressures. NASA's elimination of OSIP left NOAA without the engineering support required to design, develop, and test new spacecraft and instrument technologies before incorporating them into the agency's operational satellite systems. According to NASA and NOAA officials, many of the technical problems that plagued GOES-Next development could have been addressed and resolved more efficiently and less expensively within the context of a smaller, experimental precursor program, such as OSIP.

Although OSIP no longer exists as an ongoing program to improve weather satellites, NASA has several avenues within its existing programmatic structure for undertaking research and demonstration projects related to advanced weather satellites. However, no such projects are currently being funded. As mentioned above, NASA's Goddard Space Flight Center proposed developing and flying its experimental GATES satellite in 1996.
Although it would lack a sounder and other secondary GOES instruments, GATES would feature a much faster and more efficient imager that would take advantage of advanced focal plane array technology to include more channels and offer higher resolution than the current GOES imager. If successful, GATES could demonstrate the feasibility of addressing user needs for more imager data from a small satellite platform. However, only preliminary design work for the GATES system has been completed to date.

Further opportunities for collaboration may exist within NASA's New Millennium or Earth System Science Pathfinder programs. The New Millennium Program is a NASA effort to develop and validate revolutionary technologies that will enable the construction of highly capable and agile spacecraft in the 21st century. The program has already committed to the development of an advanced land imager, which will be its first earth science mission. A geostationary weather monitoring mission is also under consideration, along with a number of other possibilities, but no commitment has yet been made. While the New Millennium Program is focused on space technology, the Earth System Science Pathfinder program is a similar effort aimed at furthering earth science. An advanced geostationary weather monitoring mission could also fit within its mission.

NOAA officials also recognize that development of a new generation of instruments and spacecraft would benefit from greater collaboration with NASA. NOAA recently agreed to modest participation, at a rate of $1 million per year, in NASA's GATES project, which in February 1997 became part of a new Advanced Geostationary Studies program. However, NOAA has generally been reluctant to provide funding to NASA to support new research efforts, believing that they should be NASA's responsibility. NOAA did not previously provide funding for NASA's OSIP program.

NOAA faces a difficult decision in determining how and when to proceed with development of a next-generation GOES system. Because of budget constraints, NOAA has decided not to begin planning for a follow-on system until after fiscal year 2002. While delaying the start of a follow-on GOES program saves funds in the near term, it also incurs a significant measure of risk: NOAA may, as a result, have to procure more of the continuation series type of satellite farther into the future, delaying the opportunity to adopt an improved design. Indeed, the continuation series is now necessary because the start of a follow-on program has been delayed repeatedly since 1989.

Deferring development of a follow-on GOES satellite system is risky because it forgoes consideration of two kinds of potential benefits. First, a follow-on system could provide the opportunity to design a system architecture that is more flexible, less costly, and better able to meet users' needs. Second, a follow-on system could incorporate advanced technologies that could lead to improvements in weather forecasts in the future. We believe that these potential benefits are significant and that a decision on when and how to develop the follow-on generation should be carefully considered. Given that options may exist for NOAA to develop a significantly improved follow-on GOES system, the Congress may wish to closely examine the costs and benefits of different approaches for the timing, funding, and scope of the follow-on program.
Further, the Congress may also wish to examine NASA's potential role in working with NOAA to support the needs of geostationary weather satellites within NASA's advanced spacecraft technology programs.

We recommend that the Administrator of the National Oceanic and Atmospheric Administration prepare a formal analysis of the costs and benefits of several alternatives for the timing, funding, and scope of the follow-on program, including the possibility of starting the program as early as fiscal year 1998 and the potential need to fund some types of technology development apart from the operational satellite program. This analysis should be provided to the Congress for its use in considering options for the future of the GOES program.

The Secretary of Commerce did not concur with our recommendations to reconsider NOAA's decision to defer the follow-on program and to prepare a formal analysis of options for such a program. The draft that we provided to Commerce for comment was based on the fiscal year 1997 budget, which showed that a follow-on program would begin in 2000. However, the fiscal year 1998 budget request, released since then, shows no follow-on program beginning through 2002. In discussions with us, NOAA officials confirmed that a follow-on program is not being planned until 2003 at the earliest.

Commerce did provide information on four small research efforts that it has recently funded or that are currently underway to examine advanced technology and alternative architectures for potential adoption in the future. Two of these were initiated in February 1997, as we were completing our review. They include the Advanced Geostationary Studies program being supported by both NOAA and NASA and the contract with the Jet Propulsion Laboratory to develop design concepts for an advanced imager. The other two items mentioned by Commerce in its comments are an Aerospace Corporation study of possible future architectures, begun in late 1996, and support from MIT's Lincoln Laboratory for several items, including the Aerospace architecture study, the advanced imager work, and a geostationary microwave sounder study.

We believe that these are valuable activities and have included references to them where appropriate in the report. However, they do not obviate our overall concerns about planning for the future of the GOES program. Activities such as these are useful but do not represent a commitment to exploring all options and developing a new generation of satellites. The fiscal year 1998 NOAA budget request does not allow either for a follow-on program to begin formally until 2003 at the earliest or for enhanced instruments to be flown on the continuation series. Therefore, NOAA's ability to take action based on the results of these studies is questionable. Other studies funded by NOAA, such as the work on advanced sounders and imagers mentioned in our report, have not led to any operational implementation.

We believe that continued deferral of the follow-on program is risky because it forgoes the opportunity to identify and develop a potentially more effective and economical architecture. Furthermore, the longer that NOAA continues without actively considering other options for a future system, the more it risks having to procure additional continuation series satellites, because the availability date for a fully developed new satellite system will slip farther into the future.
Pursuant to a congressional request, GAO reviewed the National Oceanic and Atmospheric Administration's (NOAA) management of the Geostationary Operational Environmental Satellite (GOES) Program, focusing on: (1) NOAA's strategy for procuring satellites in the GOES continuation series; (2) what steps NOAA should be taking now to prepare for the next generation series of satellites; and (3) whether the potential exists for improving the system and reducing costs in the long term. GAO noted that: (1) based on the best available analysis, the potential for a gap in geostationary satellite coverage will be significant in the early years of the next century if procurement of new satellites does not begin soon; (2) to prevent this problem, NOAA plans to competitively procure two to four continuation series spacecraft that will carry the same meteorological instruments as the current spacecraft and incorporate modest technical improvements; (3) the satellites are planned for launch beginning in 2002; (4) given the importance of maintaining continuous geostationary weather coverage, NOAA's plans are reasonable; (5) however, there are inherent difficulties in determining exactly when and how many of the continuation series spacecraft will be needed; (6) despite these difficulties, GAO identified several specific shortcomings in NOAA's spacecraft planning process that, if remedied, could improve planning in the future; (7) based on the President's fiscal year (FY) 1998 budget, NOAA does not plan to begin a follow-on GOES program until FY 2003 at the earliest; (8) given that the opportunity now exists to consider alternatives for a follow-on system, current usage of GOES data by weather forecasters suggests that a reexamination of the GOES satellite architecture is warranted; (9) before a decision can be made about what kind of follow-on satellite system to build, an updated analysis of user needs must be completed; (10) several new approaches and technologies for geostationary satellite meteorology have been suggested in recent years by government, academic, and industry experts, however, identifying and evaluating the full range of options will require thorough engineering analysis; (11) in addition, past NOAA experience shows that developing new technologies is done most efficiently as a separate line of effort, outside of the operational satellite program; (12) such an effort would benefit from greater collaboration with the National Aeronautics and Space Administration, whose expertise and support have, in the past, significantly contributed to the development of NOAA's weather satellite systems; (13) the longer that NOAA continues without actively considering other options for a future system, the more it risks having to procure additional continuation series satellites, because the availability date for a fully developed new satellite system will slip farther into the future; and (14) the potential advantages of advanced technologies and small satellite constellations as well as questions about changing user requirements suggest that alternatives to the present architecture should be seriously considered.
SSA is currently undertaking a multiyear, multibillion-dollar systems modernization effort that is intended to replace aging equipment, support current and future redesigned work processes, and improve productivity. The cornerstone of this modernization effort is the agency's transition from its current centralized, mainframe-based computer processing environment to a highly distributed client/server processing environment. The IWS/LAN infrastructure—consisting of networks of intelligent workstations connected to each other and to SSA's mainframe computers—is intended to provide SSA with the initial computing framework for using client/server technology to achieve cost savings and improve customer service by distributing selected processes and information closer to where they are needed. Through fiscal year 1997, SSA had reported spending approximately $565 million on acquiring workstations, local area networks, and other services to support the IWS/LAN infrastructure.

Software development is a critical component of the modernization initiative. SSA's Office of Systems, with contractors' assistance, is designing and developing a new generation of software that is scheduled to operate on the IWS/LAN to support redesigned work processes in a client/server environment. It has selected the disability claims process as the first major redesign effort and is currently developing the software—referred to as the Reengineered Disability System (RDS)—that is intended to automate this redesigned process. RDS is scheduled for national implementation on the IWS/LAN from July 1999 through May 2001. For SSA's software development efforts to succeed, however, it is important that the agency have disciplined and consistent software development practices that produce high-quality software within budget and on schedule.

In September 1996, we reported that SSA had experienced problems in developing RDS, which contributed to a delay of more than 2 years in its scheduled implementation. These problems included (1) using programmers with insufficient experience, (2) using software development tools that did not perform effectively, and (3) establishing initial software development schedules that were too optimistic. During that same month, a contractor's preliminary risk assessment of SSA's client/server transition strategy identified various risks associated with the existing software development processes, including ineffective requirements definition and inadequate configuration management.

In addition, SSA is currently facing the critical challenge of ensuring that its information systems are Year 2000 compliant. By the end of this century, SSA must review all of its computer software and make the changes needed to ensure that its systems can correctly process information relating to dates. These changes affect not only its new network but also computer programs operating on both its mainframe and personal computers. We recently reported that while SSA has made significant progress in its Year 2000 efforts, it faces the risk that not all of its mission-critical systems will be corrected by the turn of the century. At particular risk are the systems used by state disability determination services (DDSs) to help SSA process disability claims.

Making software process improvements to address problems such as those SSA faces is considered a challenging undertaking for any organization.
To guide agencies in assessing the strengths and weaknesses of their software development processes, SEI developed the Capability Maturity Model (CMM) in the late 1980s. CMM is organized into five levels that characterize an organization's software process maturity. As shown in table 1, these levels range from initial (level 1), characterized by ad hoc and chaotic processes, to optimizing (level 5), characterized by continuous process improvement based upon analysis and quantitative data.

Further, to assist agencies in implementing effective software process improvement programs, SEI developed the IDEAL(SM) model, which defines five phases of process improvement activity. These phases are:

- Initiating. Management determines that there is a business reason for improving the organization's processes, sets general process improvement goals, and sponsors a process improvement program.

- Diagnosing. Using CMM, the current practices of the organization are appraised and characterized. Results of the assessment are documented, and recommendations are made regarding areas in which to focus improvement efforts.

- Establishing. Based on the results of the diagnosing phase and the general goals that were defined in the initiating phase, the organization develops a strategy for improvement, prioritizes activities, and formulates measurable goals. Process action teams are established to develop action plans for improvement.

- Acting. Action plans are implemented through pilots. The results of pilots are evaluated, and action plans are modified as appropriate. When proven effective, action plans are implemented throughout the organization.

- Learning. After the new processes have been in place for some time, their effectiveness is evaluated, communicated throughout the organization, and, as appropriate, used to formulate new action plans to ensure that goals are achieved.

To determine the status of SSA's efforts to improve its software development processes, we analyzed key documents, including SSA's Software Process Improvement Program Management Plan, dated April 1997; Client/Server Transition Strategy: Preliminary Risk Assessment, dated September 1996; and relevant systems and strategic planning documents, such as the Information Systems Plan. In addition, to determine the status of specific projects being undertaken by contractors in support of the improvement initiatives, we reviewed the statements of work for contractor services and final deliverables, such as baseline assessments and software process improvement reports. We did not independently verify the accuracy of information reported in the contractors' assessments of SSA's software development processes.

We analyzed SEI's IDEAL(SM): A User's Guide for Software Process Improvement, dated February 1996, which SSA is using to implement and manage its process improvement program, to determine whether SSA's current and planned software development practices are consistent with this guidance. We reviewed additional SEI reports, including Moving On Up: Data and Experience Doing CMM-Based Process Improvement, dated August 1995, to identify successful practices of organizations applying CMM-based process improvements. In addition, we reviewed documents discussing the implementation schedule, technical strategies, and risks associated with SSA's development of the RDS software application to obtain information on the agency's experiences in software development. However, we did not specifically evaluate the progress of SSA's ongoing effort to develop RDS.
To further support our assessment of the actions that SSA is taking to improve its software development capability, we interviewed the Deputy Commissioner for Systems and other systems officials directly involved in implementing the improvement initiative, the General Services Administration official responsible for administering the support contract for SSA's client/server software development assessment, and representatives of the contractors involved in this initiative. We performed our work from March 1997 through November 1997 in accordance with generally accepted government auditing standards. SSA provided written comments on a draft of this report. These comments are discussed in the "Agency Comments and Our Evaluation" section and are reprinted in appendix I.

Recognizing the need to improve its software development capability, SSA has launched a formal software process improvement program and initiated pilot projects to test improved software development processes. In doing so, it is seeking to achieve repeatable (level 2) software capability maturity. SSA acquired the assistance of SEI to help formulate and implement the improvement program. It adopted SEI's CMM as the framework for assessing the current state of its software development capability and establishing improvement priorities and is following SEI's IDEAL(SM) model as the methodology for implementing and managing its software process improvement actions. Further, SSA has developed a software process improvement schedule identifying the specific phases and tasks that it plans to undertake to complete the implementation of its improved software development processes by June 2000.

Consistent with SEI's IDEAL(SM) model, SSA has performed a number of the steps recommended for the initiating, diagnosing, and establishing phases of its software process improvement program and is beginning to implement steps under the acting phase. It has put in place the initial management infrastructure to support and facilitate its software process improvement initiatives by establishing a management steering committee to raise organizational awareness of the improvement program and a software engineering process group to oversee the development and implementation of the software process activities. It has also established various work groups to carry out the specific process improvement initiatives.

SSA also has undertaken two assessments of the maturity of its existing software development processes, focusing on identifying (1) effective software development policies and procedures already being used within the agency and (2) key software development process areas needing improvement. SEI's CMM specifies key process areas and criteria that must be addressed to achieve a particular software development maturity level. The key process areas for level 2 (repeatable) maturity—the level that SSA is currently seeking to achieve—are (1) requirements management, (2) software project planning, (3) software project tracking and oversight, (4) software subcontract management, (5) software quality assurance, and (6) software configuration management. With SEI's assistance, SSA conducted a self-assessment to determine the strengths and weaknesses of its current processes for developing and maintaining software, which are primarily mainframe-oriented. This self-assessment identified 22 weaknesses in the level 2 key process areas.
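Because the level 2 key process areas form a fixed checklist, the results of such an appraisal can be summarized mechanically. The sketch below is a minimal illustration only: the maturity-level names follow SEI's CMM and the six key process areas are those listed above, but the sample appraisal results and gap-tallying logic are invented and do not represent SEI's appraisal method or SSA's actual findings.

```python
# Minimal sketch: the CMM maturity ladder and a tally of level 2 key process
# area (KPA) gaps. Level names follow SEI's CMM; the appraisal results below
# are hypothetical, not SSA's actual self-assessment data.

CMM_LEVELS = {
    1: "initial (ad hoc, chaotic)",
    2: "repeatable",
    3: "defined",
    4: "managed",
    5: "optimizing (continuous, quantitative improvement)",
}

LEVEL2_KPAS = [
    "requirements management",
    "software project planning",
    "software project tracking and oversight",
    "software subcontract management",
    "software quality assurance",
    "software configuration management",
]

# Hypothetical appraisal: KPA -> number of weaknesses found against it.
weaknesses_found = {
    "software project planning": 5,   # e.g., risks not identified or documented
    "software project tracking and oversight": 4,
    "software configuration management": 3,
}

def level2_gaps(findings):
    """Return (total weaknesses, KPAs with at least one open weakness)."""
    total = sum(findings.get(kpa, 0) for kpa in LEVEL2_KPAS)
    open_kpas = [kpa for kpa in LEVEL2_KPAS if findings.get(kpa, 0) > 0]
    return total, open_kpas

total, open_kpas = level2_gaps(weaknesses_found)
print(f"Target level: 2 ({CMM_LEVELS[2]})")
print(f"Weaknesses recorded: {total}; KPAs still open: {open_kpas}")
```

The weaknesses SSA's self-assessment actually identified are illustrated next.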
For example, the assessment found within the area of software project planning that risks had not been identified, assessed, or documented for some projects, and within the area of software project tracking and oversight, that the results and performance of some projects had not been tracked against key elements—such as costs, schedules, and risks—of SSA's software project plans. In addition, a support contractor hired to identify client/server best practices and assist SSA in transitioning to a client/server environment performed an independent assessment of the agency's software development processes. This assessment focused specifically on identifying the strengths and weaknesses in SSA's ability to develop client/server software. It, too, identified weaknesses in SSA's software development practices. For example, the assessment identified within the area of requirements management a need to improve practices for defining system requirements and specifications, and in the area of project planning, a need to improve practices for scheduling and estimating the cost of software development efforts. Based on the findings identified in both assessments, SSA developed a software process improvement program implementation plan. This plan will be used during the acting phase to, among other things, direct the software development activities of three pilot projects that the Office of Systems intends to undertake to help institutionalize the software process improvements. According to the implementation plan, project teams for the three pilots are expected to test and evaluate improved software development processes to address the identified weaknesses. SSA initiated its pilot activities in September 1997 and, at the conclusion of our review, had begun developing the policies and procedures that it will use to test each of the key process areas. SSA expects to complete all of the pilots by March 1999, after which it will finalize its strategy for implementing the improved software development processes.

Although SSA has made important progress in its efforts to improve its software development processes, its improvement program does not yet include specific, measurable goals and baseline data that are essential to helping it achieve a repeatable (level 2) software development capability. Without measurable goals and baseline data, SSA does not yet have critical information needed to guide its improvement efforts and to provide evidence that the efforts are resulting in more consistent, cost-effective, and timely production of higher-quality products.

According to SEI's IDEAL(SM): A User's Guide for Software Process Improvement, clearly defined and measurable goals are necessary to provide guidance and to assist in developing tactics for improving the software development process. They also allow for objective measurement of the improvement results. SEI prescribes that general goals of the improvement program be defined during the initiating phase based on the business needs of the organization. These general goals are used, in conjunction with baseline data on the agency's existing processes (such as software size estimates, defects identified, and calendar time for project completion), to develop specific short- and long-term measurable goals during the establishing phase. For example, one general goal could be to make software projects more predictable in terms of cost and schedule.
If the measurement baseline established that 80 percent of the organization's current projects exceed their original cost and schedule estimates by more than 25 percent, then the specific, measurable goal could be to improve that measure such that 80 percent of all projects are completed within 10 percent of their original cost and schedule estimates within 2 years.

At the conclusion of our review, SSA had established general goals for its improvement program that included (1) achieving a repeatable (level 2) software capability maturity and (2) creating a software development environment that encourages continuous improvements in quality, productivity, and timeliness. However, it had not yet established specific, measurable improvement goals for its software development processes, nor had it defined the actual baseline data that it will use to monitor progress in achieving its goals. SSA's Deputy Commissioner for Systems and the director of the software process improvement program told us that they recognized the importance of and need for establishing specific, measurable goals for the agency's improvement program. However, they stated that the agency has not yet been able to define such goals for the improvement initiatives because it has not traditionally maintained baseline data on its software development projects that would be required to make such determinations. These officials told us that they intend to develop the necessary measures based on data compiled during the pilots being conducted to test improved software development processes.

While SSA officials intend to develop the necessary measures during the pilots, at the time of our review they did not have a detailed strategy explaining how the pilots will be used to generate these measures. SSA contracted with the Gartner Group to develop a strategic measurement plan, which recommends a general framework and steps for establishing consistent, repeatable processes to collect and track measurements; SSA has also outlined in its implementation plan the general strategy it intends to use for the three pilots. However, neither of these documents provided detailed information on how and in what time frames baseline data and specific, measurable goals will be derived from the pilots. Without an explicit strategy and time frames for generating baseline data and measurable goals, SSA cannot ensure that it will have essential information to monitor the progress of its improvement efforts.

Recognizing the need to reassess its software development processes in light of transitioning to a client/server processing environment, SSA is taking important steps to improve its software development capability. If successfully completed, these actions should help position the agency to strengthen its processes for developing quality software products. However, until SSA develops baseline data and establishes specific, measurable goals for its improvement initiatives as part of its pilot projects, it will not have the information necessary to monitor its progress toward achieving its intended software process improvements.
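The baseline-to-goal arithmetic in the example above is straightforward to sketch. The illustration below uses invented project records; it shows only the kind of computation SEI's guidance envisions and does not reflect SSA's actual data, tools, or measures.

```python
# Hypothetical illustration: deriving a baseline measure and tracking a
# measurable goal for cost/schedule predictability. All project data are
# invented; an agency's actual baselines would come from its own pilots.

projects = [
    # (name, estimated_cost, actual_cost, estimated_months, actual_months)
    ("Project A", 1.0, 1.4, 12, 16),
    ("Project B", 2.5, 2.6, 18, 19),
    ("Project C", 0.8, 1.1, 9, 14),
    ("Project D", 3.0, 3.1, 24, 25),
    ("Project E", 1.5, 2.2, 10, 15),
]

def overrun(estimate, actual):
    """Fractional overrun of actual against estimate (0.25 == 25 percent)."""
    return (actual - estimate) / estimate

def share_exceeding(threshold):
    """Share of projects whose cost OR schedule overrun exceeds threshold."""
    count = sum(
        1 for _, ec, ac, em, am in projects
        if overrun(ec, ac) > threshold or overrun(em, am) > threshold
    )
    return count / len(projects)

def share_within(tolerance):
    """Share of projects with BOTH cost and schedule within tolerance."""
    count = sum(
        1 for _, ec, ac, em, am in projects
        if abs(overrun(ec, ac)) <= tolerance and abs(overrun(em, am)) <= tolerance
    )
    return count / len(projects)

# Baseline: what fraction of current projects overrun estimates by more than 25%?
print(f"Baseline: {share_exceeding(0.25):.0%} of projects exceed estimates by >25%")
# Goal tracking: what fraction finish within 10% of their original estimates?
print(f"Goal measure: {share_within(0.10):.0%} of projects finish within 10%")
```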
To strengthen SSA's software process improvement program, we recommend that, as part of its recently initiated pilot projects, the Commissioner of Social Security direct the Deputy Commissioner for Systems to develop and implement plans that explicitly articulate SSA's strategy and time frames for (1) developing baseline data, (2) identifying specific, measurable goals for its improvement initiative, and (3) monitoring and measuring progress in achieving these goals.

In commenting on a draft of this report, SSA agreed with our recommendation and described actions that it is undertaking to develop a plan for its measurement activities. These actions include obtaining support for its pilot projects from the Gartner Group and working with SEI to formulate a plan that will include (1) tasks and time frames for developing baseline data, (2) measurable goals for the implementation of CMM-compliant processes, and (3) methods for measuring progress against established goals. We are encouraged by SSA's response and will continue to monitor the agency's progress in implementing its software improvement effort.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from its date. At that time, we will send copies to the Commissioner of Social Security; the Director of the Office of Management and Budget; appropriate congressional committees; and other interested parties. Copies will also be made available to others upon request. Please contact me at (202) 512-6253 or by e-mail at willemssenj.aimd@gao.gov if you have any questions concerning this report. Major contributors to this report are listed in appendix II.

Valerie C. Melvin, Assistant Director
Leonard J. Latham, Technical Assistant Director
Michael A. Alexander, Senior Information Systems Analyst
Gwendolyn M. Adelekun, Business Process Analyst
Pursuant to a congressional request, GAO reviewed the status of the Social Security Administration's (SSA) software development process improvement efforts. GAO noted that: (1) SSA has initiated a number of actions to improve its software development capability; (2) among other things, it has: (a) launched a formal software process improvement program and initiated pilot projects to test improved software development processes; (b) acquired the assistance of the Software Engineering Institute (SEI) to help it assess the strengths and weaknesses in its current software development processes and to assist in implementing the improvement program; and (c) established a management steering committee and a software engineering process group within the Office of Systems to oversee software process improvement activities; (3) these are positive steps that should help position SSA to improve its software development capability; (4) although these initiatives are under way, SSA has not yet established key elements of its software process improvement program that are needed to measure the progress and success of its improvement efforts; (5) in particular, SSA has not yet defined specific, measurable goals for its software process improvement program or established the baseline data that it will use to assess its progress in achieving these goals; and (6) without this essential information, SSA cannot be assured of the extent to which its improvement efforts will result in the consistent and cost-effective production of high-quality products.
As the lead federal agency for maritime homeland security within the Department of Homeland Security, the Coast Guard is responsible for homeland and nonhomeland security missions, including ensuring security in ports and waterways and along coastlines, conducting search and rescue missions, interdicting drug shipments and illegal aliens, enforcing fisheries laws, and responding to reports of pollution. The deepwater fleet, which consists of 186 aircraft and 88 cutters of various sizes and capabilities, plays a critical role in all of these missions. As shown in table 1, the fleet includes fixed-wing aircraft, helicopters, and cutters of varying lengths. Some Coast Guard deepwater cutters were built in the 1960s. Notwithstanding extensive overhauls and other upgrades, a number of the cutters are nearing the end of their estimated service lives. Similarly, while a number of the deepwater legacy aircraft have received upgrades in engines, operating systems, and sensor equipment since they were originally built, they too have limitations in their operating capabilities.

In 1996, the Coast Guard began developing what came to be known as the Integrated Deepwater System (IDS) acquisition program as its major effort to replace or modernize these aircraft and cutters. This Deepwater program is designed to replace some assets—such as deteriorating cutters—with new cutters and upgrade other assets—such as some types of helicopters—so they can meet new performance requirements. The Deepwater program represents a unique approach to a major acquisition in that the Coast Guard is relying on a prime contractor—the system integrator—to identify and deliver the assets needed to meet a set of mission requirements the Coast Guard has specified. In 2002, the Coast Guard awarded a contract to Integrated Coast Guard Systems (ICGS) as the system integrator for the Deepwater program. ICGS has two main subcontractors—Lockheed Martin and Northrop Grumman—that in turn contract with other subcontractors.

Rather than using the traditional approach of replacing classes of ships or aircraft through a series of individual acquisitions, the Coast Guard chose to employ a "system of systems" acquisition strategy that would replace its deteriorating deepwater assets with a single, integrated package of new or modernized assets. This system-of-systems approach is designed to provide an improved, integrated system of aircraft, cutters, and unmanned aerial vehicles to be linked effectively through systems that provide command, control, communications, computer, intelligence, surveillance, reconnaissance, and supporting logistics. The Deepwater program's three overarching goals are to maximize operational effectiveness, minimize total ownership cost, and satisfy the customer—the operational commanders, aircraft pilots, cutter crews, maintenance personnel, and others who will use the assets.

The revised Deepwater schedule calls for acquisition of new assets under the program to occur over an approximately 20-year period at an estimated cost of $19 billion to $24 billion. By 2007, for example, the Coast Guard is to receive the first 418-foot National Security Cutter, which will have the capability to conduct military missions related to homeland security. Current plans call for 6 to 8 of these cutters to replace the 12 existing 378-foot cutters.
However, in order to carry out its mission effectively, the Coast Guard will also need to keep all of the legacy assets operational until they can be replaced or upgraded. We have been reviewing the Deepwater program for several years, pointing out difficulties and expressing concern over a number of facets of the program.

In our 2001 report, we identified several areas of risk for Deepwater. First, the Coast Guard faced potential risk in the overall management and day-to-day administration of the contract. At the time, we reported on major challenges, such as developing and implementing plans for establishing effective human capital practices, having key management and oversight processes and procedures in place, and tracking data to measure system integrator performance. In addition, we expressed concerns about the potential lack of competition during the program's later years and the reliance on a single system integrator for procuring the Deepwater assets. We also reported that there was little evidence that the Coast Guard had analyzed whether the approach carried any inherent risks for ensuring the best value to the government and, if so, what to do about them.

We reviewed the program again in 2004 and found many of the same concerns. Specifically, we reported that key components needed to manage the program and oversee the system integrator's performance had not been effectively implemented. The integrated product teams (IPTs), the Coast Guard's primary tool for overseeing the system integrator, were struggling to collaborate effectively and accomplish their missions because of changing membership, understaffing, insufficient training, and inadequate communication among members. Also, the Coast Guard had not adequately addressed the frequent turnover of personnel in the program and the transition from existing assets to those assets that will be part of the Deepwater program moving forward. Further, the Coast Guard's assessment of the system integrator's performance in the first year of the contract lacked rigor, and the factors that formed the basis for the award fee determination were supported only by subjective performance monitor comments and not by quantifiable measures. This resulted in the system integrator receiving a high performance rating and an award fee of $4.0 million out of a maximum of $4.6 million despite documented problems in schedule, performance, cost controls, and contract administration. At the time of our March 2004 report, the Coast Guard had begun to develop models to measure the extent to which Deepwater was achieving operational effectiveness and reduced total ownership cost, but it had not made a decision as to which specific suite of models would be used. Further, Coast Guard officials were not able to project a time frame for when the Coast Guard would be able to hold the contractor accountable for progress toward the goals of maximizing operational effectiveness, minimizing total ownership cost, and satisfying the customer. Furthermore, the Coast Guard had not measured the extent of competition among suppliers of Deepwater assets or held the system integrator accountable for taking steps to achieve competition. The Coast Guard's lack of progress on these issues has contributed to our concerns about the Coast Guard's ability to rely on competition as a means to control future programmatic costs.
Finally, we found that the Coast Guard had not updated the Deepwater integrated acquisition schedule despite numerous changes, making it difficult to determine the degree to which the program was on track with its original plan. In response to these concerns, we made a number of recommendations to improve Deepwater management and oversight of the system integrator. The Coast Guard welcomed our observations, concurred with our recommendations, and has begun to take actions to address them.

Coast Guard condition measures show that the condition of most deepwater legacy assets generally declined between 2000 and 2004, but the available measures are inadequate to capture the full extent of that decline with any precision and are insufficient for determining the impact on mission capabilities. Further, other evidence we gathered, such as information from discussions with maintenance and operations personnel, points to conditions that may be more severe than the available measures indicate. The Coast Guard acknowledges that it needs better condition measures, but it has not yet finalized or implemented such measures. The Coast Guard anticipates having the new measures finalized by the end of 2005.

During fiscal years 2000 through 2004, the Coast Guard's various condition measures showed a general decline, although there were year-to-year fluctuations (see table 2). For deepwater legacy aircraft, a key summary measure of condition—the availability index (the percentage of time aircraft are available to perform their missions)—showed that except for the HU-25 medium-range surveillance aircraft, the assets continued to perform close to or above fleet availability standards over the 5-year period. In contrast, other condition measures for aircraft, such as cost per flight hour and labor hours per flight hour, generally reflected some deterioration. For cutters, a key summary measure of condition—percent of time free of major casualties—fluctuated but generally remained well below target levels. The number of major casualties generally rose from fiscal years 2000 through 2003 and then dropped slightly in fiscal year 2004. (Appendix II provides further details on condition measures for each of the deepwater legacy aircraft and cutters.)

Another, albeit less direct, measure of an asset's condition is deferred maintenance—the amount of scheduled maintenance on an asset that must be postponed in order to pay for unscheduled repairs. Such deferrals can occur when the Coast Guard does not have enough money to absorb unexpected maintenance expenditures and still perform all of its scheduled maintenance, thus creating a backlog. For example, in spring 2004, while on a counterdrug mission, the 210-foot cutter Active experienced problems in the condition of its flight deck that were to be corrected during its scheduled depot-level maintenance. However, because of a shortage of maintenance funds, the maintenance was deferred and the flight deck was not repaired. As a result, the cutter lost all shipboard helicopter capability, significantly degrading mission readiness. As table 3 shows, deferred maintenance does not show a clear pattern across all classes of deepwater legacy assets. For the deepwater legacy aircraft, the overall amount of estimated deferred maintenance increased each year during fiscal years 2002 through 2004, from $12.3 million to about $24.6 million.
However, most of the increase came for one type of asset, the HH-60 helicopter, and the increase came mainly from deferring maintenance past the 48-month interval requirement—thereby increasing the scheduled maintenance workload—and not from having to divert money to deal with unscheduled maintenance. For the deepwater cutters, the amount of estimated deferred maintenance increased from fiscal year 2002 to 2003, but then it dropped significantly in fiscal year 2004. The decrease in fiscal year 2004 came mainly because the Coast Guard received supplemental funding allowing it to address both scheduled and unscheduled maintenance. Thus, the drop in the estimate of deferred maintenance costs for fiscal year 2004 is not necessarily an indicator that the condition of the legacy assets was improving; it could be the result of the Coast Guard having more money to address the maintenance needs.

At the time we began our work, the Coast Guard's measures generated some limited information on the condition of its legacy assets, but the measures were not sufficiently robust to link the assets' declining condition to degradation in mission capabilities or performance. As a result, the picture that emerges regarding the condition of the deepwater legacy assets based on current Coast Guard condition measures should be viewed with some caution. While there is no systematic, quantitative evidence sufficient to demonstrate that deepwater legacy assets are "failing at an unsustainable rate," as the Coast Guard has asserted, this does not mean the assets are in good condition or have been performing their missions safely, reliably, and at levels that meet or exceed Coast Guard standards. We identified two factors that need to be considered to put these condition measures into proper context.

The first factor deals with limitations in the measures themselves. Simply put, the Coast Guard's measures of asset condition do not fully capture the extent of the problems. As such, they may understate the decline in the legacy assets' condition. More specifically, the Coast Guard measures we assessed focus on events, such as equipment casualties or flight mishaps, but do not measure the extent to which these and other incidents degrade mission capabilities. The following is an example in which Coast Guard measures we assessed are not sufficiently robust to systematically capture degradation in mission capabilities: The 378-foot cutter Jarvis recently experienced a failure in one of its two main gas turbines shortly after embarking on a living marine resources and search and rescue mission. While Jarvis was able to accomplish its given mission, albeit at reduced speed, this casualty rendered the cutter unable to respond to any emergency request—none arose in this case—to undertake a mission requiring higher speeds, such as drug interdiction. The Coast Guard condition measures are not robust enough to capture these distinctions in mission capability.

The second factor that needs to be kept in mind is the compelling nature of the other evidence we gathered outside of the Coast Guard's condition measures. This evidence, gleaned from information collected during our site visits and discussions with maintenance personnel, indicated deteriorating and obsolete systems and equipment as a major cause of the reduction in mission capabilities for a number of deepwater legacy aircraft and cutters. Such problems, however, are not captured by the Coast Guard's condition measures.
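The Jarvis example illustrates a measurement gap that is easy to state precisely: an event-based availability index can read high even while specific mission capabilities are lost. The sketch below, using entirely hypothetical equipment statuses and mission requirements, contrasts the two kinds of measures; it is not the Coast Guard's methodology.

```python
# Hypothetical illustration of why an event-based availability measure can
# overstate mission capability. The asset statuses, missions, and equipment-
# to-mission mapping are invented; they mirror the Jarvis scenario, in which
# a cutter remains "available" but cannot perform high-speed missions.

# For each day of a patrol, record which equipment casualties were open.
daily_casualties = [
    set(),                 # day 1: no casualties
    {"gas_turbine_2"},     # days 2-5: one main gas turbine down
    {"gas_turbine_2"},
    {"gas_turbine_2"},
    {"gas_turbine_2"},
]

# Equipment assumed necessary for each mission type (hypothetical).
mission_requirements = {
    "search_and_rescue": {"radar", "small_boat_davit"},
    "fisheries":         {"radar", "small_boat_davit"},
    "drug_interdiction": {"radar", "small_boat_davit", "gas_turbine_2"},
}

# Casualties severe enough to make the cutter "not available" at all
# (hypothetical). A single turbine loss does not deadline the ship.
DEADLINING = {"total_propulsion_failure"}

days = len(daily_casualties)

# Traditional measure: the cutter counts as available on any day it can
# get under way at all (here, every day).
availability = sum(1 for c in daily_casualties if not (c & DEADLINING)) / days

# Mission-capability measure: a day counts only if every required system
# is free of casualties for that mission.
def pct_capable(mission):
    required = mission_requirements[mission]
    ok = sum(1 for c in daily_casualties if not (required & c))
    return ok / days

print(f"Availability index: {availability:.0%}")
for m in mission_requirements:
    print(f"Fully capable for {m}: {pct_capable(m):.0%}")
```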
One example of this involves the HH-65 short-range recovery helicopter. While this helicopter consistently exceeded availability standards established by the Coast Guard over the 5-year period we examined, it is currently operating with engines that have become increasingly subject to power failures, which may render the fleet unable to meet mission requirements. As a result, Coast Guard pilots employ a number of work-arounds, such as dumping fuel or occasionally leaving the rescue swimmer on scene if the load becomes too heavy. Further, because of increasing safety and reliability problems, the Coast Guard has also implemented a number of operational restrictions—such as not allowing the helicopter to land on helipads—to safeguard crew and passengers and prevent mishaps until all of the fleet's engines can be replaced.

The Coast Guard has already undertaken two main types of actions to keep its legacy assets operational: developing a compendium of information for making decisions regarding needed maintenance and upgrades, and performing increasing amounts of maintenance on these assets between deployments. These efforts are likely helping to prevent a more rapid decline in the condition of the assets, but the condition of these assets has nonetheless generally continued to worsen. In response both to the continued decline in the condition of its legacy assets and to various observations we have made to the Coast Guard about its need to develop more objective information on mission capability needs and more precise condition measures, the Coast Guard has begun to undertake additional efforts. These additional efforts include developing a knowledge-based model to provide more objective data on where to best spend budget dollars to achieve the greatest enhancements in mission capabilities, improving the condition measures it uses to more clearly quantify the impact declining conditions have on mission capabilities, and, at the Pacific Area Command, applying new business rules and strategies to better sustain the 378-foot high-endurance cutters through 2016. These ongoing efforts, while promising, are largely untested, and so it is too soon to tell whether they will allow the Coast Guard to better determine and improve the mission capabilities of its legacy assets.

Since 2002, the Coast Guard has annually issued a Systems Integrated Near Term Support Strategy compendium. Among other things, this compendium consolidates information needed to make planning and budgeting decisions regarding maintenance and upgrades to sustain legacy assets. Its purpose is to serve as a tool for senior Coast Guard management in setting priorities and planning budgets. From this strategic document, the Coast Guard has identified a number of upgrades to improve the capabilities of the deepwater legacy aircraft and cutters. The most recent compendium (for fiscal year 2006) lists more than $1 billion worth of upgrades to the deepwater legacy assets. The planned upgrades that have been approved and received initial funding account for an estimated $856 million that the Coast Guard anticipates it will need to complete those projects. The approved upgrades for deepwater legacy assets are shown in table 4. Among the projects already begun is the re-engining of the HH-65 helicopters to address safety and reliability concerns. The Coast Guard is also upgrading several other aviation systems in an effort to improve aircraft capabilities.
Enhancements are also planned for certain classes of deepwater cutters. For example, during fiscal year 2005, the Coast Guard is beginning a maintenance effectiveness project on the 210-foot and 270-foot cutters. This project includes replacing major engineering subsystems with the goal of extending the cutters' service lives until their replacement by the Offshore Patrol Cutter. Of the $856 million total estimated costs needed for the planned upgrades to the deepwater legacy assets listed above, $215 million has been allocated through fiscal year 2005, and the Coast Guard has requested another $217.3 million in its fiscal year 2006 budget. The remaining estimated costs of $423.7 million would have to be funded beyond fiscal year 2006.

Coast Guard personnel consistently reported to us that crew members have to spend increasing amounts of time between missions to prepare for the next deployment. For example, due to the aging main landing gear on the HH-65 helicopter, Coast Guard officials stated that maintenance crews spend extensive time servicing, troubleshooting, and repairing the gear during predeployment maintenance. Comparable accounts were given by personnel working on cutters. For example, officers of the 270-foot cutter Northland told us that because of dated equipment and the deteriorating condition of its piping and other subsystems, crew members have to spend increasing amounts of time and resources while in port to prepare for their next deployment. While we could not verify these increases in time and resources because the Coast Guard does not capture data on these additional maintenance efforts, the need for increasing amounts of maintenance was a message we consistently heard from the operations and maintenance personnel with whom we met. Such efforts are likely helping to prevent a more rapid decline in the condition of these deepwater legacy assets, but it is important to note that even with the increasing amounts of maintenance, these assets are still losing mission capabilities because of deteriorating equipment and system failures. For example, in fiscal year 2004, the 378-foot cutter Chase lost 98 counterdrug mission days because of a number of patrol-ending casualties—including the loss of ability to raise and lower boats and run major electrical equipment—requiring $1.2 million in emergency maintenance. In addition, the 378-foot cutter Hamilton lost 27 counterdrug mission days in the fall of 2004 when it required emergency dry dock maintenance because of hydraulic oil leaking into the reduction gear.

In the past, we have recommended that the Coast Guard develop a long-term strategy to set and assess levels of mission performance. We found this was an important step for the Coast Guard to take because it links legacy asset investments to asset capabilities, mission priorities, and goals so that the Coast Guard can better decide how limited budget dollars should be spent. The Coast Guard has recently begun to apply the principles behind such a strategy to (1) better prioritize the projects needed to upgrade legacy assets that will be part of the Deepwater program and (2) obtain the greatest overall mix of capabilities for its assets within its budget in order to maximize mission performance. The tool it is developing is called the Capital Asset Management Strategy (CAMS). CAMS, once fully implemented, is expected to help the Coast Guard to better manage its assets by linking funding decisions to asset condition.
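The kind of trade-off analysis CAMS is intended to support can be sketched as a simple budget-constrained selection problem. The example below uses hypothetical costs, benefit scores, and a greedy benefit-per-dollar heuristic; it is an illustration of the concept, not the Coast Guard's actual model, whose intended inputs are described next.

```python
# A minimal sketch, with hypothetical costs and capability scores, of the
# budget-constrained trade-off analysis a tool like CAMS is intended to
# support: choose the mix of upgrade projects that yields the most mission
# capability for the dollars available. A simple greedy heuristic stands in
# for the Coast Guard's actual, more elaborate model.

projects = [
    # (project, cost in $ millions, capability benefit score)
    ("HC-130 weather radar replacement",  40.0, 70),
    ("HH-65 sliding cabin door",          15.0, 35),
    ("110-ft patrol boat fin stabilizer", 25.0, 40),
    ("270-ft cutter piping renewal",      60.0, 80),  # invented project
]

budget = 80.0  # $ millions available (hypothetical)

# Rank projects by benefit per dollar, then fund down the list until the
# budget is exhausted.
ranked = sorted(projects, key=lambda p: p[2] / p[1], reverse=True)

funded, spent = [], 0.0
for name, cost, benefit in ranked:
    if spent + cost <= budget:
        funded.append(name)
        spent += cost

print(f"Recommended mix (${spent:.0f}M of ${budget:.0f}M):")
for name in funded:
    print(" -", name)
```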
Unlike the Coast Guard's current compendium, CAMS is designed to provide analyses of the capability trade-offs for upgrades and maintenance projects across asset classes, thereby allowing the Coast Guard to determine which combination of projects will provide the most capability for the dollars invested. For example, when Coast Guard officials are trying to decide among potential project upgrades such as an HC-130 weather radar replacement, an HH-65 sliding cabin door replacement, or a 110-foot patrol boat fin stabilizer replacement, CAMS, once fully implemented, could provide the officials with a recommended mix of project upgrades that would achieve the greatest capability enhancements based on the available budget. CAMS analyses are to be based on legacy asset condition and readiness data, asset retirement and replacement timelines, asset degradation estimates, project production rates, cost data, and mission utility rankings. Mission utility rankings will grade an asset's importance to specific missions, such as search and rescue or counterdrug operations. Rankings may also be assigned to an asset's critical subsystems or may be altered based on an asset's geographic location. For example, a 378-foot cutter may be critical to the success of fisheries patrols in the Pacific but may not be as important for alien/migrant interdiction operations in the Caribbean. However, according to Coast Guard headquarters officials, the Coast Guard remains cautious about employing such a strategy because an investment strategy of this nature could lead to cutters that are no longer multimission capable and are unable to respond to an emergency due to reduced capabilities. In addition, the Coast Guard plans to rank its missions within CAMS based on their relative importance. Each of these elements is to form the basis for recommendations regarding which combination of upgrade and maintenance projects will provide the greatest enhancements to fleet capabilities.

According to Coast Guard staff, CAMS recommendations are not a replacement for the existing budget development process, but rather are to augment and make more consistent the information currently provided to decision makers. Because the recommendations are to be based, in part, on user assumptions, CAMS recommendations are to be reviewed by several internal Coast Guard officials before any final funding requests are made. Further, in order to prevent user "gaming"—making assumptions in such a way as to ensure a positive recommendation or outcome for a particular project—the Coast Guard is developing a series of job aids, manuals, and training courses to ensure data integrity and consistency. Coast Guard officials expect to have CAMS fully implemented by September 2005 and intend to use it while developing the Coast Guard's fiscal year 2008 budget submission. Although it is too soon to assess the effectiveness of CAMS, we view this approach as a good-faith effort toward knowledge-based budgeting for legacy asset sustainment.

At the time we began our work, in August 2004, the majority of the Coast Guard's condition measures were not sufficiently robust to link an asset's condition with its impact on mission capabilities. As we discussed with Coast Guard officials, without such condition measures, the extent and severity of the decline in the existing deepwater legacy assets and their true condition cannot be fully determined.
On the basis of our inquiries and a series of discussions we held with cognizant Coast Guard officials, the Coast Guard has begun developing improved measures to more accurately capture data on the extent to which its deepwater legacy assets are degraded in their mission capabilities. However, because these measures have not been finalized or fully implemented, we were unable to assess their effectiveness. The Coast Guard anticipates having the new measures finalized by the end of 2005. Coast Guard naval engineers told us that they had begun developing a “percent of time fully mission capable” measure to reflect the degree of mission capability, as well as measures to track cutter readiness. As part of this measure, the Coast Guard is developing mission criticality codes, which would rank the degree of importance of each piece of a cutter’s equipment to each possible mission that the cutter could perform. These codes would then be linked to electronic casualty reports for each cutter, which would provide the cutter engineers and operators with information on the impact that the equipment casualties would have on each possible mission. This casualty report/mission criticality linkage will then be factored into the calculation of the percent of time fully mission capable measure for each cutter class and mission type. Coast Guard officials could then review this measure to determine, for example, the degree of capability that its 270-foot medium endurance cutter fleet has to conduct search and rescue missions at any given time. We agree that measures like this are needed—and as soon as possible. According to Coast Guard officials, while the availability index will remain the Coast Guard’s primary measure for aircraft condition and operational readiness, the Coast Guard is working to improve its dispatch reliability index measure, which provides causal information on delayed, aborted, or canceled missions. The Coast Guard can use the dispatch reliability index—in conjunction with data captured by unit-level and depot-level maintenance staff and entered into the Coast Guard’s Electronic Aircraft Logbook and Aviation Logistics Management Information System, respectively—to determine which components and systems are failing most frequently and thus causing degradation in aircraft availability and mission performance. According to Coast Guard officials, data provided from these systems rival the information that will be produced by the cutter community’s proposed percent of time fully mission capable measure. Because the dispatch reliability index measure and the electronic aircraft logbook are relatively new and have only recently been fully implemented Coast Guard-wide, we have not assessed their effectiveness. However, we view these tools as a positive step toward providing Coast Guard decision makers with more detailed information on the primary factors leading to mission degradation. One effort is under way at the Coast Guard’s Pacific Area Command to improve maintenance practices for its 378-foot cutters, which are among the oldest cutters in its fleet. Pacific Area officials have recognized that a different approach to maintaining and sustaining legacy cutters may be needed since they are dependent on 378-foot cutters for meeting missions, such as defense operations and fisheries patrols. 
As a first step, Pacific Area officials have undertaken an initiative applying what they refer to as "new business rules and strategies" to better maintain the 378-foot high-endurance cutters through 2016, when they are scheduled to be fully replaced by National Security Cutters. Under the original Deepwater proposal, the final 378-foot cutter was to be decommissioned in 2013, but by 2005, that date had slipped to 2016. To help keep these cutters running through this date, Pacific Area officials are applying such rules and strategies as (1) ensuring that operations and maintenance staffs work closely together to determine priorities, (2) recognizing that maintaining or enhancing cutter capabilities will involve trade-off determinations, and (3) accepting the proposition that with limited funding not all cutters will be fully capable of performing all types of missions as they near the end of their useful lives. Pacific Area officials believe that in combination, these rules and strategies will result in more cost-effective maintenance and resource allocation decisions—recognizing that difficult decisions will still have to be made to balance maintenance and operations. However, according to Coast Guard headquarters officials, if such strategies are employed, careful planning must occur to avoid placing a cutter in an operational emergency where it is incapable of adequately responding.

One example of the bridging strategies Pacific Area officials are exploring is the development of what they refer to as a "class within a class" approach. Under this strategy, the individual cutters within the 378-foot high-endurance cutter fleet would be designated to perform specific mission types based on an assessment of their condition and mission capabilities. Cutters possessing full mission capabilities could be assigned to the more demanding defensive operations, while cutters in poorer condition and less than fully capable would be assigned to less demanding missions, such as fisheries enforcement. According to Pacific Area officials, this strategy is designed to more effectively spend the maintenance funds available for the 378-foot cutters, since current funding levels make it very difficult for Pacific Area to maintain all 10 of its 378-foot cutters as fully mission capable. Pacific Area Command's new initiative has the potential for assisting the Coast Guard in making more informed choices regarding the best use of its resources, but according to Pacific Area officials, the approach will likely require that the Coast Guard allocate additional maintenance funds. Further, because the approach has not been fully implemented, it is too soon to tell whether it will provide the results intended. Coast Guard headquarters officials stated that before such a strategy can be implemented, further analysis is required, including (1) determining the estimated savings associated with creating multiple 378-foot cutter classes; (2) analyzing other cost-saving concepts, such as decommissioning cutters or rotating crews; (3) obtaining further information on the effect on Coast Guard mission readiness; and (4) assessing the operational risk associated with operating cutters that are no longer multimission capable. Coast Guard headquarters officials further stated that they are exploring the possibility of increasing the funds available for operating expenses for the 378-foot high-endurance cutters in fiscal year 2007.
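The "class within a class" designation described above can be illustrated with a small sketch. All cutter names, capability scores, and thresholds below are invented; the point is only to show how condition assessments could drive mission designations, not how the Coast Guard would actually set them.

```python
# Hypothetical sketch of a "class within a class" designation: assign each
# 378-foot cutter to mission types according to an assessed capability
# score. Names, scores, and thresholds are invented for illustration.

cutters = {
    "Cutter A": 0.95,  # assessed fraction of subsystems fully mission capable
    "Cutter B": 0.90,
    "Cutter C": 0.72,
    "Cutter D": 0.60,
}

# Missions ordered from most to least demanding, with the minimum
# capability score assumed necessary to take them on.
mission_thresholds = [
    ("defense operations",    0.90),
    ("drug interdiction",     0.75),
    ("fisheries enforcement", 0.00),  # least demanding; any cutter qualifies
]

def designate(score):
    """Return the most demanding mission type a cutter qualifies for."""
    for mission, minimum in mission_thresholds:
        if score >= minimum:
            return mission

for name, score in cutters.items():
    print(f"{name} (capability {score:.0%}): {designate(score)}")
```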
In its fiscal year 2006 budget request, the Administration requested $966 million for the Deepwater program—$242 million more than Congress appropriated for the program in fiscal year 2005. This request reflects significant revisions to the Deepwater program's requirements, capabilities, and schedule necessitated by the Coast Guard's new homeland security mission. Recently, the House Appropriations Committee recommended $500 million for the Deepwater program, $466 million less than the Administration requested. The committee cited the path the program has taken and the lack of information provided to Congress as the primary reasons for this recommendation. Specifically, the committee did not believe that the Coast Guard's revised implementation plan provided enough programmatic information, such as asset delivery timelines and funding projections for each year through the program's completion. In late May 2005, the Coast Guard submitted documentation to the committee in response to the committee's request. In June 2005, the Senate Appropriations Committee expressed concern about the lack of information concerning the Deepwater plan in the fiscal year 2006 budget request but recommended funding of $905.6 million for the program for fiscal year 2006. As of early July 2005, the fiscal year 2006 appropriation for the Deepwater program was still pending, and so the funding level is still not known.

Since the inception of the Deepwater program, we have expressed concerns about the degree of risk in the acquisition approach and the Coast Guard's ability to manage and oversee the program. In 2004 we reported that, well into the contract's second year, key components needed to manage the program and oversee the system integrator's performance had not been effectively implemented. We also reported that the degree to which the program was on track could not be determined because the Coast Guard was not updating its schedule. We detailed improvements needed in a number of areas, shown in table 5. These concerns have a direct bearing on any consideration to increase the program's pace. Because the Coast Guard was having difficulty managing the Deepwater program at the pace it had anticipated, increasing the pace by expediting the acquisitions would only complicate the problem.

The Coast Guard agreed with nearly all of our recommendations and has made progress in implementing them. Specifically, the Coast Guard has fully addressed three of the recommendations and has actions under way on others. However, in light of continuing management challenges, it will likely take some time for the Coast Guard to fully address the remaining recommendations. We have seen mixed success in the Coast Guard's efforts to improve management of the program and contractor oversight. Three of the four areas of concern—improving integrated product teams (IPTs), ensuring adequate staff for the program, and planning for human capital requirements for field units receiving new assets—have yet to be fully addressed. Although the Deepwater program has made some efforts to improve the effectiveness of IPTs, we continue to see evidence that more improvements are needed—such as greater coordination—for the teams to effectively do their jobs.
These teams, the Coast Guard's primary tool for managing the program and overseeing the contractor, are generally chaired by a subcontractor representative and consist of members from subcontractors and the Coast Guard. The teams are responsible for overall program planning and management, asset integration, and overseeing delivery of specific Deepwater assets. Since our March 2004 report, the teams have been restructured, and 20 teams have charters setting forth their purpose, authority, and performance goals. In addition, new entry-level training is being provided to team members. Despite this progress, however, the needed changes are not yet sufficiently in place. A recent assessment by the Coast Guard of the system integrator's performance found that roles and responsibilities in some teams continue to be unclear. Decision making is to a large extent stovepiped, and some teams still lack adequate authority to make decisions within their realm of responsibility. One source of difficulty for some team members has been the fact that each of the two major subcontractors has used its own databases and processes to manage different segments of the program. Decisions on air assets are made by Lockheed Martin, while decisions regarding surface assets are made by Northrop Grumman. This approach can lessen the likelihood that a system-of-systems outcome will be achieved if decisions affecting the entire program are made without the full consultation of all parties involved. Deepwater program officials told us that more attention is being paid to taking a system-wide approach and that the Coast Guard has emphasized the need to ensure that the two major subcontractors integrate their management systems. We will continue to monitor the Coast Guard's progress in implementing this recommendation during our pending review of the revised Deepwater plan.

The Coast Guard has taken steps to more fully staff the Deepwater program, with mixed effects. In February 2005, the Deepwater program executive officer approved a revised human capital plan. The plan emphasizes workforce planning, including determining needed knowledge, skills, and abilities and developing ways to leverage institutional knowledge as staff rotate out of the program. This analysis is intended to help determine what gaps exist between needed skills and existing skills and to develop a plan to bridge these gaps. The Coast Guard has also taken some short-term steps to improve Deepwater program staffing, hiring contractors to assist with program support functions, shifting some positions from military to civilian to mitigate turnover risk, and identifying hard-to-fill positions and developing recruitment plans specifically for them. Finally, the Deepwater program and the Coast Guard's acquisition branch have begun using an automated system for forecasting military rotation cycles, a step Deepwater officials believe will help with long-range strategic workforce planning and analysis. Despite these actions, however, vacancies remain in the program, and some measures that may have highlighted the need for more stability in the program's staff have been removed from the new human capital plan. As of January 2005, 244 positions were assigned to the program, with 206 of these filled, resulting in a 16 percent vacancy rate. A year ago, 209 staff were assigned to the program. Further, the new human capital plan removes a performance goal that measured the percentage of billets filled at any given time.
Coast Guard officials stated that the prior plan's goal of a 95 percent or higher fill rate was unduly optimistic and was a poor measure of the Coast Guard's ability to meet its hiring goals. For example, billets for military personnel who plan to rotate into the program in the summer are created at the beginning of the budget year, leading the measure to count those positions as vacant from the beginning of the budget year until summer. Other performance measures that were included in the prior plan to measure progress in human capital issues have also been removed. For example, to help ensure that incoming personnel received acquisition training and on-the-job training, a billet was included in the prior plan to serve as a floating training position that replacement personnel could use for a year before the departure of military incumbents. The Coast Guard did not fund this position, and the new plan removes the billet. According to the Coast Guard, these measures were removed because the revised Deepwater plan focuses on the long-range strategic human capital issues associated with the execution of the acquisition over the entire period, whereas the prior plan had a short-term operational focus. We will continue to monitor the Coast Guard's progress in implementing this recommendation during our pending review of the revised Deepwater plan.

The Coast Guard recognizes the critical need to inform the operators who are to use the Deepwater assets of progress in the program, and officials stated that on the basis of our recommendations, they have made a number of improvements in this area. A November 2004 analysis of the Deepwater program's communication process, conducted in coordination with the National Graduate School, found that the communication and feedback processes were inadequate. Emphasis has now been placed on outreach to field personnel, with a multipronged approach involving customer surveys, face-to-face meetings, and presentations. We have not yet evaluated the effectiveness of the new approach.

Human capital requirements for the Deepwater program—such as crew numbers and schedules, training, and support personnel—will have an increasing impact on the program's ability to meet its goals as the pace at which assets are delivered to field units picks up. Recent assessments by Coast Guard performance monitors show this to be an area of concern. Coast Guard officials have expressed concern about whether the system integrator is appropriately considering human capital in systems engineering decisions. The system integrator is required to develop a workforce management plan for Deepwater, as well as "human factors engineering" plans for each Deepwater asset and for the overall system of systems. The Coast Guard rejected the contractor's workforce management plan and several of the proposed human factors engineering plans as being inadequate. The rejections were due, in part, to the lack of an established and integrated system-level engineering approach showing how issues relating to the human capabilities and limitations of the personnel who will actually operate the system will be addressed. One performance monitor noted that as of late 2004, requirements for staffing and training of maintenance facilities and organizations had yet to be determined. According to the Coast Guard, emphasis by the system integrator on addressing human capital considerations is necessary to ensure that Deepwater goals are met, especially as they pertain to operational effectiveness and total ownership cost.
We will continue to monitor the Coast Guard's progress in implementing this recommendation during our pending review of the revised Deepwater plan. The Coast Guard has recently undertaken efforts to update the original 2002 Deepwater acquisition schedule—an action that we suggested in our June 2004 report. The original schedule had milestone dates showing when work on an asset would begin and when delivery would be expected, as well as the integrated schedules of critical linkages between assets, but we found that the Coast Guard was not maintaining an updated and integrated version of the schedule. As a result, the Coast Guard could not demonstrate whether individual components and assets were being integrated and delivered on schedule and in critical sequence. As recently as October 2004, Deepwater performance monitors likewise expressed concern that the Coast Guard lacked adequate visibility into the program's status and that lack of visibility into the schedules for component-level items prevented reliable forecasting and risk analysis. The Coast Guard has since taken steps to update the outdated schedule and has indicated that it plans to continue to update the schedule each month for internal management purposes, and semiannually to support its budget planning efforts. We think this is an important step toward improving the Coast Guard's management of the program because it provides a more tangible picture of progress, as well as a baseline for holding contractors accountable. We will continue our oversight of the Coast Guard to ensure progress is made and to monitor how risks are mitigated.

We have seen progress in terms of the rigor with which the Coast Guard is periodically assessing the system integrator's performance, but concerns remain about the broader issues of accountability for achieving the overarching goals of minimizing total ownership cost and maximizing operational effectiveness. Improvements continue to be made to the criteria for assessing the system integrator's performance. In March 2004, we reported that the process for assessing performance against specific contract tasks lacked rigor. The criteria for doing so have since been revised to distinguish more clearly between those that are objective (that is, measured through automated tools against established measures) and those that are subjective (that is, the narrative comments of Coast Guard performance monitors). Weights have been assigned to each set of evaluation factors, and the Coast Guard continues to refine the distribution of the weights to reach an appropriate balance between automated results and the firsthand observations of the performance monitors. Coast Guard officials told us that they have also provided additional guidance and training to performance monitors. We found that efforts have been made to improve the consistency of the format used for their input in assessments of the system integrator's performance. Coast Guard officials said that they are continuing to make improvements to ensure that performance monitors' relevant observations are appropriately considered in making award fee determinations. It is important to note that although performance monitor comments are considered subjective, they are valuable inputs to assessing the system integrator's performance, particularly when they are tied to measurable outcomes. According to Coast Guard officials, the Coast Guard will continue to refine the award fee factors as the program progresses.
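How weighted objective and subjective factors might roll up into an award fee can be sketched simply. In the illustration below, the factors, weights, scores, and objective/subjective split are all hypothetical; only the $4.6 million maximum fee figure comes from the first-year award fee discussed earlier in this report. The sketch also shows how a perfect automated score can pull the composite up even where monitors report problems.

```python
# Hypothetical sketch of combining weighted objective (automated) and
# subjective (performance monitor) evaluation factors into an award-fee
# determination. Factors, weights, scores, and the objective/subjective
# split are invented; the Coast Guard's actual criteria differ and
# continue to be refined.

factors = [
    # (factor, weight, objective score 0-1, subjective score 0-1)
    ("schedule management",     0.30, 1.00, 0.60),
    ("cost control",            0.30, 0.85, 0.70),
    ("IPT effectiveness",       0.20, 1.00, 0.50),
    ("contract administration", 0.20, 0.90, 0.80),
]

OBJECTIVE_SHARE = 0.6   # weight given to automated measures (hypothetical)
SUBJECTIVE_SHARE = 0.4  # weight given to monitor assessments (hypothetical)

composite = sum(
    weight * (OBJECTIVE_SHARE * obj + SUBJECTIVE_SHARE * subj)
    for _, weight, obj, subj in factors
)

max_fee = 4.6  # maximum award fee pool in $ millions (first-year figure)
print(f"Composite performance score: {composite:.2f}")
print(f"Indicated award fee: ${composite * max_fee:.1f}M of ${max_fee:.1f}M")
```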
In some cases, we noted that the performance monitors’ assessments differed vastly from the results of automated, data-driven assessments. For example, while schedule management is discussed in the Coast Guard’s midterm assessment of the system integrator’s performance as a major area of challenge and risk, the objective measure showed 100 percent compliance in this area. Another measure assesses the extent to which integrated product teams consider the impact of their decisions on the overall cost and effectiveness of the Deepwater program. Performance monitors reported that because system-level guidance had not been provided to the teams responsible for specific assets, they had a limited ability to see the whole picture and understand the impact of decisions on total ownership cost and operational effectiveness. However, the automated measure again showed 100 percent compliance. Coast Guard officials said that, in some cases, the data-driven measures do not accurately reflect the contractor’s performance. We will continue to monitor changes to the Coast Guard’s measures for assessing the system integrator’s performance. Changes have been made to the award fee measures that place additional emphasis on the system integrator’s responsibility for making integrated product teams effective. Award fee criteria now incorporate specific aspects of how the integrator is managing the program, including administration, management commitment, collaboration, training, and empowerment of these teams. However, as discussed above, concerns remain about whether the teams are effectively accomplishing their goals. While the Coast Guard has developed models to measure the system integrator’s performance in operational effectiveness and total ownership cost, concrete results have not yet emerged. Minimizing total ownership cost and maximizing operational effectiveness are two of the overarching goals of the Deepwater program. The system integrator’s performance in these two areas will be a critical piece of information when the Coast Guard makes a decision about whether to award the contractor the first contract option period of 5 years. Initial decision making is to start in June 2006. With regard to the operational effectiveness of the program, measuring the system integrator’s impact has yielded limited results to date because few of the new assets are operational. The Coast Guard has developed modeling capabilities to simulate the effect of the new capabilities on its ability to meet its missions. However, until additional assets become operational, progress toward this goal will be difficult to determine. With regard to total ownership cost, the Coast Guard does not plan to implement our recommendation, despite concurring with it at the time of our March 2004 report. The Coast Guard has not adhered to its original plan, set forth in the Deepwater program management plan, of establishing as its baseline a cost not to exceed the dollar value of replacing the assets under a traditional approach (e.g., on an asset-by-asset basis rather than a system-of-systems approach). In addition to providing for greater synergies between air, sea, sensor, and communications assets and equipment, the system-of-systems approach was to yield cost savings when compared with a traditional acquisition approach. 
Although the Coast Guard initially established a cost baseline consistent with the program management plan’s approach, it has not updated that baseline to reflect changes made to the system integrator’s cost estimate, and therefore the baseline is not being used to evaluate the contractor’s progress in holding down total ownership cost. As a result, the cost baseline being used to measure total ownership cost is not the Coast Guard’s, but rather is the system integrator’s own cost estimate. As we reported in March 2004, we believe that measuring the system integrator’s cost growth compared with its own cost proposal will tell the government nothing about whether it is gaining efficiencies by turning to the system-of-systems concept rather than the traditional asset-by-asset approach. Although the Deepwater program has undergone a number of alterations since the contract was awarded in 2002, the Coast Guard has not studied whether the system-of-systems approach is still more cost-effective than a traditional acquisition approach. Thus, the Coast Guard will lack this information as it prepares to decide whether to award the first contract option beginning in June 2006. Coast Guard officials stated that the contract total ownership cost and operational effectiveness baseline is adjusted based on approved decision memorandums from the Agency Acquisition Executive, the Vice Commandant of the Coast Guard. Such memorandums were originally approved by the program executive officer on a case-by-case basis. As we reported in March 2004, establishing a solid baseline against which to measure progress in lowering total ownership cost is critical to holding the contractor accountable. The Coast Guard reported that it is taking steps to address our recommendations concerning cost control through competition among second-tier suppliers and notification of “make” decisions. It should be noted, though, that we have not yet assessed the effectiveness of the following actions. Competition among second-tier suppliers. Coast Guard officials told us that in making the decision about whether to award the first contract option, the government will specifically examine the system integrator’s ability to control costs by assessing the degree to which competition is fostered at the major subcontractor level. The evaluation will consider the subcontractors’ project management structure and processes to control costs, as well as how market surveys of similar assets and major subsystems are implemented. The Coast Guard is focusing its attention on those areas that were priced after the initial competition for the Deepwater contract was completed, such as the HH-65 re-engining and the C-130J missionization. For example, a new process implemented for the C-130J missionization requires competition in subcontracting and government approval of all subcontracts exceeding $2 million, enabling the Coast Guard to monitor the integrator’s competition efforts. Notification of make decisions. According to the Federal Acquisition Regulation, the prime contractor is responsible for managing contract performance, including planning, placing, and administering subcontracts as necessary to ensure the lowest overall cost and technical risk to the government. 
The Federal Acquisition Regulation further provides that when “make-or-buy programs” are required, the government may reserve the right to review and agree on the contractor’s make-or-buy program when necessary to ensure negotiation of reasonable contract prices, among other things. We recommended that the Coast Guard be notified of make-or-buy decisions over $5 million in order to facilitate controlling costs through competition. We suggested the $5 million threshold because Lockheed Martin, one of the major subcontractors, uses that amount as its cutoff for designating suppliers as major. The Coast Guard has asked the system integrator, on a voluntary basis, to provide notification 1 week in advance of a make decision of $10 million or more based on the criteria in the make-or-buy program provisions of the Federal Acquisition Regulation. According to Coast Guard officials, no make decision has exceeded $10 million since the request was made. The details of implementing this recommendation have not yet been worked out, such as specifically who in the Coast Guard will monitor the subcontractors’ make decisions to ensure that the voluntary agreement is complied with. We will continue to monitor the Coast Guard’s progress in implementing this recommendation during our pending review of the revised Deepwater plan. Our work suggests the costly and important Deepwater program will need constant monitoring and management attention to successfully accomplish its goals. In this respect, we identified three points that should be kept in mind in considering how to proceed with the program. First, the need to replace or upgrade deteriorating legacy assets is considerable. While the Coast Guard currently lacks measures that clearly demonstrate how this deterioration affects its ability to perform deepwater-related missions, it is clear that the deepwater legacy assets are insufficient for the task. As the Coast Guard continues to develop condition measures that are more robust and able to link the assets’ condition with mission capabilities, and as it further develops and implements its Capital Asset Management Strategy (CAMS), it will be in a better position to make more informed decisions regarding where its budget should be spent to maximize the capabilities of its legacy assets as the Coast Guard transitions to the Integrated Deepwater System. Second, there are signs that as the Deepwater program moves ahead, the Coast Guard will continue to report more problems with sustaining existing assets, together with the attendant need for additional infusions of funding to deal with them. Some of these problems, such as those on the 378-foot cutters, are included in the compendium the Coast Guard uses to set sustainment priorities and plan budgets, but the Coast Guard has not allocated funds because the problems pertain to assets that are among the first to be replaced. However, projects to address these problems are likely to be needed. While the Coast Guard is moving to improve the information it uses to set budget priorities through development of CAMS, the strategy has not yet been implemented, and therefore it is too soon to tell how effective it will be. We will continue to work with the Coast Guard to monitor its progress on CAMS as a means for ensuring that there is a more systematic and comprehensive approach to keeping Congress abreast of the potential bill for sustaining these assets. 
Third, although the need to replace and upgrade assets is strong, there are still major risks in the Coast Guard’s acquisition approach. The cost increases and schedule slippages that have already occurred are warning signs. While the Coast Guard has initiated actions to address problems we have raised involving system integration, cost and schedule management, and integrated product teams, we remain concerned that the program still carries major risks. We will continue to work with the Coast Guard to determine how best to manage these risks so that the Deepwater missions can be accomplished in the most cost-effective way. We requested comments on a draft of this report from the Department of Homeland Security and the U.S. Coast Guard. The U.S. Coast Guard provided technical comments, which have been incorporated into the report where appropriate. We are providing copies of this report to the Secretary of the Department of Homeland Security, the Commandant of the U.S. Coast Guard, and interested congressional committees. The report will also be made available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. For information about this report, please contact me at (415) 904-2200, or wrightsonm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Individuals making key contributions to this report are listed in appendix III. This report examines the condition of the U.S. Coast Guard’s Deepwater legacy assets and the acquisition management challenges the Coast Guard faces. Our work focused on three key questions: (1) How has the condition of the Coast Guard’s deepwater legacy assets changed during fiscal years 2000 through 2004? (2) What actions has the Coast Guard taken to maintain, upgrade, and better manage its deepwater legacy assets? (3) What are the management challenges the Coast Guard faces in acquiring new assets, especially if a more aggressive schedule is adopted? In assessing how the condition of the deepwater legacy assets has changed during fiscal years 2000 through 2004, we analyzed what Coast Guard officials told us were the best available condition measures. For deepwater aircraft, we obtained concurrence from Coast Guard Office of Aeronautical Engineering officials that the appropriate measures to use for aircraft condition included the availability index (percentage of time aircraft were available to complete missions), cost per flight hour, labor hours per flight hour, programmed flight hours per year, scheduled versus unscheduled maintenance expenditures, and estimated deferred maintenance. For cutters, we obtained concurrence from the Office of Naval Engineering and the Office of Cutter Forces that the appropriate measures to use for cutter condition were the number of major (category 3 and 4) casualties, the percent of time free of major casualties, scheduled versus unscheduled maintenance, and estimated deferred maintenance. We also reviewed data on mishaps and the dispatch reliability index for aircraft, and lost cutter days and unscheduled maintenance days for cutters, but we did not use these measures because the data were either not relevant to our analysis, incomplete, not available for the entire time period covered by our review, or not sufficiently reliable for our purposes. 
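The aircraft condition measures named above are straightforward ratios. The sketch below shows how they might be computed from annual fleet records; the record fields and all figures are hypothetical, not actual Coast Guard data.

```python
# Minimal sketch of the aircraft condition measures described above.
# Field names and all figures are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class FleetYear:
    hours_available: float        # aircraft-hours available for missions
    hours_possible: float         # total aircraft-hours in the period
    flight_hours: float           # hours actually flown
    maintenance_labor_hours: float
    maintenance_cost: float       # dollars

def availability_index(y: FleetYear) -> float:
    """Percentage of time aircraft were available to complete missions."""
    return 100.0 * y.hours_available / y.hours_possible

def labor_hours_per_flight_hour(y: FleetYear) -> float:
    return y.maintenance_labor_hours / y.flight_hours

def cost_per_flight_hour(y: FleetYear) -> float:
    return y.maintenance_cost / y.flight_hours

fy2004 = FleetYear(hours_available=150_000, hours_possible=220_000,
                   flight_hours=20_000, maintenance_labor_hours=300_000,
                   maintenance_cost=50_000_000)

print(f"Availability index: {availability_index(fy2004):.1f}%")                   # 68.2%
print(f"Labor hours per flight hour: {labor_hours_per_flight_hour(fy2004):.1f}")  # 15.0
print(f"Cost per flight hour: ${cost_per_flight_hour(fy2004):,.0f}")              # $2,500
```

The remaining aircraft measures, such as scheduled versus unscheduled maintenance expenditures and deferred maintenance, are presumably simple sums over the same kinds of records.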
We supplemented our analyses of these measures with documentation from relevant reports and studies, as well as from interviews of asset program managers and crews for each type of deepwater legacy aircraft and cutter. For aircraft, we collected data from the Aircraft Repair and Supply Center in Elizabeth City, North Carolina; and visited selected air stations in Kodiak, Alaska; Miami, Florida; and Elizabeth City, North Carolina, to provide coverage of each of the four types of Deepwater aircraft—the HC-130 and HU-25 fixed-wing aircraft and the HH-60 and HH-65 rotary-wing aircraft. For cutters, we collected data at the Maintenance and Logistics Commands in Alameda, California; and Norfolk, Virginia; and visited selected Coast Guard facilities in Kodiak, Alaska; Portsmouth, Virginia; and Miami, Florida, to provide coverage of each of the three types of Deepwater vessels—high-endurance cutters, medium-endurance cutters, and patrol boats. We also reviewed Coast Guard policies and standards, including the Coast Guard Cutter Employment Standards, Coast Guard Aircraft Employment Standards for Days Employed Aboard Ship and Days Away from Home Station, and the Coast Guard Environmental Health and Safety Manual. In addition, to assess the reliability of the Coast Guard’s data and condition measures, we questioned knowledgeable officials and reviewed existing documentation about the data and the systems that produced the data. On the basis of our assessments, we determined that the data were sufficiently reliable for the purposes of this report. To determine the actions that the Coast Guard has undertaken to maintain, upgrade, and better manage its deepwater legacy assets, we reviewed documentation such as the Systems Integrated Near Term Support Strategy (SINTSS), which is a compendium of information on proposed asset sustainment projects, and spoke with various Coast Guard program officials from the Offices of Naval and Aeronautical Engineering, as well as the Atlantic Area Maintenance and Logistics Command, regarding the need to perform increasing maintenance on assets between deployments. To determine additional efforts that the Coast Guard plans to undertake to better manage these assets, we met with Coast Guard officials from the Office of Naval Engineering to discuss the development of measures that the Coast Guard hopes will more accurately measure the impact that the declining condition of its legacy assets has on mission capabilities and reviewed documentation relevant to these measures. We also reviewed plans and guidance for the newly developed Capital Asset Management Strategy (CAMS), which the Coast Guard intends to use in establishing priorities for determining which Deepwater asset maintenance and sustainment projects to fund. In addition, we met with officials at the Pacific Area Command and Maintenance and Logistics Command to discuss their fleet sustainment initiative for keeping the high-endurance cutters operational until their replacement by the National Security Cutter. To determine what management challenges the Coast Guard faces in acquiring new assets, we followed up on actions the Coast Guard has taken to implement the 11 recommendations in our report Contract Management: Coast Guard’s Deepwater Program Needs Increased Attention to Management and Contractor Oversight (GAO-04-380), of March 9, 2004; and the 1 recommendation from our report Coast Guard: Deepwater Program Acquisition Schedule Update Needed (GAO-04-695), of June 14, 2004. 
We received briefings and held several meetings with the Deepwater Program Executive Officer, the Deputy Program Executive Officer, and a number of Deepwater staff, including contracting officers. We also held a discussion with representatives of the system integrator to get their views on progress made in implementing the recommendations. We analyzed documentation supporting the Coast Guard’s midterm assessment of the contractor’s system integration and management performance in the third year of the contract, including written comments by the performance monitors. We also held discussions with Deepwater program performance monitors. We recently began an assessment of the third year of performance. However, we were not able to thoroughly review the documentation in time to include our observations in this report. We reviewed information on Deepwater integrated product teams, including membership lists and briefings provided by the Coast Guard on measures of effectiveness for the teams. We analyzed the Coast Guard’s plans to increase communications to field operators, including its August 2004 Integrated Deepwater Systems Internal Communications Plan, and received a briefing on how the plan is being implemented. We compared the September 2003 Deepwater Human Capital Plan with the February 2005 revised plan to identify changes that had been made and discussed Deepwater program office staffing numbers and plans with Coast Guard officials. Finally, we discussed with Coast Guard officials steps the Coast Guard has taken to hold the system integrator accountable for “make” versus “buy” decisions by the two major subcontractors and reviewed a January 2005 letter on this subject from the Director of Subcontracts for Integrated Coast Guard Systems to the subcontractors. We performed our review from July 2004 to June 2005 in accordance with generally accepted government auditing standards at various Coast Guard offices and facilities. The HC-130 is a long-range, fixed-wing, multimission aircraft used for search and rescue, drug interdiction, alien and migrant interdiction, living marine resources, and defense readiness and logistics missions. Manufactured by Lockheed Martin Aero, the HC-130 aircraft entered Coast Guard service beginning in 1972. There are currently 27 HC-130 aircraft within the Coast Guard. The estimated service life is approximately 30 years or 40,000 flight hours. The HC-130 fleet’s performance data show that while there was a decline in fiscal year 2004, fleet availability has steadily improved since 2000 and remains near the Coast Guard’s 71 percent availability standard. Similarly, the number of labor hours per flight hour remained fairly stable from fiscal year 2000 to 2003 but increased slightly in fiscal year 2004. Programmed flight hours have also remained reasonably stable, with some year-to-year fluctuations after a decline in fiscal year 2001. These performance measures are summarized in table 6. The HC-130 fleet’s maintenance costs have generally increased during fiscal years 2000 through 2004. Overall, the fleet’s cost per flight hour and scheduled maintenance expenditures have risen, driven by an increase in the scope of depot-level maintenance to improve the fleet’s material condition. Also, depot-level maintenance schedule delays have led to a backlog, thereby increasing the amount of fleet deferred maintenance, as shown in table 7. According to Coast Guard officials, there is an urgent need to replace the HC-130’s APS-137 surface search radar system. 
The radar system—part of the aircraft’s original configuration—is subject to frequent failures and is quickly becoming unsupportable, according to Coast Guard officials. Replacement parts are very difficult to locate. HC-130 flight crews will work around any failures, but without the system they are reduced to looking out windows for targets, greatly reducing mission capabilities for search and rescue, alien-migrant interdiction, and drug interdiction missions. In the conference report accompanying the Coast Guard’s fiscal year 2005 appropriation, the conferees directed $9 million for the radar system. Total system replacement is estimated to cost $78 million and is to be completed in fiscal year 2008. The Coast Guard has identified several additional HC-130 sustainment projects in its latest Systems Integrated Near Term Support Strategy. Included in these projects are an avionics modernization and a related wing-rewiring project. According to the Coast Guard, the HC-130’s avionics suite utilizes 1960s technology that is costly to maintain and will soon be unsupportable because of a lack of spare and repair parts. This cockpit modernization project is aimed at enabling the HC-130 aircraft to better support maritime safety and security and national defense and logistics missions. The Coast Guard estimates this project will cost $305 million and take 4 years to complete. The wing-rewiring project is designed to provide more power to an upgraded avionics suite and to ward off potential safety issues due to deteriorating wiring, such as electrical shorts and fire. Coast Guard officials estimate the project will cost nearly $11 million and will take 5 years to complete. Five of the 27 operational HC-130s have recently been placed under operational restrictions at the request of the aircraft’s manufacturer, Lockheed Martin Aero, because of a problem associated with the aircraft’s center wing box. The restrictions include limitations on weight, airspeeds, maneuvering, and mission endurance. As of early June 2005, the Coast Guard was awaiting the release of inspection criteria from Lockheed Martin. Nevertheless, the Coast Guard estimates that the inspections will cost $2 million for the 5 aircraft. This problem is not limited to Coast Guard aircraft, but is affecting HC-130s worldwide. The remaining Coast Guard HC-130s are not subject to the operational restrictions but will likely have to undergo similar limitations and inspections beginning in fiscal year 2006. The HU-25 is a medium-range, fixed-wing, multimission aircraft used for search and rescue, drug interdiction, alien/migrant interdiction, fisheries law enforcement, defense readiness, and essential logistics missions. Manufactured by Falcon Jet, the HU-25 entered the Coast Guard aviation fleet in 1982. The Coast Guard’s fleet contains 23 aircraft. The Coast Guard maintained a fleet of 26 operational HU-25 aircraft in fiscal year 2000 but reduced the fleet because of budgetary constraints in fiscal year 2002. The original estimated service life was 20 years or 20,000 flights (landings) or 30,000 flight hours. The HU-25 fleet’s programmed flight hours have fluctuated during fiscal years 2000 through 2004 with changes in fleet size. In fiscal year 2004, the fleet flew 86 percent of the fiscal year 2001 programmed flight hours with 29 percent fewer aircraft. 
Moreover, the fleet’s availability index has generally improved during fiscal years 2000 through 2003, in large part because of the enhanced reliability of the HU-25’s ATF-3 engine. Though it consistently remained below the Coast Guard’s 71 percent availability standard, it has improved from fiscal year 2000 levels. The fleet’s labor hours per flight hour have also remained fairly consistent since fleet reduction. Table 8 provides a summary of the HU-25’s key performance measures. The maintenance measures for the HU-25 show varied results. During fiscal years 2000 through 2004, the fleet’s cost per flight hour has generally declined, scheduled and unscheduled maintenance expenditures fluctuated, and the amount of deferred maintenance dropped significantly. Table 9 provides a summary of the key maintenance measures. According to Coast Guard officials, the HU-25’s Honeywell ATF-3 engines are complex, have been historically unreliable, and are time-consuming to maintain—requiring as long as 4 days to re-install a repaired engine. Some of the engine problems have been mitigated by improvements in sensor capabilities that allow the aircraft to fly at higher altitudes for longer periods of time during surveillance missions. Flying at higher altitudes reduces the amount of saltwater introduced into the engines, thereby reducing corrosion and placing less stress on the engines. According to Coast Guard officials, this has contributed to increasing engine reliability and improvements in HU-25 fleet availability. The sensors on the six HU-25D models were recently upgraded at a cost of $43 million in acquisition, capital, and investment (AC&I) funding. Five of the six upgraded HU-25D aircraft are stationed at air station Miami. According to the air station’s commanding officer, the upgraded sensors, while critical to mission success, also have a relatively high rate of inoperability. Sensor inoperability is a function of the aircraft’s poor air conditioning system. When the cabin becomes too warm, the sensors fail. Air conditioning system and sensor failure does not present a safety of flight issue but does degrade mission capability. According to Coast Guard officials this problem is limited to the HU-25D models. The HH-60 helicopter is used for ports, waterways, and coastal security; drug interdiction; alien/migrant interdiction; defense readiness; search and rescue; ice operations; living marine resources; and marine environmental protection missions. Manufactured by Sikorsky Aircraft Corporation, the HH-60 entered into the Coast Guard fleet in 1990. The Coast Guard has a total of 41 HH-60 aircraft. The original estimated service life was approximately 20 years. The HH-60’s deteriorating subsystems, such as the avionics suite, are requiring increasing amounts of maintenance and thereby reducing fleet performance. Nevertheless, the HH-60 fleet has maintained a relatively high availability level, remaining close to or exceeding the Coast Guard’s 71 percent availability standard since fiscal year 2000. The fleet’s number of programmed flight hours has experienced some year-to-year fluctuations but has been relatively stable. At the same time, increasing subsystem failures are requiring more unit-level maintenance, as reflected by the fleet’s general rise in the number of labor hours per flight hour. Further, Coast Guard officials have told us that flight crews and maintenance personnel have to work harder and longer to maintain the fleet’s high availability levels. 
Table 10 provides a summary of the key performance measures. In constant dollars, the HH-60 fleet’s estimated scheduled and unscheduled maintenance expenditures generally trended downward during fiscal years 2000 through 2004, while the cost per flight hour has fluctuated. In contrast, the amount of HH-60 deferred maintenance incurred by the Coast Guard has nearly doubled since fiscal year 2000. HH-60 fleet product line managers attribute this increase to budget constraints and an expansion in the scope of the HH-60 overhauls without a corresponding increase in the number of maintenance personnel. Table 11 provides a summary of the key maintenance measures. According to HH-60 flight crews and maintenance staff, the reliability of the aircraft’s 1970s era avionics system is steadily declining. The system’s increasing failure rate is directly affecting the HH-60 fleet’s mission capabilities, as avionics system failures are occurring every 11 flight hours, on average. Further, according to the Coast Guard, HH-60 avionics repair vendors will phase out system component repairs in fiscal year 2007. For these reasons, the Coast Guard has implemented an HH-60 avionics upgrade to replace the current system with a state-of-the-art open architecture system that Coast Guard officials claim will meet the future needs of HH-60 missions. The Coast Guard estimates that this program will cost about $84 million and will be completed in fiscal year 2010. The Coast Guard has allocated $30.8 million through fiscal year 2005 for the program. The Coast Guard has developed a service life extension program for the HH-60 fleet to upgrade structural components such as beams, fittings, and frames, and will increase depot-level maintenance production to nine aircraft per year. According to the Coast Guard, the program will extend the service life of the HH-60 fleet through 2022. The HH-65 is a twin-engine, short-range recovery helicopter used for ports, waterways, and coastal security; drug interdiction; alien-migrant interdiction; defense readiness; search and rescue; ice operations; and marine environmental missions. The HH-65 entered Coast Guard service beginning in 1984. The helicopter’s airframe is manufactured by Eurocopter, and most HH-65s are equipped with Honeywell-manufactured LTS-101-750 engines. However, these engines are currently being replaced (see details below). The Coast Guard maintains 95 aircraft in the fleet. The original estimated service life for the HH-65 aircraft was 20 years, but according to Coast Guard aviation staff, the engine replacement program should extend the service life beyond that estimate. Despite safety and reliability concerns related to its engines, the HH-65 fleet has consistently maintained an availability level above the 71 percent Coast Guard standard during fiscal years 2000 through 2004. Moreover, the number of fleet programmed flight hours has steadily increased since fiscal year 2000. The fleet’s labor hours per flight hour have remained stable since fiscal year 2001. However, it should be noted that the number of fleet mishaps, particularly engine-related mishaps, increased sharply in 2004, primarily because of the engine and engine control system’s poor reliability. Table 12 provides a summary of the key performance measures. The HH-65 fleet has sustained a comparatively high level of availability even though maintenance data show that the fleet has had challenges related to poor engine performance. 
Fleet cost per flight hour steadily increased during fiscal years 2000 through 2004. The Coast Guard has not deferred any maintenance for the HH-65 fleet from fiscal year 2000 through 2004. Table 13 provides a summary of the key maintenance measures. The increasing trend in the number and seriousness of safety-related HH-65 incidents prompted a Coast Guard decision in January 2004 to replace the existing engines and the associated engine control systems within 24 months. However, the Coast Guard now anticipates that the re-engining of all 84 operational HH-65 helicopters will take until February 2007. Total program costs are estimated to be nearly $350 million, or about $3.7 million per helicopter. As of June 7, 2005, 5 HH-65 aircraft have been successfully re-engined, 14 are under production at the Coast Guard’s Aircraft Repair and Supply Center, and an additional aircraft is under production at American Eurocopter’s facility in Columbus, Mississippi. Upon completion of this test case, the Coast Guard will determine if the American Eurocopter facility is suitable to serve as the site for a second re-engining production line. According to the Coast Guard, the HH-65 was selected for conversion to the Deepwater program’s multimission cutter helicopter (MCH) beginning in fiscal year 2007. As such, the converted HH-65 helicopters will be part of the Deepwater program moving forward. There are several steps constituting the full MCH conversion, of which the current HH-65 re-engining program is one element. Other elements include the replacement of the HH-65’s landing gear and tail rotors. The HH-65’s new engine should allow the helicopter to support an increase in maximum gross weight. However, the current landing gear cannot support such an increase. The current tail rotors also need to be replaced because the product manufacturer is discontinuing production of the rotors, though supplies on hand should last until May 2005. Other elements of the MCH conversion, such as an upgrade of the avionics, will increase the aircraft’s service life and capabilities. These conversion elements are scheduled for integration beginning in fiscal year 2007. The 378-foot cutters are the largest cutters in the deepwater fleet, with a crew size of 19 officers and 147 enlisted personnel. The Coast Guard has 12 of the 378-foot cutters in its deepwater fleet, with 10 of these stationed in the Pacific Area Command and the remaining 2 in the Atlantic Area Command. The 378-foot cutters typically operate 185 days away from home port per year. The 378-foot cutters are used in a number of missions, such as defense operations; maritime security/law enforcement; search and rescue; living marine resources; ports, waterway, and coastal security; alien-migrant interdiction; and drug interdiction. These cutters were commissioned by the Coast Guard from 1967 to 1972 and have an estimated service life of about 40 years, affected in part by the Fleet Rehabilitation and Modernization (FRAM) program, which is discussed in further detail below. The 378-foot cutters are considered by the Coast Guard to be generally deteriorating in condition, and this assertion is supported by the Coast Guard’s data measures. Major casualties per cutter have increased from fiscal year 2000 through 2004, and the percent of time free (POTF) of major casualties has fluctuated, but it has remained well below the target of 72 percent. Table 14 provides a summary of the key performance measures. 
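The percent of time free (POTF) of major casualties measure used for the cutter classes that follow can be expressed as a simple calculation over casualty records. The sketch below illustrates the idea with invented dates; it is not the Coast Guard's actual computation.

```python
# Hypothetical sketch of the percent-of-time-free (POTF) of major
# casualties measure for a cutter. All dates and casualties are invented.

from datetime import date

def potf(period_start: date, period_end: date, casualty_days: set) -> float:
    """Percent of days in the period with no open major (category 3/4) casualty."""
    total_days = (period_end - period_start).days
    return 100.0 * (total_days - len(casualty_days)) / total_days

# Days on which at least one major casualty was open (illustrative only).
days_with_major_casualty = {date(2004, m, d)
                            for m, d in [(1, 10), (1, 11), (1, 12),
                                         (6, 3), (6, 4), (9, 20)]}

score = potf(date(2004, 1, 1), date(2005, 1, 1), days_with_major_casualty)
print(f"POTF: {score:.1f}% (target: 72%)")  # 98.4% in this toy example
```

A fleetwide figure would presumably aggregate such day counts across every cutter in the class, which is why sustained equipment problems on even a few cutters can hold a class well below the 72 percent target.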
Both scheduled and unscheduled maintenance expenditures for the 378-foot cutters have been on a general upward trend during fiscal years 2000 through 2004, with some fluctuations. The increasing age of these cutters, along with equipment obsolescence, appears to be driving these costs. Table 15 provides a summary of the key maintenance measures. The average age of the 378-foot cutters is 36.3 years. Each 378-foot cutter underwent the FRAM at approximately 20 years of age, beginning in the late 1980s and ending in 1992. As part of the FRAM, each cutter received an overhaul, costing anywhere from $70 million to $90 million, that Pacific Area Command officials estimated would add about 15 additional years of service—a mark that many of the cutters are beginning to reach. Many major propulsion and hull systems, however, were merely overhauled but not upgraded or replaced, and these systems are now at or near the end of their useful service life. In addition, the Coast Guard regularly compiles a list of the top 10 maintenance issues affecting each cutter class. The most recent top 10 list has identified service boilers, the gyrocompass navigation system, and propulsion shafting and shaft bearings, among other things, as the most critical sustainment issues for the 378-foot cutters. The Coast Guard’s 270-foot cutter fleet consists of 13 cutters, all of which are stationed in the Atlantic Area Command. These cutters were commissioned between 1983 and 1991, have an estimated service life of 30 years, and operate with a crew of 13 officers and 85 enlisted personnel. The 270-foot cutters typically operate 185 days away from home port each year and are used for maritime security/law enforcement; search and rescue; living marine resources; ports, waterways, and coastal security; alien-migrant interdiction; drug interdiction; and defense missions. Officials at Coast Guard headquarters stated that the condition of the 270-foot medium-endurance cutters is generally worsening, and key condition measures seem to bear this out, though there were some improvements in fiscal year 2004. Major casualties per cutter increased sharply from fiscal year 2000 to 2001, remained fairly steady during fiscal years 2002 and 2003, and then decreased in fiscal year 2004. The POTF of major casualties fluctuated during fiscal years 2000 through 2004 but remained well below the POTF target rate of 72 percent. Table 16 provides a summary of the key performance measures. Scheduled maintenance expenditures fluctuated for the 270-foot medium-endurance cutters from fiscal years 2000 to 2004, with a major increase in fiscal year 2003. Coast Guard officials attribute this increase in expenditures to the age and poor structural condition of the cutters, the replacement of obsolete equipment, and upgrades. The increased cutter maintenance that occurred in fiscal year 2003 was funded by supplemental appropriations. Unscheduled maintenance expenditures fluctuated only slightly for the 270-foot cutters during fiscal years 2000 through 2004. Table 17 provides a summary of the key maintenance measures. The average age of the 270-foot cutters is 18.0 years. During fiscal year 2005, the Coast Guard began a Mission Effectiveness Project (MEP) on the medium-endurance cutters (270-foot and 210-foot) in order to extend their service lives. The MEP includes replacement of the major systems, such as evaporators and gyrocompasses, as well as other auxiliary systems. 
The first 270-foot cutter entered the MEP in May 2005 at a cost of $7.5 million, funded from the Deepwater program’s acquisition, construction, and improvement account. Overall, the 270-foot cutter MEP is projected to cost $193.5 million, and the work will extend 10 years, into fiscal year 2015. In addition, regularly scheduled maintenance should continue to address the principal maintenance problems for the 270-foot cutters as identified in the top 10 list, including the air conditioning and refrigeration systems, the main propulsion control and monitoring system, and the auxiliary saltwater and sewage piping systems. The Coast Guard’s 210-foot cutter fleet consists of 14 cutters, 11 of which are stationed in the Atlantic Area Command, with the remaining 3 based in the Pacific Area Command. These cutters were commissioned between 1964 and 1969, have an estimated service life of 43 to 49 years, and operate with a crew of 12 officers and 63 enlisted personnel. The 210-foot cutters typically operate 185 days away from home port each year, during which time they perform missions such as maritime security/law enforcement; search and rescue; living marine resources; ports, waterways, and coastal security; alien-migrant interdiction; and drug interdiction. Officials at Coast Guard headquarters stated that the condition of the 210-foot medium-endurance cutters is generally worsening, and key condition measures seem to bear this out, though there were some improvements in fiscal year 2004. Major casualties per cutter increased sharply from fiscal year 2000 to 2001, increased more modestly during fiscal years 2002 and 2003, and then decreased in fiscal year 2004. The POTF of major casualties has generally declined for the 210-foot cutters during fiscal years 2000 through 2004, and consistently remained well below the POTF target rate of 72 percent. Table 18 provides a summary of the key performance measures. Scheduled maintenance expenditures fluctuated for the 210-foot medium-endurance cutters from fiscal years 2000 to 2004, with a major increase in fiscal year 2003. Coast Guard officials attribute this increase in expenditures to the age and poor structural condition of the cutters, the replacement of obsolete equipment, and upgrades. The increased cutter maintenance that occurred in fiscal year 2003 was funded by supplemental appropriations. Unscheduled maintenance expenditures fluctuated only slightly during fiscal years 2000 through 2004. Table 19 provides a summary of the key maintenance measures. The average age of the 210-foot cutters is 38.3 years. The first 210-foot cutter will enter the MEP beginning in September 2005 at a projected cost of $5 million, funded from the Deepwater program’s acquisition, construction, and improvement account. Overall, the 210-foot cutter MEP is projected to cost a total of $98.5 million, and the work will extend into fiscal year 2009. In addition, regularly scheduled maintenance should continue to address the principal maintenance problems for the 210-foot cutters as identified in the top 10 list, such as the air conditioning system, refrigeration system, oily water separators, and supportability of the emergency diesel generators. Overall, there are currently 49 patrol boats in the Coast Guard Deepwater fleet. Of these, 41 are 110 feet long, with 29 of those stationed in the Atlantic Area Command and the remaining 12 stationed in the Pacific Area Command. 
Six of the Atlantic Area Command’s 110-foot patrol boats are currently serving in the Persian Gulf. These 110-foot patrol boats were acquired between 1986 and 1992, have estimated service lives of 14 to 20 years, and operate with a crew of 2 officers and 14 enlisted personnel. The patrol boats generally operate at 1,800 hours per year. The 110-foot patrol boats are used in a variety of missions, such as defense operations; maritime security/law enforcement; search and rescue; living marine resources; ports, waterways, and coastal security; alien-migrant interdiction; and drug interdiction. The remaining 8 patrol boats either have undergone or are undergoing conversion into 123-foot patrol boats. These patrol boats are to be stationed in the Atlantic Area Command and, like the 110-foot patrol boats, are to operate with a crew of 2 officers and 14 enlisted personnel. The 123-foot patrol boats are slated to perform the same missions as the 110-foot patrol boats but will generally have the capability to operate 2,500 hours per year. The first converted 123-foot patrol boat (Matagorda) became operational in February 2005, and as of early June 2005, 4 additional 123-foot patrol boats are operational, with restrictions. During fiscal years 2000 through 2004, the 110-foot patrol boats experienced many problems, especially hull corrosion issues, which worsened their condition. However, the Coast Guard began addressing the hull condition issues (see details below), which likely contributed to the decreases in major casualties in fiscal years 2003 and 2004. Table 20 provides a summary of the key performance measures. Scheduled and unscheduled maintenance expenditures saw large increases in fiscal years 2002 and 2003. These increases appear to be closely related to increased major casualties, deteriorating hull conditions, and an increase in operational tempo. In addition, supplemental appropriations funded increased cutter maintenance during this time period. Table 21 provides a summary of the key maintenance measures. The average age of the patrol boats is 16.4 years. A number of the 110-foot patrol boats have experienced significant hull deterioration. To combat these corrosion problems and add other capabilities to the 110-foot patrol boats, the Coast Guard developed the Hull Sustainment Project (HSP) and the 123-foot patrol boat conversion program. The HSP was implemented to replace all deteriorated hull plates and structural members. The selected patrol boats were gutted, sandblasted, and thoroughly inspected, and all metal wasted beyond 15 percent was renewed. As of early June 2005, 9 of the original 49 110-foot patrol boats had completed the HSP. The Coast Guard believes that all remaining 110-foot patrol boats that have not had their hulls strengthened or replaced will eventually require such work. The Coast Guard is currently preparing a business case analysis in order to use $49.2 million in fiscal year 2005 supplemental appropriations for a 110-foot patrol boat MEP. This project would include hull sustainment work. In addition to the HSP, 8 patrol boats deemed to be among those in the worst condition were placed in the 123-foot conversion program. The Coast Guard had the option to place an additional 4 patrol boats (for a total of 12) in the 123-foot conversion program but has decided not to exercise this option. This program was implemented to renew the deteriorated hull structure and to add capability. 
Among the expected capability improvements are: enhanced and improved command, control, communications, computer, intelligence, surveillance, and reconnaissance capabilities; stern launch/recovery capability for the Short Range Prosecutor; renovation of some berthing areas, including relocation of aft berthing to a location forward and nonadjacent to the engine room; and renovation of the pilot house to include a 360-degree bridge. As of early June 2005, 7 of the 8 patrol boats have completed the conversion, and 5 converted patrol boats are operational (Matagorda, Metompkin, Padre, Attu, and Vashon), although all are currently under operating restrictions. The first patrol boat to come out of the conversion process, the Matagorda, was delivered to the Coast Guard in March 2004 but experienced a number of problems that prevented it from becoming operational until February 2005. Specifically, upon delivery, the Coast Guard identified several discrepancies with the original performance specifications. One such discrepancy was the inability of the patrol boat to simultaneously launch or recover the short-range prosecutor while towing another vessel. In September 2004, Matagorda experienced hull buckling, and repairs were completed in December 2004. However, while en route from the shipyard to Key West, Florida, Matagorda encountered a storm, causing damage to the primary radar system and new cracks in the hull. These problems were resolved, and Matagorda began patrols in early February 2005. Additionally, the contractor that performed the work is applying lessons learned from the Matagorda conversion to the other patrol boats still undergoing conversion. Further, the contractor has increased the number of quality assurance personnel from one to four to improve oversight of the conversion process. The Coast Guard top 10 list mentions several maintenance concerns, in addition to hull corrosion, that have negatively affected the condition of the 110-foot patrol boats. These include difficulties in obtaining parts for the fin stabilizer system, steering spaces holding moisture (which leads to rust and corrosion), and exhaust piping leaks. In addition, the Coast Guard has stated that mechanical and electrical subsystems, even on the newly converted 123-foot patrol boats, need to be upgraded or replaced if the patrol boats are to operate for another 10 to 15 years.
Margaret Wrightson, Director, (415) 904-2200.
Steven Calvo, Christopher Conrad, Adam Couvillion, Michele Fejfar, Geoffrey Hamilton, Julie Leetch, Michele Mackin, Stan Stenersen, and Linda Kay Willard.
Coast Guard’s Acquisition Management: Deepwater Project’s Justification and Affordability Need to Be Addressed More Thoroughly, GAO/RCED-99-6 (Washington, D.C.: Oct. 26, 1998).
Coast Guard: Budget Challenges for 2001 and Beyond, GAO/T-RCED-00-103 (Washington, D.C.: Mar. 15, 2000).
Coast Guard: Progress Being Made on Deepwater Project, but Risks Remain, GAO-01-564 (Washington, D.C.: May 2, 2001).
Coast Guard: Actions Needed to Mitigate Deepwater Project Risks, GAO-01-659T (Washington, D.C.: May 3, 2001).
Coast Guard: Strategy Needed for Setting and Monitoring Levels of Effort for All Missions, GAO-03-155 (Washington, D.C.: Nov. 12, 2002).
Coast Guard: Comprehensive Blueprint Needed to Balance and Monitor Resource Use and Measure Performance for All Missions, GAO-03-544T (Washington, D.C.: Mar. 12, 2003).
Contract Management: Coast Guard’s Deepwater Program Needs Increased Attention to Management and Contractor Oversight, GAO-04-380 (Washington, D.C.: Mar. 9, 2004).
Coast Guard: Replacement of HH-65 Helicopter Engine, GAO-04-595 (Washington, D.C.: Mar. 24, 2004).
Coast Guard: Key Management and Budget Challenges for Fiscal Year 2005 and Beyond, GAO-04-636T (Washington, D.C.: Apr. 7, 2004).
Coast Guard: Deepwater Program Acquisition Schedule Update Needed, GAO-04-695 (Washington, D.C.: June 14, 2004).
Coast Guard: Observations and Agency Priorities in Fiscal Year 2006 Budget Request, GAO-05-364T (Washington, D.C.: Mar. 17, 2005).
Coast Guard: Preliminary Observations on the Condition of Deepwater Legacy Assets and Acquisition Management Challenges, GAO-05-307T (Washington, D.C.: Apr. 20, 2005).
Coast Guard: Preliminary Observations on the Condition of Deepwater Legacy Assets and Acquisition Management Challenges, GAO-05-651T (Washington, D.C.: June 21, 2005).
The Coast Guard has been asserting that its deepwater legacy assets are "failing at an unsustainable rate." After the events of September 11, 2001, the Coast Guard's deepwater missions expanded to include a greater emphasis on ports, waterways, and coastal security. These heightened responsibilities required changes to the Deepwater implementation plan to provide the assets with greater operational capabilities. To address these needs, in 2002, the Coast Guard began a multiyear acquisition program to replace or modernize its deepwater assets that is currently estimated to cost $19 to $24 billion. More recently, it began studying options for replacing or modernizing the assets more rapidly in an effort to avoid some of the costs that might be involved in keeping aging assets running for longer periods. This report addresses three questions related to this effort: (1) How has the condition of the Coast Guard's deepwater legacy assets changed during fiscal years 2000 through 2004? (2) What actions has the Coast Guard taken to maintain, upgrade, and better manage its deepwater legacy assets? and (3) What are the management challenges the Coast Guard faces in acquiring new assets, especially if a more aggressive acquisition schedule is adopted? Available Coast Guard condition measures indicate that the condition of most Coast Guard legacy aircraft and cutters generally declined during fiscal years 2000-2004, but these measures are inadequate to capture the full extent of the decline in the condition with any precision. GAO's field visits and interviews with Coast Guard staff, as well as reviews of other evidence, showed significant problems in a variety of asset systems and equipment that are not currently captured in the Coast Guard's condition measures. The Coast Guard has already taken actions to help keep its deepwater legacy assets operational. For example, to help meet mission requirements, Coast Guard staff are performing more extensive maintenance between deployments, but even so, aircraft and cutters continue to lose mission capabilities. Responding to these continued concerns, as well as to matters raised during this review and in prior GAO reports, the Coast Guard has begun to explore additional strategies and approaches to better determine and improve the mission capabilities of its legacy assets. These actions include (1) developing a more proactive approach for prioritizing maintenance and capability enhancement projects needed on its legacy assets; (2) developing measures that more clearly demonstrate the extent to which assets' conditions affect mission capabilities; and (3) for one command, proposing a new strategy to sustain one of its oldest classes of cutters. These ongoing efforts, while promising, are too new to allow GAO to assess whether they will allow the Coast Guard to better determine and improve the mission capabilities of its legacy assets. If the Coast Guard adopts a more aggressive acquisition schedule, it will likely continue to face a number of challenges to effectively manage the Deepwater program. GAO has warned that the Coast Guard's acquisition strategy of relying on a prime contractor ("system integrator") to identify and deliver the assets needed carries substantial risks. GAO found that well into the contract's second year, key components for managing the program and overseeing the system integrator's performance had not been effectively implemented. 
While the Coast Guard has been addressing these problems—for example, putting more emphasis on competition as a means to control costs—many areas have not been fully resolved. A more aggressive acquisition schedule would only heighten the risks.
Employer-sponsored health coverage is the leading source of health coverage in the United States. In 2010, 59 percent of Americans under age 65 received health coverage through employer-sponsored group health plans, and an additional 7 percent received coverage through health coverage purchased directly from health insurers in the individual market. Employers may provide coverage either by purchasing coverage from a health insurer (fully insured plans) or by funding their own health coverage (self-insured plans). Small employers typically offer fully insured plans, while large employers are more likely to be self-insured. Small employers are also less likely to offer their employees health coverage compared to large employers, citing the cost of coverage as a key reason. Additionally, firms with more high-wage workers are more likely to offer coverage to their employees than those with more low-wage workers. Rates of employer-sponsored health coverage have declined in the last decade—from 68 percent in 2001 to 60 percent in 2011. Most of this decline occurred by 2005 and was driven primarily by a decline in the number of very small employers with three to nine employees offering health coverage. In addition, employee participation in employer-sponsored coverage has also decreased—from 70 percent in 2001 to 65 percent in 2011, in part because of a decline in employee eligibility for the coverage. Further, employees’ share of the cost of coverage is increasing faster than employers’ share—the employee contribution to the average annual premium for family coverage increased 131 percent from 2001 to 2011 compared to a 108 percent increase in the employer contribution for the same time period. PPACA contains a number of provisions that may affect whether employers offer health coverage. 
These provisions include an “individual mandate,” or the requirement that individuals—subject to certain exceptions—obtain minimum essential health coverage or pay a tax penalty starting in 2014; the establishment of health insurance exchanges in 2014—essentially, health insurance marketplaces in which individuals and small businesses can compare, select, and purchase health coverage from among participating carriers; health insurance market reforms, including a requirement that prevents health plans and insurers in the individual and small group markets from denying coverage or charging higher premiums because of pre-existing conditions or medical history, and that limits the extent to which premiums may vary; premium subsidies—which provide sliding scale tax credits starting in 2014 to limit premium costs for individuals and families with incomes up to 400 percent of the federal poverty level—for purchasing individual coverage through an exchange; penalties for certain large employers that do not offer qualified health coverage and have at least one full-time employee receiving a subsidy (in the form of a premium tax credit or cost-sharing reduction) in a plan offered through an exchange starting in 2014, or for certain large employers that provide access to coverage but do not meet certain requirements for affordability; tax credits for certain small businesses toward a share of their employee health coverage beginning in 2010; a 40 percent excise tax on certain employer-sponsored health plans whose costs exceed a certain threshold in 2018; and a state Medicaid expansion effective in 2014 for individuals who are under 65 years old, have incomes at or below 133 percent of the federal poverty level, and meet other specified criteria. Researchers have used various types of studies to predict the effect of PPACA on employer-sponsored health insurance, including microsimulation models, other analytic approaches, and employer surveys. Microsimulation models—commonly used statistical models—generally use published survey data to construct a base data set of individuals, families, and employers, and then attempt to predict responses to public policy changes by drawing from the best available evidence in health economics literature and, in some cases, existing empirical evidence from related or smaller-scale policy changes (such as prior changes in Medicaid eligibility and state insurance reform efforts). The models systematically estimate the combined effect of multiple provisions in legislation, such as PPACA, based on this previous research and empirical data. For example, with respect to PPACA, models can provide an estimate of employer-sponsored coverage that considers both the number of employers that may discontinue offering coverage and the number that may begin to offer coverage. Models can also incorporate into their analyses estimates of the number of employees who may take up or refuse offers of such coverage. Model limitations include their dependence upon multiple types of data from multiple sources of varying quality and that they must rely on many assumptions. 
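To give a rough sense of how a microsimulation proceeds, the toy sketch below applies assumed behavioral probabilities to a synthetic firm population. The firm shares, offer rates, and response probabilities are invented for illustration and are not drawn from any of the models discussed in this report.

```python
# Toy sketch of a microsimulation of employer coverage decisions.
# All population shares and behavioral probabilities are invented;
# real models calibrate them from survey data and the economics literature.

import random

random.seed(0)  # reproducible toy run

def simulate(n_firms=10_000):
    drops, new_offers = 0, 0
    for _ in range(n_firms):
        small = random.random() < 0.8                        # assumed small-firm share
        offers = random.random() < (0.5 if small else 0.95)  # assumed offer rates
        if offers and small and random.random() < 0.05:
            drops += 1        # assumed chance a small offering firm drops coverage
        elif not offers and small and random.random() < 0.04:
            new_offers += 1   # assumed chance a small firm newly offers coverage
    return drops, new_offers

drops, new_offers = simulate()
print(f"Firms dropping coverage: {drops}")
print(f"Firms newly offering coverage: {new_offers}")
print(f"Net change in offering firms: {new_offers - drops}")
```

A full model would also simulate employee take-up and transitions to exchange or Medicaid coverage and would calibrate each probability to evidence, which is one reason the net estimates discussed below can differ in sign across models.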
A further limitation is that the impact of past policy changes may not necessarily be predictive of the impact of future changes, and there is little information available with which to assess the validity of the models’ projections. Studies we reviewed that used other analytic approaches to model behavior in response to policy changes varied in their methods, ranging from a cost-benefit comparison to an analysis that used survey data and economic theory to predict behavior. Employer surveys have also been cited to illustrate the potential impact of PPACA on employer health benefits. Unlike microsimulation models, surveys have the advantage of reflecting the actual, current perspectives of employers, and they can also assess how employers’ behavior may be affected by the actions of other employers of similar size and industry. However, they have limitations as a predictive tool. For example, most surveys relating to PPACA asked respondents about employers’ likelihood of dropping coverage, rather than the likelihood of newly offering coverage as a result of PPACA or the number of employees that may take up or refuse such coverage. Thus, they may not illustrate the net effect of PPACA on employer-sponsored coverage. Further, the validity of their results may be limited by the knowledge of survey respondents. Experts have noted that employer surveys tend to be answered by human resource officials with varying levels of knowledge about PPACA. In addition, researchers note that survey responses do not require careful analysis or extensive deliberation and have no consequences for the responders. Therefore, surveys are more limited in their ability to systematically assess the combined effect of multiple PPACA provisions—that is, they cannot ensure that respondents consider (or have the ability to consider) all of the relevant provisions when deciding how to respond. Moreover, the results of the sample of employers surveyed may not always be generalizable to all employers, depending on the number of respondents and other aspects of the survey methodology. Microsimulation studies generally predicted little change in employer-sponsored health coverage in the near term, but results of studies using other analytic approaches and employer surveys varied more widely. Few studies provided longer-term predictions of the prevalence of employer-sponsored coverage, and those that did so expressed uncertainty about their estimates. Microsimulation studies that examined the effect of the individual mandate estimated that more people would have employer-sponsored coverage with the mandate in place compared to without the mandate. Among the five microsimulation studies we reviewed, estimates of PPACA’s net effect on changes in the rates of employer-sponsored coverage ranged in the near term from a decrease of 2.5 percent to an increase of 2.7 percent in the number of individuals with coverage. In particular, three projected an increase in the number of individuals with coverage. The Centers for Medicare & Medicaid Services (CMS) estimated a net increase of about 0.1 percent (200,000 individuals), and the studies by the RAND Corporation (RAND) and the Urban Institute/Robert Wood Johnson Foundation (RWJF) each projected a net increase of 2.7 percent, affecting about 4 million individuals. The remaining two studies projected a decrease: the Congressional Budget Office (CBO) projected a 2.5 percent net decrease affecting about 4 million individuals, while The Lewin Group projected a net decrease of 1.6 percent affecting about 2 million individuals.
(See fig. 1.) Two of the studies also indicated that the majority of individuals who lose employer-sponsored coverage would transition to other sources of coverage. For example, the RAND study indicated that of the 6.5 million individuals it projected to lose employer-sponsored coverage after implementation of PPACA, 1.9 million would enroll in individual coverage through an exchange and 3.5 million would enroll in Medicaid. The remaining 1.1 million individuals would become uninsured. Estimates from the three studies we reviewed that used other analytic approaches varied more widely than those from the microsimulation models. Two of the three studies predicted small near-term changes in the number of individuals with employer-sponsored coverage. One of the studies, published by the Employment Policies Institute (EPI), used a modeling approach that predicted behavioral responses of all workers in a nationally representative sample to three main provisions of PPACA. This study projected a net increase of about 6 percent, or 4 million, in the number of individuals with employer-sponsored coverage. Another study, by Booz & Company Inc., used a combination of surveys, interviews, focus groups, and modeling to project a net decrease of 2 to 3 percent, or from 3 million to 4 million individuals. The third study, conducted by the American Action Forum, used a decision-making model based on cost-benefit comparisons to project a larger decrease of up to 35 million in the number of people with employer-sponsored coverage (a stylized sketch of this type of drop-versus-keep comparison appears after this discussion). However, this study did not consider whether employers may newly offer coverage or estimate the number of individuals that would be newly covered as a result. Employer surveys varied widely in their estimates of employers’ responses to PPACA. Sixteen of the 19 surveys we reviewed reported estimates of employers dropping coverage for employees in general (rather than only for certain types of employees). Among these 16 surveys, 11 indicated that 10 percent or fewer of employers were likely to drop coverage in the near term, and 5 indicated that from 11 to 20 percent were likely to drop coverage in the near term. The estimates ranged from 2 to 20 percent across these 16 surveys. (See table 1.) Because these surveys were typically of employers currently offering coverage, most did not reflect the number of employers that may be likely to begin offering coverage under PPACA. Among the 6 surveys that also provided a “somewhat likely” response option, a higher proportion of employers indicated that they were “somewhat likely” to drop coverage. Of these surveys, 2 (the National Federation of Independent Business (NFIB) and Towers Watson) indicated that 10 percent or fewer of employers were “somewhat likely” to drop coverage, 2 surveys (Willis and Mercer) indicated that 11 to 20 percent of employers had such plans, and the remaining 2 surveys (McKinsey & Co. (McKinsey) and PricewaterhouseCoopers) indicated that over 20 percent had such plans. In addition, two surveys asked respondents how their decisions to drop or offer coverage may be affected by other employers’ actions. In one survey, 78 percent of employers indicated that they were planning to follow the lead of other employers. In the other survey, 25 percent of employers indicated that it would have a “major impact” on their decision if “one or a few large, bellwether employers” or one of their major competitors dropped coverage for a majority or all of their employees.
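Before turning to further survey results, the following is a stylized sketch of the kind of drop-versus-keep cost comparison performed in decision-making models like the one described above. The function and all dollar figures are hypothetical illustrations, not values from the American Action Forum study; a fuller comparison would also weigh the tax treatment of premiums, nondiscrimination rules, and the individual mandate, factors that, as discussed later, critics noted the study did not capture.

```python
def drop_coverage_is_cheaper(premium_cost, penalty_per_worker, wage_offset, workers):
    """Stylized employer trade-off: keep offering coverage, or drop it, pay a
    per-worker penalty, and raise wages to compensate workers for the lost benefit.
    All inputs are hypothetical assumptions used only for illustration."""
    keep_cost = premium_cost * workers
    drop_cost = (penalty_per_worker + wage_offset) * workers
    return drop_cost < keep_cost

# Assumed figures: a $10,000 premium vs. a $2,000 penalty plus a $6,000 wage
# increase for each of 100 workers. A lower-wage workforce needs a smaller
# offset because subsidized exchange coverage is available to it.
print(drop_coverage_is_cheaper(10_000, 2_000, 6_000, 100))   # True: dropping is cheaper

# For higher-income workers, no exchange subsidy is available and the tax
# exclusion on premiums is lost, so the required wage offset is larger and
# the comparison can flip.
print(drop_coverage_is_cheaper(10_000, 2_000, 11_000, 100))  # False: keeping is cheaper
```

The two calls illustrate why such a comparison can point toward dropping coverage for low-income employees while pointing the other way for higher-income employees.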
Three of the 16 surveys also examined employer plans to newly offer coverage as a result of PPACA and indicated that from 1 to 28 percent of employers were likely to do so. The NFIB survey indicated that about 1 percent of the employers surveyed were likely to begin offering coverage as a result of PPACA; the McKinsey survey indicated that 13 percent of employers with 2 to 49 employees, and 14 percent of employers with 50 to 499 employees, were likely to begin offering coverage. In addition, the Kaiser Family Foundation/Health Research & Educational Trust survey, which examined employer plans to only newly offer (but not drop) coverage, indicated that 15 percent of small employers (fewer than 50 employees) that did not offer health coverage and were aware of the small business tax credit were planning to add coverage as a result of it, and the Market Strategies International survey indicated that 28 percent of employers not offering health coverage would begin to do so. Among the studies we reviewed, only two microsimulation studies examined the longer-term effects of PPACA on employer-sponsored coverage. CMS projected that the number of individuals with employer-sponsored coverage would decrease by approximately 1 percent relative to estimates without PPACA in each year from 2017 through 2019, and that this annual gap would grow after that as a result of the high-cost plan excise tax. CBO projected a drop of about 3 percent, slightly larger than its near-term estimate, in employer-sponsored coverage in each year from 2017 through 2019, relative to estimates without PPACA, and projected that this annual gap would decrease thereafter. The studies also noted that there is a large amount of uncertainty regarding how employers and employees will respond to policy changes as sweeping and complex as those included in PPACA, and some researchers indicated that long-term predictions of the effects of PPACA are particularly uncertain. Four of the five microsimulation studies examined the effect of the individual mandate and predicted that fewer individuals would have employer-sponsored coverage without the mandate than with the mandate. These studies separately estimated the effect of PPACA both with and without the individual mandate. The estimates ranged from about 2 million to 6 million fewer people covered without the mandate compared to with the mandate. (See fig. 2.) Certain differences in key assumptions may have contributed to some variation in the estimates from the microsimulation studies we reviewed. Variation in estimates from the studies that used other analytic approaches was likely caused in part by differences in their methodologies and the extent of their incorporation of PPACA provisions into their analyses. Variation in estimates from the employer surveys was likely due in part to differences in survey methods, respondents, and the manner in which PPACA provisions were referenced throughout the survey. Although the five microsimulation studies we reviewed shared methodological similarities and therefore generated relatively similar estimates of changes to employer-sponsored coverage, certain differences in factors, such as underlying assumptions about employer and employee decision making, may have contributed to some variation in the estimates. The studies generally used similar modeling techniques and many of the same data sets to calculate their estimates.
Specifically, to construct baseline distributions of coverage in the United States and “synthetic” firms intended to reflect the demographics of employees in actual firms, the studies relied on data sets such as the Medical Expenditure Panel Survey (MEPS), the Current Population Survey (CPS), and the Survey of Income and Program Participation (SIPP). The studies also made certain common assumptions. For instance, most assumed, as illustrated by evidence in the literature, that employers electing to drop coverage for their employees would increase wages in order to compensate for the loss of health benefits, and certain studies noted that the increased wages would factor in the tax exclusion of health benefits. However, another researcher has noted that employers’ decisions to increase employees’ wages in lieu of offering health coverage will depend on a number of factors—most importantly, the strength of the economy and the labor market. Further, most studies assumed that employers generally make decisions about health coverage based on their entire workforce and would not offer health benefits to only some of their employees. For example, CBO noted that there are legal and economic obstacles to offering health benefits to only certain employees, including a prohibition on discrimination in favor of highly compensated individuals. Such similar assumptions likely contributed to the consistency of the studies’ estimates, which suggested that PPACA would result in relatively small changes to employer-sponsored coverage in the near term. However, differences in underlying assumptions about employer and employee responses to PPACA, the time frames of projections, and assessments of the effectiveness of PPACA’s individual mandate likely contributed to some variation in the estimates.

Modeling employer and employee responses to PPACA: The studies generally used one of two different approaches to model employer and employee responses to PPACA. The CBO study drew from available evidence in the health economics literature about historical responses to premium changes in order to model the future decisions of employers and employees in response to PPACA. The RAND and Urban Institute/RWJF studies assumed that employers and employees would make optimal choices by weighing the financial costs and benefits of available options, taking into account factors such as the PPACA-imposed individual and employer penalties for not obtaining or offering coverage. The Lewin Group study used a combination of the two approaches.

Time frames of the estimates: While each microsimulation model estimated the effects of PPACA in a certain year as compared to coverage without PPACA in that year, the models varied in their time frame of focus. The Lewin Group and Urban Institute/RWJF studies we reviewed simulated the effects of PPACA in 2011 (assuming implementation of key PPACA provisions). However, the RAND study simulated the effects of PPACA in 2016, and the CBO and CMS studies simulated the effects of PPACA over a range of years (2012 through 2022 and 2010 through 2019, respectively).

Compliance with the individual mandate: The models varied in their assessment of the degree of compliance with PPACA’s individual mandate. The CMS and Urban Institute/RWJF studies assumed compliance would be driven both by the financial incentive of a penalty and by the desire to obey a statutory mandate.
Similarly, the CBO study assumed that compliance with the mandate would be high, even among individuals exempt from penalties, because of a natural preference for complying with the law. CBO also assumed that the penalties for noncompliance may be imperfectly enforced. In contrast, the RAND study assumed that penalties for noncompliance would be perfectly enforced but did not assume that the mandate would increase compliance among individuals exempt from penalties. Similarly, The Lewin Group assumed lower compliance with the individual mandate than CBO, in part because there are no legal consequences to going without coverage beyond the penalty. Estimates from the three studies that used other analytic approaches varied more widely, likely in part because of differences in the studies’ methodologies as well as their consideration of PPACA provisions. For example, the EPI study, which predicted a net increase of 4 million in the number of individuals with employer-sponsored coverage, incorporated some of the statistical modeling techniques and underlying theory of employer and employee behavior used by the microsimulation models, and was therefore able to more systematically examine the combined effects of PPACA’s provisions. The American Action Forum study, which predicted that up to 35 million individuals may lose employer-sponsored coverage, used a cost-benefit comparison, examining individual employers’ financial trade-offs between offering coverage and, for employees of different income levels, dropping coverage, paying the employer penalties, and increasing employees’ wages to compensate. The study suggested that PPACA provides strong financial incentives for employers to drop coverage for many of their low-income employees, but that there are few incentives to drop coverage for higher-income employees. Certain researchers have noted key limitations of the study, including that it did not take into account the impact of PPACA’s individual mandate, the nonfederal tax advantage of employer-sponsored coverage, the cost of single health coverage plans, and the nondiscrimination rules that may prevent employers from dropping coverage for some, but not all, employees. Additionally, unlike the other two studies, this study did not measure the net effect of PPACA on employer health coverage, thus addressing only those employers that may drop coverage but not those that may newly offer it. Finally, the Booz & Company Inc. study, which predicted a net decrease of 3 million to 4 million in the number of individuals with employer-sponsored coverage, used a combination of interviews, focus groups, surveys, and statistical modeling to derive its estimates. The study estimated the change in employer-sponsored coverage between 2 years—2009 and 2016—but did not separate the effects of PPACA from any changes to employer-sponsored coverage that may occur between these years because of factors unrelated to PPACA, such as a continuation of the overall decline in rates of employer-sponsored coverage over the last decade. Varying estimates from the 16 employer surveys of the extent to which employers were likely to drop health coverage may have stemmed from differences in sampling techniques, response rates and numbers of respondents, the types of employers surveyed, the framing of survey questions, and the manner in which PPACA provisions were referenced throughout the survey.

Sampling techniques and number of respondents: Surveys varied in the methodology used to draw their sample of respondents.
Some, such as the Mercer survey, sampled randomly within the national employer population, which helped ensure that results were generalizable to all nonsurveyed employers with similar characteristics. Others, such as the International Foundation of Employee Benefit Plans (IFEBP) survey, used nonrandom sampling techniques, which limited the generalizability of their results. In addition, the number of survey respondents ranged widely, from 104 in the Benfield Research survey to about 2,840 in the Mercer survey, which also could have implications for the generalizability of results. The surveys generally did not publicly disclose their response rates.

Employer respondent type: Surveys varied in the types of employers surveyed. Some, such as those conducted by trade groups, were limited to members of the surveying organization. Others were limited to only small or only large employers, or to employers within a particular industry, or included a broader mix of small, midsize, and large employers across all types of industries. For example, the NFIB survey included only small employers with 50 or fewer employees, while the majority of respondents to the HighRoads survey were from hospitals and other health care systems. The Mercer and Willis surveys included a wider range of employer sizes and industries. Some surveys, such as the Benfield Research survey, included only self-insured employers, and others, such as the McKinsey survey, included only private sector employers.

Framing of the survey questions: Surveys varied in the manner in which they asked whether employers were planning to drop health coverage in response to PPACA. For example, the Fidelity Investments (Fidelity) survey reported whether respondents were “seriously thinking about no longer offering health care coverage,” the HR Policy Association survey asked if respondents were giving “serious consideration to discontinuing providing health benefits,” and the NFIB survey asked if employers were “not at all likely” or “not too likely” to “have an employee insurance plan 12 months from now.” In addition, some surveys reported specifically about active employee health plans, while others did not distinguish between active employees and retirees. For example, the Towers Watson survey reported whether respondents planned to “replace health care plans for active employees working 30+ hours per week with a financial subsidy,” while the GfK Custom Research North America survey reported whether employers were “very or somewhat likely to drop coverage” without specifying whether this was for active employees or retirees.

Referencing of PPACA provisions: Surveys varied in their assumptions about respondent knowledge of PPACA provisions. For example, 11 surveys assumed a certain level of respondent awareness of key PPACA provisions and did not specifically refer to the provisions in the phrasing of their questions about plans to drop coverage. However, other surveys phrased their questions in the context of specific PPACA provisions or explicitly asked respondents about their knowledge of the provisions.
For example, the PricewaterhouseCoopers survey asked how likely respondents were to “cover employees through state-run health insurance exchange pools,” and the Willis survey asked how likely respondents were to “drop coverage to trigger migration of employees to state-based exchanges.” The McKinsey survey also phrased its questions about discontinuing health coverage in the context of select PPACA provisions and provided additional information to respondents to inform them about the provisions. PPACA may affect certain types of employers, or employers with certain employee populations, more than employers overall, and some employers were considering benefit design changes. Four of the five surveys that examined changes in the prevalence of employer-sponsored coverage by employer size indicated that a greater share of small employers (from 5 to 22 percent) were considering dropping coverage compared to large employers (from 2 to 14 percent). These surveys included Fidelity (22 percent and 14 percent for small and large employers, respectively), McKinsey (9 percent and 5 percent for small and large employers, respectively), and Mercer (5 percent and 2 percent for small and large employers, respectively). One survey (Willis) did not indicate any differences between small and large employers. Surveys that examined changes in the prevalence of employer-sponsored coverage for certain types of beneficiaries indicated that these individuals could be more affected than others. Five of the nine surveys that considered the effect on retirees indicated that a higher proportion of employers were considering dropping coverage for retirees than for all employees—between 9 and 20 percent, compared to between 4 and 9 percent, respectively. For example, Mercer indicated that 17 percent and 5 percent of employers were considering dropping coverage for new retirees and all employees, respectively, and Willis indicated that 9 percent and 5 percent of employers were considering dropping coverage for retirees and all employees, respectively. Two of the four remaining surveys (PricewaterhouseCoopers and IFEBP) indicated no differences between rates of employers dropping coverage for retirees and for all employees, and the remaining two examined the effect of PPACA only on subsets of employees, but not all employees. In addition, two surveys that examined the effect of PPACA on spouses and dependents indicated that between 12 and 15 percent of employers were considering dropping health coverage for spouses and dependents, compared to a lower proportion for all employees. For example, McKinsey indicated that 15 percent and 9 percent of employers were definitely considering dropping coverage for spouses/dependents and all employees, respectively. Several of the 19 employer surveys that we reviewed also indicated that PPACA may prompt employers to consider key changes to benefit designs that will generally result in greater employee cost for health insurance.

Increased employee cost sharing: The 9 surveys that examined benefit design changes indicated that from 16 to 73 percent of employers were considering increasing employees’ share of the cost of coverage, for example, through increased premiums, deductibles, or co-payments. For example, the IFEBP survey indicated that about 40 percent of employers had increased or were planning to increase employee premium sharing, and about 29 percent had increased or planned to increase in-network deductibles.
Similarly, the PricewaterhouseCoopers survey indicated that 61 percent planned to increase employee premium sharing, and 57 percent planned to increase employee cost sharing through other benefit design changes. In addition, the 7 surveys that examined employer responses to the high-cost plan excise tax effective under PPACA in 2018 indicated that from 11 to 88 percent of employers had plans to take steps to avoid paying the tax; in 5 of these surveys, employers planned to redesign benefits, and in 2 surveys employers had not identified specific strategies but planned to take steps. For example, the Aon Hewitt survey indicated that 25 percent of employers anticipated changing their benefits to reduce plan cost, while the Willis survey indicated that 22 percent planned to increase deductibles or co-payments to avoid the tax.

Use of account-based plans: The 9 surveys that examined employer plans to offer account-based plans, such as high-deductible health plans (HDHP), consumer-directed health plans (CDHP), or health savings accounts, indicated that from 17 to 73 percent of employers either had plans to offer such plans or saw them as attractive options for providing health coverage. For example, the Benfield Research survey indicated that about two-thirds of employers planned to offer a CDHP by 2015, and the Towers Watson survey indicated that 17 percent planned to start offering HDHPs in 2013 or 2014, bringing the total share of employers with HDHPs up to 74 percent.

Move to self-insurance: Two of the 3 surveys that examined employers potentially becoming self-insured in response to PPACA indicated that from 12 to 52 percent were considering doing so, and the remaining survey indicated that 13 percent of employers reported increasing their consideration of such a move in response to PPACA. For example, the IFEBP survey indicated that about 52 percent of employers were considering such a move, compared to only about 6 percent in a prior year’s survey.

We provided a draft of this report to two researchers with expertise in employee health benefits issues. They agreed with our report and provided suggestions and technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send a copy to the Secretary of Health and Human Services. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or dickenj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II.

We reviewed the 27 studies listed below that contained original numerical estimates of the effect of the Patient Protection and Affordable Care Act (PPACA) on the prevalence of employer-sponsored coverage—5 based on microsimulation models, 3 based on other analytic approaches, and 19 based on employer surveys.

1. Centers for Medicare & Medicaid Services. Foster, R. S., Office of the Actuary. Estimated Financial Effects of the “Patient Protection and Affordable Care Act,” as Amended. Baltimore, Md.: April 2010.

2. Congressional Budget Office (CBO).
CBO and JCT’s Estimates of the Effects of the Affordable Care Act on the Number of People Obtaining Employment-Based Health Insurance. Washington, D.C.: March 2012.
Updated Estimates for the Insurance Coverage Provisions of the Affordable Care Act. Washington, D.C.: March 2012.
Banthin, J. Effects of Eliminating the Individual Mandate to Obtain Health Insurance. Presentation at Bloomberg Government/RAND Corporation event. Washington, D.C.: March 2012.
Elmendorf, D. W. CBO’s Analysis of the Major Health Care Legislation Enacted in March 2010. Testimony before the Subcommittee on Health, Committee on Energy and Commerce, House of Representatives. Washington, D.C.: March 2011.
H.R. 4872, Reconciliation Act of 2010 (Final Health Care Legislation). Washington, D.C.: March 2010.

3. The Lewin Group.
Sheils, J. F. and R. Haught. “Without the Individual Mandate, the Affordable Care Act Would Still Cover 23 Million; Premiums Would Rise Less Than Predicted.” Health Affairs, vol. 30, no. 11 (2011).
Patient Protection and Affordable Care Act (PPACA): Long Term Costs for Governments, Employers, Families and Providers. Staff Working Paper #11. Falls Church, Va.: 2010.

4. RAND Corporation.
Eibner, C. and C. C. Price. The Effect of the Affordable Care Act on Enrollment and Premiums, With and Without the Individual Mandate. Santa Monica, Calif.: 2012.
Eibner, C. et al. Establishing State Health Insurance Exchanges: Implications for Health Insurance Enrollment, Spending, and Small Business. Santa Monica, Calif.: 2010.

5. The Urban Institute/Robert Wood Johnson Foundation.
Buettgens, M. and C. Carroll. Eliminating the Individual Mandate: Effects on Premiums, Coverage, and Uncompensated Care. Washington, D.C., and Princeton, N.J.: January 2012.
Garrett, B. and M. Buettgens. Employer-Sponsored Insurance under Health Reform: Reports of Its Demise Are Premature. Washington, D.C., and Princeton, N.J.: January 2011.

6. Ahlquist, G. D., P. F. Borromeo, and S. B. Saxena. The Future of Health Insurance: Demise of Employer-Sponsored Coverage Greatly Exaggerated. Booz & Company Inc., 2011.

7. Burkhauser, R. V., S. Lyons, and K. Simon. An Offer You Can’t Refuse: Estimating the Coverage Effects of the 2010 Affordable Care Act. Washington, D.C.: Employment Policies Institute, July 2011.
Burkhauser, R. V., S. Lyons, and K. Simon. The Importance of the Meaning and Measurement of “Affordable” in the Affordable Care Act. Working Paper #17279, National Bureau of Economic Research. Cambridge, Mass.: August 2011.

8. Holtz-Eakin, D. and C. Smith. Labor Markets and Health Care Reform: New Results. American Action Forum. Washington, D.C.: May 2010.

9. Aon Hewitt. Employer Reaction to Health Care Reform: Retiree Strategy Survey. Lincolnshire, Ill.: 2011.

10. Benfield Research. Special Report: Employer Market Healthcare Reform Research Summary. St. Louis, Mo.: 2011.

11. Ceridian Health Care Compass. “Health Care Reform Presents New Challenges, Choices to U.S. Employers.” Issue 21. Cites findings from Ceridian’s Health Care Compass reader poll, July 2011. Accessed February 1, 2012. http://www.ceridian.com/employee_benefits_article/1,6266,15766-79463,00.html.

12. Fidelity Investments. Fidelity Investments Survey Finds Majority of Employers Rethinking Health Care Strategy Post Health Care Reform. Boston, Mass.: July 2010. Accessed March 6, 2012. http://www.fidelity.com/inside-fidelity/employer-services/fidelity-survey-finds-majority-of-employers-rethinking-health-care-strategy-post-health-care-reform.
13. GfK Custom Research North America. “Employers Skeptical of Health Reform, But Few Project Dropping Health Insurance Coverage.” New York, N.Y.: December 2011. Accessed March 29, 2012. http://www.gfkamerica.com/newsroom/press_releases/single_sites/009103/index.en.html.

14. HighRoads. “HighRoads Study Shows Employers Will Not Eliminate Benefits Coverage Due to Health Care Reform.” December 2011. Accessed February 1, 2012. http://newsroom.highroads.com/hr-compliance-connection/highroads-study-shows-employers-will-not-eliminate-benefits-coverage-due-to-health-care-reform.

15. HR Policy Association.
2011 Annual Chief Human Resource Officer Survey. Washington, D.C.
2010 Summer Chief Human Resource Officer Survey: Questions on the New Health Care Law. Washington, D.C.

16. International Foundation of Employee Benefit Plans.
Health Care Reform: Employer Actions One Year Later; Survey Results: May 2011. Brookfield, Wis.: 2011.
Health Care Reform: What Employers Are Considering; Survey Results: May 2010. Brookfield, Wis.: 2010.

17. Kaiser Family Foundation and Health Research & Educational Trust. Employer Health Benefits 2011 Annual Survey. Menlo Park, Calif., and Chicago, Ill.: September 2011.

18. Lockton Companies, LLC. Employer Health Reform Survey Results, June 2011. Kansas City, Mo.: 2011.

19. Market Strategies International. Many Companies Intend to Drop Employer Coverage in 2014 as Health Care Reform Takes Full Effect. Livonia, Mich.: January 2011. Accessed May 1, 2012. http://www.marketstrategies.com/news/1902/1/Many-Companies-Intend-to-Drop-Employee-Coverage-in-2014-as-Health-Care-Reform-Takes-Full-Effect.aspx.

20. McKinsey & Company. How US Health Care Reform Will Affect Employee Benefits. 2011.

21. Mercer, LLC.
National Survey of Employer-Sponsored Health Plans: 2011 Survey Report. New York, N.Y.: 2012.
National Survey of Employer-Sponsored Health Plans: 2010 Survey Report. New York, N.Y.: 2011.

22. Midwest Business Group on Health. Financial Impact of Health Reform on Employer Benefits Not as Significant as Anticipated. Chicago, Ill.: March 2012. Accessed March 29, 2012. http://www.mbgh.org/mbgh/news/2012pressreleases/go.aspx?navigationkey=a4956928-cca2-495a-94fc-ed56ce991fcd.

23. National Business Group on Health.
Large Employers’ 2011 Health Plan Design Changes. Washington, D.C.: 2010.
Majority of Employers Revamping Health Benefit Programs for 2012, National Business Group on Health Survey Finds. Washington, D.C.: August 2011. Accessed January 1, 2012. http://www.wbgh.org/pressrelease.cfm?ID=179.

24. National Federation of Independent Business. Small Business and Health Insurance: One Year After Enactment of PPACA. Washington, D.C.: 2011.

25. PricewaterhouseCoopers LLP. Health and Well-Being Touchstone Survey Results, May 2011. New York, N.Y.: May 2011.

26. Towers Watson.
Health Care Changes Ahead: Survey Report. New York, N.Y.: October 2011.
Health Care Reform: Looming Fears Mask Unprecedented Employer Opportunities To Mitigate Costs, Risk, and Reset Total Rewards. New York, N.Y.: May 2010.

27. Willis Group Holdings plc.
The Health Care Reform Survey, 2011-2012. New York, N.Y.: 2011-2012.
Diamond Management & Technology Consultants and Willis North America. The Health Care Reform Survey, 2010. New York, N.Y.: 2010.

In addition to the contact named above, Randy DiRosa (Assistant Director), Iola D’Souza, Yesook Merrill, Laurie Pachter, and Priyanka Sethi made key contributions to this report.
The share of employers offering health coverage has generally declined in the last decade. Researchers believe that certain provisions of PPACA could affect employers’ future willingness to offer health coverage, such as the availability of subsidized coverage through new health insurance marketplaces called “exchanges” and an “individual mandate,” which will require most people to obtain health coverage or pay a tax penalty. Certain PPACA provisions are scheduled to take effect in 2014. Researchers have provided various estimates of the effect PPACA may have on employer-sponsored coverage. GAO was asked to review the research on this topic. GAO examined (1) estimates of the effect of PPACA on the extent of employer-sponsored coverage; (2) factors that may contribute to the variation in estimates; and (3) how estimates of coverage vary by the types of employers and employees that may be affected, as well as other changes employers may be considering to the health benefits they offer. GAO reviewed studies published from January 1, 2009, through March 30, 2012, containing an original numerical estimate of the prevalence of employer-sponsored coverage at the national level. These included 5 microsimulation models, 3 studies using other analytic approaches, and 19 employer surveys. Microsimulation models can systematically estimate the combined effects of multiple PPACA provisions in terms of both gains and losses of coverage; their results are based on multiple data sets and assumptions. Surveys reflect employer perspectives; they have limits as a predictive tool, in part because of varied survey methodologies and respondent knowledge of PPACA. The five studies GAO reviewed that used microsimulation models to estimate the effects of the Patient Protection and Affordable Care Act (PPACA) on employer-sponsored coverage generally predicted little change in prevalence in the near term, while results of employer surveys varied more widely. The five microsimulation study estimates ranged from a net decrease of 2.5 percent to a net increase of 2.7 percent in the total number of individuals with employer-sponsored coverage within the first 2 years of implementation of key PPACA provisions, affecting up to about 4 million individuals. Two of these studies also indicated that the majority of individuals losing employer-sponsored coverage would transition to other sources of coverage. In contrast to the microsimulation studies, which estimate the net effect on individuals, most employer surveys measure the percentage of employers that may drop coverage in response to PPACA. Among the 19 surveys, 16 reported estimates of employers dropping coverage for all employee types. Among these 16, 11 indicated that 10 percent or fewer employers were likely to drop coverage in the near term, but estimates ranged from 2 to 20 percent. Most surveys were of employers currently offering coverage and therefore did not also address whether other employers may begin to offer coverage in response to PPACA; however, 3 surveys that did found that from 1 to 28 percent of employers would begin offering coverage as a result of PPACA. Longer-term predictions of the prevalence of employer-sponsored coverage were fewer and more uncertain. Separately, four microsimulation studies estimated that from about 2 million to 6 million fewer individuals would have employer-sponsored coverage in the absence of the individual mandate than with the mandate.
Differences in key assumptions and consideration of PPACA provisions likely contributed to some variation among estimates from the five microsimulation studies and the 16 employer surveys. Variation among the microsimulation studies may have stemmed from differences in assumptions about employer and employee decision making, the time frames of the estimates, and assessments of potential compliance with the individual mandate. Variation among the employer surveys may be related to differences in survey sampling techniques, the number and types of employer respondents, and the framing of survey questions. For example, some surveys used a random sampling methodology, allowing their results to be generalized across all employers, while others did not. Also, some referred to specific PPACA provisions or provided specific information about provisions to respondents, while others did not. Some of the 19 employer surveys indicated that PPACA may have a larger effect on small employers and certain populations and may prompt some employers to change benefit designs. For example, 4 surveys found that smaller employers were more likely than other employers to stop offering health coverage in response to PPACA, and 5 found that employers in general were more likely to drop coverage for retirees than for all employees. Nine surveys also indicated that employers are considering key changes to benefit design, some of which may result in greater employee cost for health coverage. GAO provided a draft of this report to two researchers with expertise in employee health benefits issues. The experts agreed with GAO’s report and provided technical comments, which were incorporated as appropriate.
I would like to begin my testimony by briefly describing the missions and activities of each of the GSEs and the risks they pose to taxpayers. Then I will describe the current GSE regulatory structure. Fannie Mae and Freddie Mac’s mission is to enhance the availability of mortgage credit across the nation during both good and bad economic times by purchasing mortgages from lenders (banks, thrifts, and mortgage lenders), which then use the proceeds to make additional mortgages available to home buyers. Most mortgages purchased by Fannie Mae and Freddie Mac are conventional mortgages, which have no federal insurance or guarantee. The companies’ mortgage purchases are subject to a conforming loan limit that currently stands at $359,650 for a single-family home in most states. Although Fannie Mae and Freddie Mac hold some of the mortgages they purchase in their portfolios, most are placed in mortgage pools to support mortgage-backed securities (MBS). MBS issued by Fannie Mae or Freddie Mac are either sold to investors (off-balance sheet obligations) or held in their retained portfolios (on-balance sheet obligations). Fannie Mae and Freddie Mac guarantee the timely payment of principal and interest on MBS that they issue. The 12 FHLBanks that constitute the FHLBank System traditionally made loans—also known as advances—to their members (typically banks or thrifts) to facilitate housing finance and community and economic development. FHLBank members are required to collateralize advances with high-quality assets such as single-family mortgages. More recently, the FHLBanks initiated programs to purchase mortgages directly from their members and hold them in their retained portfolios. This process is similar to Fannie Mae and Freddie Mac’s traditional business activities, although the FHLBanks do not currently have the authority to securitize mortgages. The housing GSEs’ activities have generally been credited with enhancing the development of the U.S. housing finance market. For example, when Fannie Mae and the FHLBank System were created during the 1930s, the housing finance market was fragmented and characterized by regional shortages of mortgage credit. It is widely accepted that the housing GSEs’ activities helped develop a unified and liquid mortgage finance market in this country. While the housing GSEs have generated public benefits, their large size and activities pose potentially significant risks to taxpayers. As a result of their activities, the GSEs’ outstanding debt and off-balance sheet financial obligations were about $4.6 trillion as of year-end 2003. The GSEs face the risk of losses primarily from credit risk, interest rate risk, and operational risk. Although the federal government explicitly does not guarantee the obligations of the GSEs, it is generally assumed on Wall Street that assistance would be provided in a financial emergency. In fact, during the 1980s, the federal government provided financial assistance to both Fannie Mae and the Farm Credit System (another GSE) when they experienced difficulties due to sharply rising interest rates and declining agricultural land values, respectively. The potential exists that Congress and the executive branch would determine that such assistance was again necessary in the event that one or more of the GSEs experienced severe financial difficulties.
Because the markets perceive that there is an implied federal guarantee on the GSEs’ obligations, the GSEs are able to borrow at interest rates below those of private corporations. The GSEs also pose potential risks to the stability of the U.S. financial system. In particular, if Fannie Mae, Freddie Mac, or the FHLBank System were unable to meet their financial obligations, other financial market participants that depend on payments from these GSEs could in turn become unable to meet their own financial obligations. To the extent that this risk, called systemic risk, is associated with the housing GSEs, it is primarily based on the sheer size of their financial obligations. For example, as discussed in OFHEO’s 2003 report on systemic risk, if either Fannie Mae or Freddie Mac were to become insolvent, financial institutions holding the enterprise’s MBS could find themselves no longer able to rely on those securities as a ready source of liquidity. Depending on the response of the federal government, the financial health of the banking segment of the financial services industry could decline rapidly, possibly leading to a decline in economic activity. As another example, derivatives counterparties holding contracts with a financially troubled GSE could realize large losses if the GSE were no longer able to meet its obligations. If such an event were to occur, widespread defaults could occur in derivatives markets. The current regulatory structure for the housing GSEs is divided among OFHEO, HUD, and FHFB, as described below:

- OFHEO is an independent office within HUD and is responsible for regulating Fannie Mae and Freddie Mac’s safety and soundness. OFHEO oversees the two GSEs through its authority to examine their operations, determine capital adequacy, adopt rules, and take enforcement actions. Although OFHEO’s financial plans and forecasts are included in the President’s budget and are subject to the appropriations process, the agency is not funded with tax dollars. Rather, Fannie Mae and Freddie Mac pay annual assessments to cover OFHEO’s costs.
- HUD is responsible for ensuring that Fannie Mae and Freddie Mac are accomplishing their housing missions. HUD is to accomplish this responsibility through its authority to set housing goals and to review and approve new programs, and through its general regulatory authority. HUD is funded through appropriations.
- FHFB is responsible for regulating the FHLBank System’s safety and soundness as well as its mission activities. The agency has a five-member board, with the President of the United States appointing four members—each of whom serves a 7-year term—subject to Senate approval. The fifth member is the Secretary of HUD. The President also appoints FHFB’s chair, subject to Senate approval. Like OFHEO, FHFB carries out its oversight authorities through examinations, establishing capital standards, rule making, and taking enforcement actions. FHFB is funded through assessments on the 12 Federal Home Loan Banks and is not subject to the appropriations process.

As I stated previously, OFHEO has moved aggressively over the past year to identify and address risk management and accounting deficiencies at Fannie Mae and Freddie Mac, and FHFB has entered into written agreements with two FHLBanks to correct interest rate risk management deficiencies.
Nevertheless, we continue to believe that the current fragmented regulatory structure for the housing GSEs is inadequate for monitoring these large and complex financial institutions and their mission activities. Establishing a single housing GSE regulator with a board structure and equipping the agency with adequate authorities would better ensure that the GSEs operate in a safe and sound manner and fulfill their housing missions. The current fragmented structure of federal housing GSE regulation does not provide for a comprehensive and effective approach to safety and soundness regulation. Although the housing GSEs operate differently, their business activities and risks are becoming increasingly similar. As I described previously, the FHLBank System has established mortgage purchase programs over the past several years, and FHLBank System mortgage holdings were $113 billion at year-end 2003. While these holdings are still small compared with Fannie Mae and Freddie Mac’s combined retained mortgage portfolios of $1.3 trillion for the same time period, the FHLBank System now operates more like Fannie Mae and Freddie Mac and is increasingly incurring interest rate risk. Management of interest rate risk for mortgage holdings involves the application of sophisticated risk-management techniques, including the use of financial derivatives. Although such strategies are appropriate for risk management, they require specialized expertise, sophisticated information systems, and an understanding and application of sometimes complex accounting rules. In my view, it simply does not make sense for the federal government to entrust regulation of large and complex GSEs that are incurring similar risks to two different regulators, which have different approaches to examinations and setting capital standards. Moreover, OFHEO, and FHFB to a lesser degree, lack key authorities to fulfill their safety and soundness responsibilities, as described below:

- Unlike with bank regulators and FHFB, (1) OFHEO’s authority to issue cease and desist orders does not specifically list an unsafe and unsound practice as grounds for issuance, and (2) OFHEO’s powers do not include the same direct removal and prohibition authorities applicable to officers and directors.
- Bank regulators have prompt corrective action authorities that are arguably more robust and proactive than those of OFHEO and FHFB. These authorities require that bank regulators take specific supervisory actions when bank capital levels fall to specified levels, or provide the regulators with the option of taking other actions when other specified unsafe and unsound conditions occur. Although OFHEO has statutory authority to take certain actions when Fannie Mae or Freddie Mac capital falls to predetermined levels, the authorities are not as proactive or broad as those of the bank regulators. OFHEO has also established regulations requiring specified supervisory actions when unsafe conditions not involving capital are identified, but OFHEO’s statute does not specifically mention these authorities. FHFB’s statute does not establish a prompt corrective action scheme that requires specified actions when unsafe conditions are identified. Although FHFB officials believe they have all the authority necessary to carry out their safety and soundness responsibilities, the agency has significant discretion in resolving troubled FHLBanks. Consequently, there is limited assurance that FHFB would act decisively to correct identified problems.
- Unlike bank regulators—which can place insolvent banks into receivership—and FHFB, which can take actions to liquidate an FHLBank, OFHEO is limited to placing Fannie Mae or Freddie Mac into a conservatorship. Thus, it is not clear that OFHEO has sufficient authority to fully resolve a situation in which Fannie Mae or Freddie Mac is unable to meet its financial obligations.

Finally, we have significant concerns about HUD’s capacity as the mission regulator for Fannie Mae and Freddie Mac. As I stated in my testimony last year, HUD officials we contacted said the department lacked the staff and resources necessary to carry out its GSE mission oversight responsibilities. HUD officials said that although the GSEs’ assets had increased nearly sixfold since 1992, HUD’s staffing had declined by 4,200 positions, and GSE oversight—which consisted of about 13 full-time positions—had to compete with other department priorities for the limited resources available. While HUD’s ability to ensure adequate resources for its GSE oversight responsibilities is limited, its mission oversight responsibilities are increasingly complex. For example, as we have noted in the past, it is not clear that HUD has the expertise necessary to review sophisticated financial products and issues, which may be associated with the department’s program review and approval and general regulatory authorities. In addition, without the authority to impose assessments on Fannie Mae and Freddie Mac to cover the costs associated with mission oversight, HUD will likely always be challenged to fulfill its GSE mission oversight responsibilities. To address the deficiencies in the current GSE regulatory structure that I have just described, we have consistently supported, and continue to believe in, the need for a single regulator to oversee both the safety and soundness and the mission of the housing GSEs. A single housing GSE regulator could be more independent, objective, efficient, and effective than separate regulatory bodies and could be more prominent than either one alone. We believe that valuable synergies could be achieved, and expertise in evaluating GSE risk management could be shared more easily, within one agency. In addition, we believe that a single regulator would be better positioned to oversee the GSEs’ compliance with mission activities, such as special housing goals and any new programs or initiatives any of the GSEs might undertake. This single regulator should be better able to assess these activities’ competitive effects on all three housing GSEs and better able to ensure consistency of regulation for GSEs that operate in similar markets. Further, a single regulator would be better positioned to consider potential trade-offs between mission requirements and safety and soundness considerations, because such a regulator would develop a fuller understanding of the operations of these large and complex financial institutions. Some critics of combining safety and soundness and mission oversight have voiced concerns that doing so could create regulatory conflict for the regulator. However, we believe that a healthy tension would be created that could lead to improved oversight. The trade-offs between safety and soundness and compliance with mission requirements could be best understood and accounted for by having a single regulator that has complete knowledge of the GSEs’ financial condition, regulates the mission goals Congress sets, and assesses efforts to fulfill them.
In determining the appropriate structure for a new GSE regulator, I note that Congress has authorized two different structures for governing financial regulatory agencies: a single director and a board. Among financial regulators, single directors head the Office of the Comptroller of the Currency, the Office of Thrift Supervision, and OFHEO, while boards or commissions run FHFB, the Securities and Exchange Commission, and the Board of Governors of the Federal Reserve System, among others. The single-director model has advantages over a board or commission; for example, the director can make decisions without the potential hindrance of having to consult with or obtain the approval of other board members. In our previous work, however, we have stated that a “stand-alone” agency with a board of directors would better ensure the independence and prominence of the regulator and allow it to act independently of the influence of the housing GSEs, which are large and politically influential. A governing board may offer the advantages of allowing different perspectives, providing stability, and bringing prestige to the regulator. Moreover, including the secretaries of Treasury and HUD or their designees on the board would help ensure that GSE safety and soundness and housing mission compliance issues are considered. I would note that in other regulatory sectors—besides financial regulation—Congress has established alternative board structures that could be considered as potential models for the new GSE regulator. One such alternative structure would be a hybrid board/director governance model. Under such an approach, there would be a presidentially appointed and Senate-confirmed agency head who would report to a board of directors composed of secretaries from key executive branch agencies, such as Treasury and HUD. Having board members from the same political party could lessen some of the tensions and conflicts observed at boards purposefully structured to have a split in membership along party lines. A board composed of members from the same political party, however, may not benefit from different perspectives to the same extent as a board with members from different political parties. Therefore, an advisory committee to the regulator could be formed to include representatives of financial markets, housing, and the general public. This advisory committee could be required to have some reasonable representation from different political parties. It is also essential that the new GSE regulator have adequate powers and authorities to address unsafe and unsound practices, respond to financial emergencies, and ensure that the GSEs comply with their public missions. These authorities include (1) cease and desist authority related to unsound practices, (2) removal and prohibition authority related to officers and directors, (3) prompt corrective action authority, and (4) authority to resolve a critically undercapitalized GSE, which may include placing it into receivership. Additionally, the new housing GSE regulator should have the authority to adjust as necessary the housing enterprises’ minimum and risk-based capital requirements to help ensure their continued safety and soundness. I would also like to comment on an area of recent debate in discussions of GSE regulatory reform: restrictions on Fannie Mae’s and Freddie Mac’s retained mortgage portfolios, which were approximately $1.3 trillion as of year-end 2003.
In testimony before this committee on April 6, 2005, Federal Reserve Chairman Greenspan stated that the GSEs’ large retained mortgage portfolios do not necessarily benefit housing finance, are primarily intended to increase the GSEs’ profitability, and increase the potential for systemic financial risks. To address these concerns, Chairman Greenspan called for limits on the GSEs’ mortgage portfolios to be phased in over time. Treasury Secretary Snow also expressed concern about the GSEs’ mortgage portfolios and called for limits on their size. We, too, have commented that the GSEs’ mortgage portfolios raise potential risks and that their benefits to housing finance markets are not clear. In my view, providing the new regulator with strong criteria to evaluate the costs and benefits of the GSEs’ mortgage portfolios, and the authority to limit them if necessary, is essential. The criteria could include the extent to which the mortgage portfolios enhance the GSEs’ housing mission, increase financial risks, and raise financial stability concerns. Further, the new housing GSE regulatory agency should be provided with explicit authority to oversee the GSEs’ corporate governance and management compensation practices. As I stated in my previous testimony, while the GSEs should have been leaders with respect to corporate governance, in many respects they were not. For example, unlike leading organizations, the chairman of Fannie Mae’s board also served as the GSE’s chief executive officer (CEO). I note that both Fannie Mae and Freddie Mac have formally agreed with OFHEO to separate the positions of chairperson of the board and CEO, thereby helping to ensure that the GSE boards independently establish company policies that their CEOs are responsible for carrying out. OFHEO also found that Fannie Mae’s compensation system provided managers with financial incentives to take actions—such as accounting irregularities—that increased the GSE’s reported short-term profitability. Without the authority to police such practices, the new regulator would not be able to fully carry out its oversight responsibilities. I also believe that the new GSE regulator should be tasked with the responsibility to conduct research on the extent to which the housing GSEs are fulfilling their housing and community development missions. As I described earlier, there are already questions about the extent to which the housing GSEs’ mortgage holdings benefit housing finance markets. Moreover, federal agencies, academics, and the GSEs have initiated studies estimating the extent to which Fannie Mae’s and Freddie Mac’s activities generate savings for home buyers, and these studies have reached differing conclusions. Additional studies may be needed to more precisely estimate the extent to which the GSEs’ activities benefit home buyers. Further, there is virtually no empirical information on the extent to which FHLBank advances lower mortgage costs for home buyers or encourage lenders to expand their commitment to housing finance. Without better information, Congress and the public cannot judge the effectiveness of the GSEs in meeting their missions or whether the benefits provided by the GSEs’ various activities are in the public interest and outweigh their financial and systemic risks. Finally, I would like to comment on issues surrounding the potential funding arrangements for a new housing GSE regulator.
Exempting the new GSE regulator from the appropriations process would provide the agency with the financial independence necessary to carry out its responsibilities. More importantly, without the timing constraints of the appropriations process, the regulator could more quickly respond to budgetary needs created by any crisis at the GSEs. However, being outside the appropriations process can create trade-offs. First, while the regulator would have more control over its own budget and funding level, it would lose the checks and balances provided by the federal budget and appropriations processes, as well as the option of seeking increased appropriations during revenue shortfalls. As a result, the regulator would need to establish a system of budgetary controls to ensure fiscal restraint. Second, removing the regulator from the appropriations process could diminish congressional oversight of the agency’s operations. This trade-off could be mitigated through increased oversight by the regulator’s congressional authorizing committees, such as a process of regular congressional hearings on the new GSE regulator’s operations and activities.

Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions that you or other Members of the Committee may have.

For further information regarding this testimony, please contact Thomas J. McCool, Managing Director, at (202) 512-8678 or mccoolt@gao.gov; or William B. Shear, Director, at (202) 512-4325 or shearw@gao.gov. Individuals making contributions to this testimony include Allison M. Abrams, Marianne E. Anderson, Wesley M. Phillips, and Karen C. Tremba.

Federal Home Loan Bank System: An Overview of Changes and Current Issues Affecting the System. GAO-05-489T. Washington, D.C.: April 13, 2005.
Government-Sponsored Enterprises: A Framework for Strengthening GSE Governance and Oversight. GAO-04-269T. Washington, D.C.: February 10, 2004.
Federal Home Loan Bank System: Key Loan Pricing Terms Can Differ Significantly. GAO-03-973. Washington, D.C.: September 8, 2003.
Financial Regulation: Review of Selected Operations of the Federal Housing Finance Board. GAO-03-364. Washington, D.C.: February 28, 2003.
OFHEO’s Risk-Based Capital Stress Test: Incorporating New Business Is Not Advisable. GAO-02-521. Washington, D.C.: June 28, 2002.
Federal Home Loan Bank System: Establishment of a New Capital Structure. GAO-01-873. Washington, D.C.: July 20, 2001.
Comparison of Financial Institution Regulators’ Enforcement and Prompt Corrective Action Authorities. GAO-01-322R. Washington, D.C.: January 31, 2001.
Capital Structure of the Federal Home Loan Bank System. GAO/GGD-99-177R. Washington, D.C.: August 31, 1999.
Federal Housing Finance Board: Actions Needed to Improve Regulatory Oversight. GAO/GGD-98-203. Washington, D.C.: September 18, 1998.
Federal Housing Enterprises: HUD’s Mission Oversight Needs to Be Strengthened. GAO/GGD-98-173. Washington, D.C.: July 28, 1998.
Risk-Based Capital: Regulatory and Industry Approaches to Capital and Risk. GAO/GGD-98-153. Washington, D.C.: July 20, 1998.
Government-Sponsored Enterprises: Federal Oversight Needed for Nonmortgage Investments. GAO/GGD-98-48. Washington, D.C.: March 11, 1998.
Federal Housing Enterprises: OFHEO Faces Challenges in Implementing a Comprehensive Oversight Program. GAO/GGD-98-6. Washington, D.C.: October 22, 1997.
Government-Sponsored Enterprises: Advantages and Disadvantages of Creating a Single Housing GSE Regulator. GAO/GGD-97-139. Washington, D.C.: July 9, 1997.
Housing Enterprises: Investment, Authority, Policies, and Practices. GAO/GGD-91-137R. Washington, D.C.: June 27, 1997.
Comments on “The Enterprise Resource Bank Act of 1996.” GAO/GGD-96-140R. Washington, D.C.: June 27, 1996.
Housing Enterprises: Potential Impacts of Severing Government Sponsorship. GAO/GGD-96-120. Washington, D.C.: May 13, 1996.
Letter from James L. Bothwell, Director, Financial Institutions and Markets Issues, GAO, to the Honorable James A. Leach, Chairman, Committee on Banking and Financial Services, U.S. House of Representatives, Re: GAO’s views on the “Federal Home Loan Bank System Modernization Act of 1995.” B-260498. Washington, D.C.: October 11, 1995.
FHLBank System: Reforms Needed to Promote Its Safety, Soundness, and Effectiveness. GAO/T-GGD-95-244. Washington, D.C.: September 27, 1995.
Housing Finance: Improving the Federal Home Loan Bank System’s Affordable Housing Program. GAO/RCED-95-82. Washington, D.C.: June 9, 1995.
Government-Sponsored Enterprises: Development of the Federal Housing Enterprise Financial Regulator. GAO/GGD-95-123. Washington, D.C.: May 30, 1995.
Federal Home Loan Bank System: Reforms Needed to Promote Its Safety, Soundness, and Effectiveness. GAO/GGD-94-38. Washington, D.C.: December 8, 1993.
Improved Regulatory Structure and Minimum Capital Standards Are Needed for Government-Sponsored Enterprises. GAO/T-GGD-91-41. Washington, D.C.: June 11, 1991.
Government-Sponsored Enterprises: A Framework for Limiting the Government’s Exposure to Risks. GAO/GGD-91-90. Washington, D.C.: May 22, 1991.
Government-Sponsored Enterprises: The Government’s Exposure to Risks. GAO/GGD-90-97. Washington, D.C.: August 15, 1990.

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Serious concerns exist regarding the risk management practices and the federal oversight of the housing government-sponsored enterprises (GSEs)—Fannie Mae, Freddie Mac, and the Federal Home Loan Bank System (FHLBank System)—which had combined obligations of $4.6 trillion as of year-end 2003. In 2003, Freddie Mac disclosed significant accounting irregularities. In 2004, the Office of Federal Housing Enterprise Oversight (OFHEO) cited Fannie Mae for accounting irregularities and earnings manipulation. Fannie Mae must restate its financial statements for 2001 through 2004, and OFHEO has required the GSE to develop a capital restoration plan. Also in 2004, the FHLBanks of Chicago and Seattle entered into written agreements with their regulator, the Federal Housing Finance Board (FHFB), to implement changes to enhance their risk management. To assist Congress in its housing GSE oversight, this testimony provides information on the GSEs' missions and risks, the current regulatory structure, and proposed regulatory reforms.

While the GSEs provide certain public benefits, they also pose potential risks. Fannie Mae's and Freddie Mac's primary activity involves purchasing mortgages from lenders and issuing mortgage-backed securities that are either sold to investors or held in the GSEs' retained portfolios. The 12 FHLBanks traditionally made loans to their members and more recently instituted programs to purchase mortgages from their members and hold such mortgages in their portfolios. Although not obligated to do so, the federal government could provide financial assistance if one or more of the GSEs experienced financial difficulties, and such assistance could result in significant costs to taxpayers. Due to the GSEs' large size, the potential also exists that financial problems at one or more of the GSEs could have destabilizing effects on financial markets.

The current housing GSE regulatory structure is fragmented and not well equipped to oversee the GSEs' financial soundness or mission achievement. For example, although all the GSEs face increasingly similar risks (particularly potential losses in their mortgage portfolios resulting from fluctuations in interest rates), OFHEO is responsible for Fannie Mae's and Freddie Mac's safety and soundness oversight while FHFB is responsible for the safety and soundness and mission oversight of the FHLBanks. OFHEO also lacks key regulatory authorities necessary to fulfill its oversight responsibilities. Moreover, the Department of Housing and Urban Development (HUD), which has housing mission oversight responsibility for Fannie Mae and Freddie Mac, faces a number of challenges in carrying out its responsibilities. In particular, HUD may not have sufficient resources and technical expertise to review sophisticated financial products and issues.

Creating a single housing GSE regulator could better ensure consistency of regulation among the GSEs. With safety and soundness and mission oversight combined, a single regulator would be better positioned to consider potential trade-offs between these sometimes competing objectives. To ensure the independence and prominence of the regulator and allow it to act independently of the influence of the housing GSEs, this new GSE regulator should have a structure that consists of a board or a hybrid board and director model.
To be effective, the single regulator must also have all the regulatory oversight and enforcement powers necessary to address unsafe and unsound practices, respond to financial emergencies, monitor corporate governance and compensation practices, assess the extent to which the GSEs' activities benefit home buyers and mortgage markets, and otherwise ensure that the GSEs comply with their public missions.
The UN system includes the General Assembly, the Secretariat, peacekeeping missions throughout the world, and separately administered funds, programs, and specialized agencies that have their own governing bodies. The General Assembly established the separately administered funds and programs, which are funded mainly by voluntary contributions, with responsibility for particular issues such as children (United Nations Children’s Fund) or the environment (United Nations Environment Program). While separately administered, these entities are under the authority of the Secretary-General, who appoints the heads of each entity, but they have their own governing bodies instead of being governed by the General Assembly. In contrast, the heads of the specialized agencies are elected by their own governing bodies, and these autonomous agencies do not fall under the authority of the Secretary-General and therefore are not within OIOS’s purview.

OIOS is part of the Secretariat and is under the authority of the Secretary-General, who reports to the General Assembly. According to its mandate, OIOS’s purpose is “to assist the Secretary-General in fulfilling his internal oversight responsibilities in respect of the resources and staff of the Organisation,” and OIOS’s chief executive, the Under-Secretary-General for Internal Oversight Services, reports directly to the General Assembly. The Secretary-General has therefore stated that OIOS is mandated to provide oversight only of activities that fall under the Secretary-General’s authority. These include activities of the Secretariat in New York, Geneva, Nairobi, and Vienna; the UN’s five regional commissions; peacekeeping missions and humanitarian operations; funds and programs administered separately under the authority of the Secretary-General (including the Office of the United Nations High Commissioner for Refugees, the United Nations Environment Program, the United Nations Human Settlements Program, and the Office of the United Nations High Commissioner for Human Rights); and other entities that have requested OIOS services, such as the United Nations Convention to Combat Desertification and the United Nations Framework Convention on Climate Change. In addition to the OIOS mandate, the Financial Regulations and Rules of the United Nations designate OIOS as the financial management internal auditor for the UN.

OIOS is composed of the Office of the Under-Secretary-General, an Executive Office, and three divisions, namely, Internal Audit, Investigations, and Inspection and Evaluation. Figure 1 provides the number and location of staff for each of these divisions as of September 2011. Appendix II provides additional information on OIOS’s locations and staffing. The Internal Audit Division provides assurance and advice designed to improve and add value to the UN’s operations. Internal audits bring a systematic approach to evaluating and improving the effectiveness of risk management, control, and governance processes. The Inspection and Evaluation Division assists UN intergovernmental bodies and program managers in assessing the relevance, efficiency, effectiveness, and impact of UN Secretariat programs. The division’s role is twofold: to help assure that these programs follow their mandates and to foster institutional learning and improvement through reflection by program officials and UN member states on performance and results.
The majority of OIOS funding comes from two budgets approved by the General Assembly: one for normal, recurrent activities such as the core functions of the Secretariat (the regular budget), and the other for peacekeeping activities (the peacekeeping account). Both the regular and peacekeeping budgets are financed largely through assessed contributions from member states. A small portion of the peacekeeping account, the peacekeeping support account, provides funds for OIOS to conduct audits, investigations, inspections, and evaluations of peacekeeping activities. In addition to funding from the regular budget and peacekeeping account, OIOS receives funds from “extrabudgetary” sources. These are voluntary contributions from member states that pay for the activities of UN funds, programs, and other entities.

The United States contributes a fixed percentage to the regular budget, which was 25 percent prior to 2000 and 22 percent thereafter, and which funds the UN Secretariat and its various activities and functions, including OIOS (shown in figs. 2 and 3, respectively). For example, the United States contributed about $1.2 billion to the UN Secretariat regular budget in the current biennium (2010-2011). The United States also contributes annually to peacekeeping operations and to extrabudgetary items. For example, in 2010, the United States contributed about $2.6 billion to peacekeeping operations (about 27.3 percent of the total peacekeeping budget) and about $4.4 billion to other UN activities, including those funded through extrabudgetary sources.

As shown in figure 4, OIOS funding from all three sources—regular budget, peacekeeping, and extrabudgetary—has generally increased over time. The peacekeeping portion has been the fastest growing component over the last 10 years due to the rapid rise in peacekeeping activities around the world, while the regular and extrabudgetary portions have grown more slowly. OIOS’s total appropriations for the 2010-2011 biennium were over $100 million, approximately five times what they were when OIOS was established in 1994. OIOS’s authorized staffing levels have also increased, due in part to the expansion of UN peacekeeping activities (see fig. 11 in app. II). We have previously reported that OIOS has had difficulty filling its authorized staff positions.

The UN General Assembly strengthened internal oversight of the Secretariat and peacekeeping missions by creating the Independent Audit Advisory Committee (IAAC), which is responsible for advising the General Assembly on the scope, results, and effectiveness of audit and other oversight functions, especially OIOS. The IAAC is also responsible for advising the General Assembly on measures to ensure management’s compliance with audit and other oversight recommendations, as well as with various risk management, internal control, operational, and accounting and disclosure issues. The committee examines OIOS’s work plans, taking into account the work plans of other UN oversight bodies; reviews OIOS’s proposed budget; and makes recommendations to the General Assembly through the Advisory Committee on Administrative and Budgetary Questions. (See app. III for a timeline showing the preparation, approval, and execution of OIOS’s regular budget and the IAAC’s role in that process.) The committee also advises the General Assembly on the quality and overall effectiveness of risk management procedures, on deficiencies in the internal control framework of the UN, and on steps to increase and facilitate cooperation among UN oversight bodies.
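To make the contribution arithmetic above concrete, the following minimal Python sketch back-computes the budget totals implied by the figures cited in this section. The inputs are only the numbers quoted above; the variable names and rounding are illustrative assumptions, and the outputs are rough checks rather than official UN budget figures.

# Back-of-the-envelope check of the U.S. contribution figures cited above.
# All inputs come from the numbers quoted in this section; the names are
# assumptions made for this sketch.

US_REGULAR_SHARE = 0.22     # U.S. assessed share of the regular budget after 2000
us_regular_paid = 1.2e9     # approximate U.S. contribution, 2010-2011 biennium

# A fixed-share assessment implies: total budget = contribution / share.
implied_regular_total = us_regular_paid / US_REGULAR_SHARE

US_PEACEKEEPING_SHARE = 0.273   # about 27.3 percent of the peacekeeping budget
us_peacekeeping_paid = 2.6e9    # approximate U.S. contribution in 2010

implied_peacekeeping_total = us_peacekeeping_paid / US_PEACEKEEPING_SHARE

print(f"Implied regular budget, 2010-2011 biennium: ${implied_regular_total / 1e9:.1f} billion")
print(f"Implied peacekeeping budget, 2010: ${implied_peacekeeping_total / 1e9:.1f} billion")

Run as written, this yields roughly $5.5 billion and $9.5 billion, respectively; because the quoted contributions are rounded, the implied totals are approximations only.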
The General Assembly in 2007 appointed three members to serve a 3-year term and two members to serve a 4-year term on the IAAC, all beginning on January 1, 2008; the committee, which generally meets four times a year, held its first session in February 2008. It has issued 11 reports, including one on vacant positions at OIOS. The IAAC held its 15th session in July 2011, during which it discussed with the Under-Secretary-General for Internal Oversight Services a wide range of issues, including funding arrangements, risk assessments conducted, value provided by OIOS, and performance audits. In addition, the committee covered standard agenda items with OIOS, such as relationships with management, high risks identified by OIOS, coordination with various oversight bodies, implementation of oversight recommendations, and OIOS staff vacancies. The committee is scheduled to hold its next meeting in December 2011.

The U.S. Mission to the UN strongly supported the establishment of the IAAC and has also supported other initiatives to improve transparency and accountability in the UN system. For example, it has endorsed a new UN effort to consolidate the management of all financial, human, and physical resources, including for peacekeeping and field missions, under one integrated information management system.

The UN General Assembly is addressing some previously identified impediments to OIOS’s ability to provide independent oversight, but certain UN funding arrangements and oversight relationships continue to limit the independence and authority of OIOS. In January 2003, the General Assembly reaffirmed the prerogatives of separately administered funds and programs to decide their own oversight mechanisms and their relationship with OIOS. The UN Secretariat’s Office of Legal Affairs stated that this action clarified the role OIOS plays in the internal oversight of separately administered funds and programs, with these entities deciding their own oversight mechanisms and their relationship with OIOS. An independent review of UN oversight commissioned by the Secretary-General noted, however, that the arrangements used to fund OIOS’s audits of those separately administered entities that choose to utilize its audit services do not meet Institute of Internal Auditors (IIA) standards for independence. OIOS also reported that it is not able to issue consolidated audit reports for joint UN activities that included entities over which OIOS does not have oversight authority, even when directed to do so by the General Assembly.

The General Assembly has addressed OIOS’s oversight authority several times since the creation of the office (see app. IV), and the new Under-Secretary-General for Internal Oversight Services requested a legal opinion from the UN Secretariat’s Office of Legal Affairs regarding OIOS’s oversight responsibility for funds and programs. OIOS’s founding mandate states that OIOS’s purpose is to assist the Secretary-General in fulfilling his internal oversight responsibilities with respect to the resources and staff of the organization. The Secretary-General has stated that the resources and staff of the organization include separately administered organs. The General Assembly also stated that OIOS has the authority to initiate, carry out, and report on any action that it considers necessary to fulfill its responsibilities with regard to monitoring, internal audit, inspection and evaluation, and investigations.
In January 2003, the General Assembly adopted a resolution that reaffirmed the prerogatives of separately administered funds and programs to decide their own oversight mechanisms and their relationship with OIOS. In May 2011, in response to her request, the UN’s Office of Legal Affairs issued a memorandum to the Under-Secretary-General stating that the General Assembly, through the 2003 resolution, clarified OIOS’s jurisdiction over the funds and programs, which suggested that the involvement of OIOS in their internal oversight functions is contingent on the consent of the funds and programs.

According to OIOS’s website and audit manual, OIOS provides internal oversight to UN organizations that are under the direct authority of the Secretary-General, including departments and offices within the Secretariat and peacekeeping missions and related offices, and to funds, programs, and other organizations under the authority of the Secretary-General, but administered separately, that have requested OIOS’s audit services (see fig. 5). The amounts these organizations pay for internal oversight are based on negotiated fees for services, sometimes defined in a memorandum of understanding (MOU). Some UN funds and programs, including, for example, the United Nations Development Program and the World Food Program, have their own internal oversight offices, which they use to oversee their activities instead of using the services of OIOS. Others, such as the International Trade Center and the Office of the United Nations High Commissioner for Human Rights, have partial or no internal audit capacity. Funds or programs with their own internal oversight capacity may also use certain OIOS services, for example, if they determine that they need outside experts to conduct a sensitive investigation. (App. V provides a more detailed listing of UN organizations, their relationship with OIOS, and their oversight capacity.) According to OIOS, as of September 2011, it provided oversight to a number of separately administered entities, including seven funds and programs that have partial or no internal oversight capacity. The Under-Secretary-General for Internal Oversight Services told us that she is conducting a review of all separately administered entities under the authority of the Secretary-General to determine their internal oversight capacities, which conforms with her mandate to support the Secretary-General in his oversight responsibilities.

The UN General Assembly has supported OIOS’s independence in audits of organizations within the Secretariat and peacekeeping missions by creating the IAAC, which reviews OIOS’s audit plans and budget requests and compares them to the Secretary-General’s proposed budgets for oversight to ensure that they reflect the resources OIOS needs to audit identified risks. In a July 2006 report to the General Assembly, OIOS noted that a main obstacle to the independence of its audits was that it was responsible for auditing departments in the Secretariat, such as the Department of Management, which reviews its budget. While OIOS did not report any specific examples of budget restrictions that had been imposed, the IAAC mitigates the potential impairment to OIOS’s independence caused by its dependence on funding from entities it audits by making recommendations that would ensure that OIOS has sufficient resources and by keeping the General Assembly apprised of issues related to OIOS’s operational independence.
The IAAC became operational in 2008 and serves some of the functions of an independent audit committee, which the IIA considers critical to ensuring strong and effective processes related to independence, internal control, risk management, compliance, ethics, and financial disclosure. The IAAC advises the General Assembly in accordance with terms of reference adopted by the General Assembly in 2007. The IAAC reviews a proposed budget for internal oversight prepared by the Secretary-General and compares that to the resources requested by OIOS. The IAAC then provides independent comments directly to the budget committee of the General Assembly on the resources OIOS will need (see app. III for a timeline showing the preparation, approval, and execution of OIOS’s regular budget). An IAAC official reported that, while part of its function is to ensure OIOS’s independence, the committee does not automatically take OIOS’s side in disputes over resources with the Secretary-General. In some instances, the IAAC has advised the General Assembly that OIOS needed more resources and independence; in others, it has advised that OIOS resources were sufficient or excessive.

[Text box: IIA standards. Managing the internal audit activity: the chief audit executive must effectively manage the internal audit activity to ensure it adds value to the organization. Risk management: the internal audit activity must evaluate the effectiveness and contribute to the improvement of risk management processes. Risk management is defined as a process to identify, assess, manage, and control potential events or situations to provide reasonable assurance regarding the achievement of the organization’s objectives.]

The UN also strengthened OIOS’s independence by supporting its efforts to improve its risk-based planning and budgeting process in accordance with IIA standards, but OIOS is still working to improve its risk assessments in response to IAAC concerns. As part of the outcome of the 2005 World Summit at the UN, the General Assembly requested that the Secretary-General submit an independent external evaluation of the auditing and oversight system of the UN, with recommendations for improving these processes. The external review commissioned by the Secretary-General recommended that OIOS improve its annual risk-assessment methodology, and specified several improvements, including building an inventory of risks in consultation with its clients and ranking the risk of each item in OIOS’s audit universe. As the Secretary-General reported, in addition to ensuring that oversight resources are prioritized for high-risk areas, a risk-based approach also provides the General Assembly with a basis for determining the level of risk it is willing to accept for the organization. In its 2006 report to the General Assembly, OIOS committed to having fully risk-based work plans by 2008. OIOS met this schedule, completing risk assessments of approximately 90 percent of its clients from July 2007 to September 2008. An OIOS official also reported that its separately administered, extrabudgetary clients were included in its risk assessments. In 2008, its first year of operation, the IAAC reported that OIOS’s risk-assessment methodology provided a reasonable basis for establishing preliminary work plans.
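To illustrate the kind of risk-based planning described above, the following minimal Python sketch builds a small inventory of auditable items and ranks them for a work plan. The data structure, the rating scales, the impact-times-likelihood scoring, and the control adjustment are all assumptions made for illustration; this is a generic sketch of the technique, not OIOS's actual methodology.

from dataclasses import dataclass

@dataclass
class AuditableItem:
    name: str
    impact: int              # assumed 1-5 scale: consequence if the risk materializes
    likelihood: int          # assumed 1-5 scale: chance the risk materializes
    control_strength: float  # assumed 0.0-1.0: mitigation the client already has in place

    def residual_risk(self) -> float:
        # Inherent risk (impact x likelihood) reduced by existing controls;
        # accounting for controls is the adjustment the IAAC later recommended.
        return self.impact * self.likelihood * (1.0 - self.control_strength)

# A hypothetical audit universe; the entries are illustrative only.
universe = [
    AuditableItem("Peacekeeping mission procurement", impact=5, likelihood=4, control_strength=0.3),
    AuditableItem("Headquarters payroll processing", impact=4, likelihood=2, control_strength=0.8),
    AuditableItem("Trust fund disbursements", impact=4, likelihood=3, control_strength=0.5),
]

# Rank the universe by residual risk, highest first, to draft a work plan.
for item in sorted(universe, key=lambda i: i.residual_risk(), reverse=True):
    print(f"{item.name}: residual risk {item.residual_risk():.1f}")

Ranking on residual rather than inherent risk reflects the IAAC's 2009 critique, discussed below, that assessments ignoring clients' existing controls overstate oversight needs.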
However, in 2009, the IAAC reported that OIOS’s risk assessments were not practical for determining OIOS’s resource requirements because they did not take into account its clients’ efforts to mitigate these risks and therefore gave an inflated estimate of risks and oversight needs. The IAAC recommended that OIOS modify its risk assessments to include the effect of controls that its clients have already put in place, and in February 2011, OIOS officials reported that they are working to change OIOS’s methodology in accordance with the recommendations.

The Secretary-General found that OIOS’s funding arrangements with separately administered organizations do not meet IIA standards for independence because OIOS must negotiate oversight agreements with these organizations without an independent review to ensure that the oversight resources provided are sufficient. These negotiations include discussions of the number and level of staff and resources that will be used for an audit based on an amount of funding that the individual fund or program is able to provide OIOS. The IAAC mitigates this potential impediment to OIOS’s independence within the Secretariat and peacekeeping missions; however, the IAAC Chairman stated that although the IAAC reviews OIOS’s budget requests for separately administered organizations, it does not have the authority to work with the governing bodies of these entities to resolve funding issues, and thus potential impediments to OIOS’s ability to provide independent oversight remain.

[Text box: IIA standards. Threats to independence must be managed at the individual auditor, engagement, functional, and organizational levels.]

OIOS officials stated that some of these clients have provided limited audit resources to assist OIOS in its efforts. OIOS officials emphasized that these resource limitations have not impeded the office’s audit activities because improvements to risk-based planning have allowed it to better prioritize audit work and manage resources more effectively and economically. However, OIOS officials reported that several smaller entities that have adopted the UN financial regulations and rules (and therefore fall under OIOS’s audit authority for financial management audits) have not provided OIOS with resources for conducting audits. These entities include the United Nations Convention to Combat Desertification, the United Nations Interregional Crime and Justice Research Institute, the United Nations Institute for Training and Research, the United Nations Research Institute for Social Development, the United Nations System Staff College, and the United Nations University.

Compounding the potential for limitations to OIOS’s ability to provide independent oversight, developing oversight relationships on a case-by-case basis has also created inconsistent funding arrangements, and the IAAC has recommended that these funding arrangements be revised. OIOS has MOUs formalizing its relationships with only seven of the separately administered entities it lists as clients (see table 1). These MOUs describe OIOS’s oversight activities and resources, but the IAAC does not review the MOUs and they do not necessarily ensure independent oversight. OIOS has not established formal MOUs with its other separately administered clients, including three funds and programs—the United Nations Environment Program, the United Nations Human Settlements Program, and the United Nations Conference on Trade and Development.
In its report pending with the Secretariat, the IAAC is recommending that the General Assembly reconsider OIOS’s current funding arrangements with separately administered entities.

Since OIOS does not have oversight authority over all separately administered UN entities, it may not be able to provide sufficient oversight of crosscutting activities undertaken jointly by multiple UN entities, even when it is directed to do so by the General Assembly. UN humanitarian, reconstruction, and development program activities can involve multiple entities not covered by OIOS’s existing mandate, which have their own internal oversight offices. For example, the United Nations Development Group Iraq Trust Fund has 22 separate participating UN organizations, and some of these entities also have their own internal oversight capacity. In August 2006, an external review commissioned by the Secretary-General found that OIOS could not fully assess risks in joint activities involving entities not covered in its mandate, and it recommended that OIOS be given audit authority over joint activities that include entities within its mandate, with support from other audit organizations. In 2007, OIOS and other internal audit offices in the UN system adopted a framework for auditing multi-donor trust funds, in part to address this issue. The framework established that a summary report of all internal audit work would be prepared after the completion of the individual audits, and OIOS officials stated that OIOS has subsequently participated in a summary report of a joint audit of the Common Humanitarian Fund for Sudan that was issued by the United Nations Development Program.

While OIOS is not responsible for coordinating the internal oversight of all joint activities, the General Assembly has previously directed it to prepare the consolidated report of the audit and investigative reviews undertaken by other UN organizations. However, in December 2006, OIOS reported that it had been unable to issue a consolidated report on the audits of tsunami relief efforts, as directed by the General Assembly, because the internal auditors of funds, programs, and specialized agencies were unable to share their audit reports with OIOS. In 2010, OIOS reemphasized its recommendation that the Secretary-General, in collaboration with the heads of funds, programs, and specialized agencies, specify in a single policy document the applicable rules and regulations, coordination mechanisms, and reporting systems for oversight of interagency activities. OIOS officials noted in the summer of 2011 that it still would not be possible for OIOS to issue a consolidated audit report because funds, programs, and specialized agencies cannot share their audit reports.

High vacancy rates for authorized positions, for both rank-and-file and senior staff, have historically hindered OIOS’s ability to provide sufficient oversight. In addition, the Under-Secretary-General for Internal Oversight Services reported that she has insufficient staff in the Office of the Under-Secretary-General to manage OIOS’s operations. The UN Secretariat and OIOS are taking steps to address these staffing issues. OIOS has had staffing shortages in its three divisions, and the UN’s external auditors (the Board of Auditors) found that these shortages hampered the Internal Audit Division’s completion of its work plans. The UN Secretariat and OIOS have prioritized filling vacant positions, particularly since the start of the new Under-Secretary-General’s term in 2010.
The IAAC also expressed concern that vacancies at the senior management level would make it difficult for OIOS to accomplish its work, but OIOS has recently filled the two director-level positions that had been vacant for more than a year. Further, the Under-Secretary-General has begun an initiative within OIOS to strengthen OIOS’s management and coordination; this involves a comprehensive review of the office’s responsibilities and capabilities and may result in requests for additional management resources. To facilitate this effort, the Under-Secretary-General has requested staff and additional consultant positions through the end of 2011, and the Secretary-General has concurred with this request.

Since our last report on OIOS in 2006, OIOS has had staffing shortages due to authorized but unfilled positions that have limited the office’s ability to provide sufficient oversight. (See app. VI for the status of our 2006 recommendations.) According to the UN Office of Human Resources Management, OIOS’s vacancy rate for professional service staff was 21 percent as of September 2010, an increase from the period between 2006 and 2009, when rates were between 12 and 17 percent. As of the end of July 2011, OIOS data indicated that 19 percent of its approved staff positions were unfilled and that, as shown in figure 6, the vacancy rates were highest in the Internal Audit and Investigations Divisions, with the highest rate (30 percent) for investigations of peacekeeping activities. According to the Board of Auditors, staffing shortages hampered the Internal Audit Division’s completion of its planned audits in 2008 and 2009. In 2009 and again in 2010, the IAAC also expressed concern that the high rate of unfilled positions in OIOS would make it difficult for OIOS to accomplish its work. The Under-Secretary-General for Internal Oversight Services noted that this is because OIOS is required to submit a work plan based on 100 percent of its authorized positions, rather than filled positions. She further stated that OIOS should be allowed to submit a work plan based on anticipated staffing shortages.

As reasons for high vacancy rates, OIOS officials cited complexities in the hiring process, difficulty filling oversight positions in peacekeeping missions, and a new online system for human resources management that was unfamiliar to OIOS staff. OIOS officials stated that the human resource policies of the Secretariat require that vacancies be posted individually, preventing OIOS from conducting a single hiring process for multiple positions, and that this requirement makes reducing the vacancy rate more difficult. Compounding this problem, OIOS officials added that when a high-level position becomes vacant, it is often filled internally, which creates a new vacancy at a lower level. Thus, the filling of one position can result in the creation of a new vacancy, which requires another extended recruitment period. OIOS officials also stated that it is difficult to fill vacant positions in peacekeeping missions due to challenging working and living conditions. According to a high-level official in the Investigations Division, OIOS staff in peacekeeping missions feel isolated by their remote locations and by the fact that they are seen as outsiders by the peacekeeping staff. Finally, OIOS officials said the Secretariat had difficulty implementing the new online human resources recruitment tool, and that this contributed to delays in filling vacancies in the most recent biennium.
UN Office of Human Resources Management officials confirmed that there had been some technical problems with the rollout of the new system and that vacancy rates had increased systemwide.

The UN and OIOS have made reducing staffing shortages a priority. The new Under-Secretary-General for Internal Oversight Services stated that she has hired 82 new staff since the beginning of her term in September 2010 and has received clearance to bring in consultants to work on recruitment through December 2011. The Under-Secretary-General noted that the UN financial regulations and rules do not provide her the flexibility to redeploy funds to hire consultants, as may be necessary. To reduce the number of unfilled positions, the Under-Secretary-General requested an exemption from Secretariat hiring policies in order to conduct mass recruitment to identify qualified candidates. She said that a key to this effort would be the ability to interview and prequalify candidates at the appropriate level in order to be able to fill multiple vacancies at once. She reported that she did not have to use the exemption because, in the final analysis, she was able to work within current policies to permit selection of prequalified candidates more expeditiously.

Officials from the UN Office of Human Resources Management stated that in April 2010, the UN revised its recruitment policy to expedite the process for departments with high vacancy rates. The revised policy allows department managers (including the Under-Secretary-General for Internal Oversight Services) to place qualified candidates who are not selected for a particular position onto a roster, which they or other department managers can then use to fill similar vacancies without repeating the full recruitment process. The new policy provides the Office of Human Resources Management with incremental resources to fully verify the credentials and references of rostered candidates (with their permission), rather than waiting until they have been selected for a position to complete this verification process, which can delay placement for up to 6 months. This revision is expected to expedite future placement of prequalified candidates. OIOS also reported that the difficulties with the online human resources recruitment tool are being resolved and that the office expects vacancy rates to decline over the next year.

OIOS had prolonged vacancies at the director level in two of its three divisions, one of which persisted for 5 years, but both positions have now been filled. In its 2008-2009 report, the IAAC expressed concern that these vacancies would make it difficult for OIOS to accomplish its work. Vacancies at the director level differ from other vacancies because OIOS cannot fill them without the approval of the Secretary-General, in accordance with a Secretariat-wide human resources policy. According to this policy, the head of a department or office must submit at least three candidates—one of whom must be a woman—to the Secretary-General, who ultimately decides which candidate to appoint. This process was a point of contention between the previous Under-Secretary-General and the Secretary-General and resulted in prolonged vacancies at the director level in two of OIOS’s three divisions.
In 2009, the IAAC proposed a definition of operational independence for OIOS that includes the ability to select staff for appointment and promotion, and the General Assembly will consider this proposal during its 66th session starting in September 2011. However, the new Under-Secretary-General was able to nominate candidates in accordance with the Secretariat’s policy. She stated that the process was not overly restrictive and that it was appropriate for an internal auditor to follow the policies of the Secretariat. In the spring of 2011, the Secretary-General approved the candidates she had recommended to fill both the Director of Investigations and the Director of Inspection and Evaluation positions, who assumed their positions in August 2011.

Since assuming her position in September 2010, the new Under-Secretary-General for Internal Oversight Services has stated that she has not had the ability to sufficiently oversee OIOS activities because the Office of the Under-Secretary-General is under-resourced. She has made reviewing all of OIOS’s reports prior to release a priority for quality control, and OIOS issues about 300 reports per year. In 2011, the Office of the Under-Secretary-General has been reviewing all reports before they are released, but this has strained available resources. This office is authorized seven staff, including the Under-Secretary-General, and all of these positions are currently filled. In May 2011, the IAAC endorsed a new Assistant Secretary-General position, which OIOS had included in its budget submission. OIOS reported that it will request additional staff as needed after completing a comprehensive review of OIOS’s responsibilities and capabilities. The Under-Secretary-General also stated that additional management staff could improve collaboration between the divisions to better share information on risk assessments and internal control shortfalls—such as risk of fraud identified by the Internal Audit Division, or systemic control weaknesses found by the Investigations Division—and she is developing a team to identify potential areas for collaboration among OIOS divisions.

The United States and other member states have long advocated a wide range of UN management reforms, including a call for greater transparency and accountability throughout the UN system. As part of its efforts to advance UN reforms, the U.S. Mission to the UN has included among its priorities strengthening the UN’s main internal oversight body—OIOS—to better identify, obtain, and deploy the resources needed to ensure that the billions in U.S. and international contributions are spent wisely and that UN programs are managed effectively. Although OIOS plays a vital role in improving the UN’s effectiveness, OIOS’s ability to provide sufficient oversight of UN entities under the authority of the Secretary-General is limited due to impediments to its operational independence in providing full oversight of funds and programs and high rates of unfilled staff positions.

The UN General Assembly has taken steps to help strengthen OIOS—most notably, by creating the IAAC to review OIOS’s budgets and work plans for audits of entities within the Secretariat and peacekeeping missions to ensure that OIOS resources are sufficient to address risks in the UN.
However, in order to provide essential internal oversight services, OIOS still has to negotiate individual agreements with funds, programs, and other clients under the authority of the Secretary-General but administered separately and funded with extrabudgetary resources. This practice may unduly limit the scope of OIOS’s oversight. As the United States and other member states place new demands for fiscal discipline and cost-effective management on the UN and the myriad funds and programs under it, strengthening OIOS oversight will help the UN be more responsive to these demands. Improvements in these areas can help OIOS address some of the difficulties it faces in effectively carrying out its mandate. We recommend that the Secretary of State and the Permanent Representative of the United States to the United Nations work with the General Assembly and member states to address remaining impediments to OIOS’s ability to provide independent oversight resulting from its relationships with certain UN funds and programs and other clients. OIOS and State provided written comments on a draft of this report. We have reprinted their comments in appendixes VII and VIII, respectively. These agencies also provided technical comments and updated information, which we have incorporated throughout this report, as appropriate. OIOS agreed with the overall conclusion of the report that progress has been made in addressing independence and staffing issues and that further actions are needed in some areas. OIOS stated that it has developed a comprehensive plan to address the issues we identified and is currently working to systematically examine options and implications for their resolution within the scope of OIOS’s authority and responsibility as mandated by the General Assembly. OIOS also stated that the report fairly reflected its efforts and current views and noted that the efforts invested by GAO, OIOS, and others have contributed to the usefulness of the reported results. State endorsed most of our main findings and conclusions, noting that it agreed that OIOS’s budgetary and operational independence could be strengthened further. State also accepted our recommendation that impediments to OIOS’s ability to provide independent oversight be addressed. However, State appears to have misinterpreted our discussion of OIOS oversight authority over the separately administered UN funds and programs that have opted to use OIOS as their internal auditor. State attributed to GAO the assertion that OIOS’s involvement in the internal oversight of funds and programs is contingent on the consent of the funds and programs. This interpretation was made instead by the UN Secretariat’s Office of Legal Affairs. We have added language to make this distinction clearer. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the report date. At that time, we will send copies of this report to appropriate congressional committees, the Secretary of State, and the Permanent Representative of the United States to the United Nations. This report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-9601 or melitot@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IX. 
Our objectives were to examine actions being taken to address (1) impediments to OIOS’s ability to provide independent oversight and (2) staffing issues that may have hindered its performance.

To address our objectives, we reviewed relevant United Nations (UN) and Office of Internal Oversight Services (OIOS) reports, policies and procedures manuals, and other documents, as well as internationally recognized standards such as those of the Institute of Internal Auditors (IIA). We met with Department of State (State) officials in Washington, D.C., and officials in New York from the U.S. Mission to the UN. In New York, we also met with the Under-Secretary-General for Internal Oversight Services, OIOS management officials and staff in each of the office’s divisions (the Internal Audit Division, the Investigations Division, and the Inspection and Evaluation Division), and staff from the Office of the Under-Secretary-General. In addition, we met with representatives of UN Secretariat departments and UN funds and programs, and with the members of the UN Board of Auditors, which carries out external audits of the accounts of the UN organization and the funds and programs that are under the authority of the Secretary-General. Through in-person interviews, videoconference, and teleconference, we spoke with senior OIOS audit and investigations officials based in Geneva and Vienna; with the Independent Audit Advisory Committee (IAAC) Chairman in Washington, D.C.; and with an official from the UN’s Joint Inspection Unit in Geneva, which conducts evaluations and inspections of the UN system.

To assess the reliability of UN and OIOS funding and staffing data, we reviewed the office’s budget reports for fiscal biennia 1994-1995 through 2010-2011, vetted the data with relevant OIOS and UN budget officials, and interviewed an international relations specialist at the Congressional Research Service who reports on U.S. contributions to the UN; however, we did not independently verify the underlying source data. We determined that UN and OIOS budget data were sufficiently reliable to present trends of the regular, peacekeeping, and extrabudgetary appropriations for the biennia 1994-1995 through 2010-2011. We used staffing data provided by OIOS, which we determined were reliable for our purposes of presenting staffing levels as of July 31, 2011.

To assess OIOS’s consistency with key international auditing standards, we reviewed relevant internationally accepted standards for oversight, such as the International Standards for the Professional Practice of Internal Auditing issued by the IIA, which OIOS adopted in 2002. The IIA standards apply to internal audit activities—not to investigations, inspections, or evaluation activities. However, we applied these standards OIOS-wide, as appropriate. We also reviewed the International Standards of Supreme Audit Institutions issued by the International Organization of Supreme Audit Institutions, as well as guidelines for oversight such as the Uniform Guidelines for Investigations issued by the Conference of International Investigators and the Norms for Evaluation in the UN System issued by the United Nations Evaluation Group. Finally, we examined documentation for OIOS’s risk-based planning methodology and annual work plans, recommendations tracking, and ethics practices.

We conducted our work from October 2010 to September 2011 in accordance with generally accepted U.S. government auditing standards.
Those standards require that we plan and perform our work to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our objectives.

Figure 7 displays the locations of OIOS staff, as of September 2011, including the UN’s peacekeeping missions. The Internal Audit Division is by far the largest OIOS division, comprising 57 percent of the office’s authorized staff; the Investigations Division is second in size, at 29 percent (see fig. 8). More staff positions are tied to oversight of peacekeeping activities (47 percent overall, or 153 of 327) than activities funded by the regular budget (37 percent, or 121 of 327) or extrabudgetary sources (16 percent, or 53 of 327). The Internal Audit Division is the only division with a significant number of staff positions financed by extrabudgetary sources (26 percent, or 49 of 186). (See fig. 9.) OIOS’s authorized staff positions have increased slightly (by 12 percent, or 34 new positions) from 5 years ago, largely due to a 30-percent increase (43 new positions) in Internal Audit Division positions, whereas the positions in the Investigations Division dropped by 15 percent (17 positions). (See fig. 10.) Total authorized staff positions have generally increased over the past nine UN fiscal biennium budget cycles, from 1994-1995 through 2010-2011, growing from just over 100 positions in 1994-1995 to over 300 in 2010-2011, due largely to an increase in authorized positions for overseeing the UN’s expanding peacekeeping activities around the world (see figs. 11 and 12).

Figure 13 is a timeline showing the process of preparing, approving, and executing OIOS’s regular budget. As shown in this figure, we have updated this timeline since our 2006 report to reflect the IAAC’s creation and its role in that budget process. Table 2 lists the UN resolutions and administrative issuances affecting OIOS, including its establishment in 1994, a 2003 resolution reaffirming the prerogative of the funds and programs to decide on their own oversight mechanisms and their relationship with OIOS, and the 2006 resolution establishing the IAAC. Table 3 lists the organizations that comprise the UN system and information about their relationship with OIOS and their internal oversight capacity.

Our 2006 report made seven recommendations to State and the U.S. Mission to the UN—of these, we closed three recommendations as implemented and four as not implemented. (GAO practice is to close out all recommendations within a 4-year period.) Table 4 summarizes the status of each of our recommendations as of July 31, 2011, including the three recommendations that were implemented within the GAO 4-year time frame and the four that have not been fully implemented but on which actions have been taken.

We believe that State misinterpreted our discussion of OIOS oversight authority over the separately administered UN funds and programs that have opted to use OIOS as their internal auditor. Our report did not conclude that OIOS may only exercise oversight authority over those separately administered UN funds and programs that have opted to use OIOS as their internal auditor; we reported that this was the interpretation of the UN Secretariat’s Office of Legal Affairs. We have added language in the report to make this distinction clearer.
In addition to the individual named above, Joy Labez, Assistant Director; Kay Halpern; Jeremy Conley; David Dayton; Etana Finkler; Jack Hufnagle; Marya Link; Grace Lui; and Steven Putansu made key contributions to this report. Other contributors to this report include Kirsten Lauber, Jeremy Sebest, Barbara Shields, and Phillip J. Thomas.

This glossary of abbreviations and acronyms contains the full names of the United Nations entities referred to in figure 5 and tables 1 and 3.

United Nations Organizations: Oversight and Accountability Could Be Strengthened by Further Instituting International Best Practices. GAO-07-597. Washington, D.C.: June 18, 2007.
United Nations: Management Reforms Progressing Slowly with Many Awaiting General Assembly Review. GAO-07-14. Washington, D.C.: October 5, 2006.
United Nations: Weaknesses in Internal Oversight and Procurement Could Affect the Effective Implementation of the Planned Renovation. GAO-06-877T. Washington, D.C.: June 20, 2006.
United Nations: Oil for Food Program Provides Lessons for Future Sanctions and Ongoing Reform. GAO-06-711T. Washington, D.C.: May 2, 2006.
United Nations: Internal Oversight and Procurement Controls and Processes Need Strengthening. GAO-06-701T. Washington, D.C.: April 27, 2006.
United Nations: Funding Arrangements Impede Independence of Internal Auditors. GAO-06-575. Washington, D.C.: April 25, 2006.
United Nations: Lessons Learned from Oil for Food Program Indicate the Need to Strengthen UN Internal Controls and Oversight. GAO-06-330. Washington, D.C.: April 25, 2006.
United Nations: Procurement Internal Controls Are Weak. GAO-06-577. Washington, D.C.: April 25, 2006.
United Nations: Preliminary Observations on Internal Oversight and Procurement Practices. GAO-06-226T. Washington, D.C.: October 31, 2005.
United Nations: Sustained Oversight Is Needed for Reforms to Achieve Lasting Results. GAO-05-392T. Washington, D.C.: March 2, 2005.
United Nations: Oil for Food Program Audits. GAO-05-346T. Washington, D.C.: February 15, 2005.
The United States has long advocated for strong oversight of the United Nations (UN). In 2005, GAO raised long-standing concerns that the ability of the UN's Office of Internal Oversight Services (OIOS) to carry out its mandate was constrained in scope and authority, and in 2006, GAO found that funding arrangements impeded OIOS's ability to operate independently. The U.S. Mission to the UN also expressed concern that OIOS's independence is limited in that it cannot make final hiring decisions for senior staff. In response to such concerns, the UN General Assembly in 2006 created an Independent Audit Advisory Committee (IAAC). GAO was asked to examine actions taken to address (1) impediments to OIOS's ability to provide independent oversight and (2) staffing issues that may have hindered its performance. GAO assessed OIOS's independence based on internationally recognized auditing standards, analyzed OIOS and other UN documents and data, and interviewed agency officials.

The UN has taken worthwhile steps to enhance OIOS's independence, but certain UN funding and oversight arrangements continue to impede OIOS's ability to provide independent oversight. The General Assembly has supported OIOS's independence by creating the IAAC, which reviews OIOS's budgets and work plans for audits of the Secretariat and peacekeeping missions, and by recommending that OIOS base its planning and budget requests on risk in accordance with standards of the Institute of Internal Auditors (IIA). The General Assembly also clarified the role OIOS plays in internal oversight of funds and programs by adopting a resolution that reaffirmed the prerogatives of separately administered funds and programs to decide their own oversight mechanisms and relationship with OIOS. However, an independent review found that the arrangements for funding OIOS audits of those entities that choose to utilize its audit services do not meet IIA standards for independence. OIOS also remains constrained in its ability to issue consolidated audit reports for joint UN activities that include entities over which it does not have oversight authority, even when directed to do so by the General Assembly.

High vacancy rates for authorized positions have hindered OIOS's ability to provide sufficient oversight, but the UN and OIOS are taking steps to address this issue. As of July 2011, 19 percent of OIOS staff positions were unfilled, and 30 percent were vacant for investigations of peacekeeping activities—the most challenging positions to fill. The UN's external auditor found that OIOS's staffing shortages hampered its Internal Audit Division's completion of its work plans. The UN and OIOS have made filling vacant positions a priority, and OIOS has hired 82 staff members since the start of the term of the new Under-Secretary-General for Internal Oversight Services in September 2010. The IAAC also expressed concern that vacancies at the senior management level would make it difficult for OIOS to accomplish its work. In August 2011, OIOS filled two director-level positions that had been vacant for more than a year. Further, the Under-Secretary-General has begun an initiative to strengthen OIOS's management and coordination and has requested an additional staff position for her front office. The Secretary-General has concurred with this request.

GAO recommends that the Secretary of State and the U.S.
Permanent Representative to the UN work with the General Assembly and member states to address remaining impediments to OIOS's ability to provide independent oversight resulting from its relationships with certain UN funds and programs and other clients. State and OIOS generally concurred with GAO's findings and recommendation. However, State misinterpreted the report's discussion of OIOS's oversight authority. GAO added language to the report to clarify this discussion.
In ending the AFDC program and establishing TANF block grants to states, PRWORA built upon and expanded state-level reforms and significantly changed welfare policy for low-income families with children. The legislation ended the legal entitlement to cash assistance for eligible needy families with children and focused on helping needy families move toward economic independence. As specified in PRWORA, the goals of TANF are providing assistance to needy families so that children may be cared for in their own homes or in the homes of relatives; ending the dependence of needy parents on government benefits by promoting job preparation, work, and marriage; preventing and reducing the incidence of out-of-wedlock pregnancies; and encouraging the formation and maintenance of two-parent families. PRWORA places a 5-year time limit (or less at state option) on federal cash assistance for most families and requires states to impose federally established work and other program requirements on most adults receiving aid. Otherwise, the act gives states broad flexibility to establish their own eligibility rules and the types of services provided. In fiscal year 2000, total TANF expenditures equaled almost $23.6 billion, more than half in federal dollars. About 5.8 million recipients received TANF cash assistance in September of that year. In addition to establishing a key TANF goal related to reducing out-of-wedlock pregnancies, the Congress, through PRWORA, established a “Bonus to Reward Decrease in Illegitimacy Ratio” so that HHS could reward states that showed the greatest reduction in out-of-wedlock births while decreasing their abortion rates. States are awarded this bonus based on these reductions in the general population, not just the welfare population. In each of fiscal years 1999 through 2002, up to $100 million is available to the five states that achieve the largest reductions. HHS also requires states to set goals to reduce out-of-wedlock pregnancies. Each state has to submit a TANF plan that includes the state’s strategy for preventing and reducing the incidence of out-of-wedlock pregnancies. This plan must also include an explanation of how the state intends to establish numerical goals for reducing out-of-wedlock births. PRWORA also authorized the National Strategy to Prevent Teen Pregnancy and the Abstinence Education Grant Program to help meet the TANF goal of reducing out-of-wedlock births. PRWORA authorized federal expenditures of $50 million annually, beginning in fiscal year 1998, to support state efforts promoting abstinence education. PRWORA’s emphasis on reducing out-of-wedlock childbearing, among other goals, results from congressional concerns about the negative consequences of out-of-wedlock births on the mother, the child, and the family. The percentage of out-of-wedlock births among the total population has increased dramatically, from 3.8 percent in 1940 to 32.6 percent in 1994, although from 1994 to 1999 it remained around 33 percent. While there are still questions about the extent of the consequences of out-of-wedlock childbearing, research shows that children born out of wedlock are much more likely to be poor and receive welfare than children born to married parents. More specifically, among children living with single mothers, children born outside of wedlock are 1.7 times more likely to be poor than are those born to married parents.
In addition, research shows that women aged 17 and under who give birth outside of marriage are more likely to go on public assistance and to spend more years on assistance once enrolled. For example, over three-quarters of all unmarried teenage mothers began receiving cash benefits from the AFDC program within 5 years of the birth of their first child. The potential link between welfare receipt and non-marital childbearing has been of interest to policymakers and researchers for several years, particularly as the number of never-married mothers receiving welfare has increased, both in absolute numbers and as a percentage of all families on welfare. Studies have been conducted to understand what role, if any, the amount of welfare benefits plays in a woman’s decision to have a child. The results of studies examining the effects of various welfare benefit amounts on fertility and marriage have been mixed. A recent summary of this research found that a slight majority of studies have concluded that receiving welfare has led to a decrease in marriage and an increase in childbearing. In the early to mid-1990s, before federal welfare reform, some states responded to concerns about links between welfare and childbearing by seeking waivers from federal AFDC rules to implement their own policies eliminating increases in cash welfare grants for families that have additional children while on welfare. This “capping” of cash grants became known as the family cap. During the 2 years of debate over how to reform welfare and address the issue of rising rates of out-of-wedlock births, one version of the welfare legislation contained a family cap. PRWORA, the final welfare reform legislation passed in 1996, emphasized decreasing out-of-wedlock birth rates but remained silent on the family cap, leaving implementation decisions regarding family cap policy to the states. Although PRWORA devolved considerable authority to the states to design and implement their own welfare programs, HHS retains some program oversight and research responsibilities. Generally, the law narrowed HHS’ regulatory authority, as compared to its authority under AFDC. For example, PRWORA specifically prohibits HHS from regulating the conduct of states, except as expressly provided in the law. HHS is responsible for administering statutory penalties for states’ noncompliance with the law and for developing and administering the high performance bonuses and Bonus to Reward Decrease in Illegitimacy Ratio established in PRWORA to reward states that achieve certain goals of the law. In addition, HHS is responsible and receives funding for conducting research on the benefits, effects, and costs of state TANF programs and for disseminating information among states and localities. This research role differs from the one HHS played in the past, when a state wishing to experiment with any departure from federal AFDC rules had to request a waiver from HHS, and HHS had to ensure that the waiver request included plans for a rigorous evaluation of the state’s experiment. In fact, some of the effectiveness studies of the family cap that we reviewed in this report were conducted as required evaluations of policies implemented under waiver of AFDC rules. Because of the increased flexibility states have under TANF, states no longer need to apply for waivers from federal rules.
Twenty-three states, representing about half of the nation’s TANF caseload, have implemented a family cap policy, with most of these states providing no cash increase when a mother on welfare gives birth. States implement the family cap in one of three ways: a full family cap on benefits, a partial family cap on benefits, or a flat grant that applies to all TANF families. States implemented the family cap policy to reduce out-of-wedlock births and encourage self-sufficiency, with 15 states implementing the family cap policy under a waiver to the AFDC program and eight states implementing the policy after welfare reform. Most states with a family cap policy exempt families from the cap if special circumstances exist. For example, children born as the result of sexual assault, rape, or incest are exempt from the family cap in 18 states. Several states have additional support policies designed specifically for capped-benefit families, such as vouchers for food and diapers in lieu of the cash benefit increase they would have received without the family cap policy. Twenty-three states, covering approximately 52 percent of the TANF caseload nationwide, have adopted the family cap policy in some form. As figure 1 shows, 19 states employ a full family cap. In these states a family’s cash grant is not increased by any amount with the birth of an additional child. For example, in Arizona, if a two-person family receiving the maximum grant of $275 per month had an additional child, its cash benefit would be capped at this amount. The family would not receive the $72 increase in benefits, which would otherwise have raised its grant to $347, the maximum grant for three-person families. Two states have a family cap that is implemented as a partial increase in cash benefits with the birth of an additional child. For example, one state with the partial family cap provides 50 percent of the cash benefit increase the family would have received without the family cap, and only for the first child born to a family after it enrolled in assistance; the cap becomes a full family cap for any subsequent children born into the family. Two other states use a flat grant to provide cash benefits to families. This policy creates an implicit benefit cap because cash benefits are the same for all families on assistance regardless of family size or the birth of an additional child. For example, in Wisconsin families receive either $628 or $673 per month. Their benefit amount depends on the work program component to which they have been assigned, rather than on family size. (The arithmetic of these three variants is illustrated in the sketch following this discussion.) States implemented the family cap policies before and after federal welfare reform, often with the goals of decreasing out-of-wedlock births and encouraging self-sufficiency. For example, one state implemented the family cap based on the goal of encouraging parents to plan for security and assume responsibility for their children. The family cap policy developed out of state-based initiatives, starting with New Jersey in 1992. Fifteen states implemented the family cap as a waiver to the AFDC program. Eight states implemented the family cap policy following the passage of PRWORA as a part of their TANF state plans. All states with a full or partial family cap include exemptions to the cap for families in specific circumstances. Most states have the same exemptions, as shown in figure 2.
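To make the arithmetic of the three variants concrete, the sketch below computes a family's monthly grant under a full cap, a partial cap, and a flat grant, using the Arizona and Wisconsin dollar figures cited above. This is an illustration only: the function names and the "tier" labels are our own, and real state benefit formulas also account for earnings, child support, and other factors.

```python
# Illustrative arithmetic for the three family cap variants, using the
# dollar figures cited in the text. Real state benefit formulas also
# account for earnings, child support, and other factors.

# Maximum monthly grants by family size (the Arizona figures above).
MAX_GRANT = {2: 275, 3: 347}

def full_cap_grant(size_at_enrollment: int) -> int:
    """Full cap: the grant never rises with a birth on assistance."""
    return MAX_GRANT[size_at_enrollment]

def partial_cap_grant(size_at_enrollment: int, births_on_assistance: int) -> float:
    """Partial cap, as described in the text: 50 percent of the normal
    increment for the first child born on assistance, nothing for later ones."""
    grant = float(MAX_GRANT[size_at_enrollment])
    if births_on_assistance >= 1:
        increment = MAX_GRANT[size_at_enrollment + 1] - MAX_GRANT[size_at_enrollment]
        grant += 0.5 * increment  # only the first capped child adds anything
    return grant

def flat_grant(work_component: str) -> int:
    """Flat grant (the Wisconsin amounts above): the grant depends on the
    assigned work program component, not on family size."""
    return {"tier_a": 628, "tier_b": 673}[work_component]

# A two-person family that has one more child while on assistance:
print(full_cap_grant(2))        # 275: the $72 increase to $347 is withheld
print(partial_cap_grant(2, 1))  # 311.0: half of the $72 increment
print(flat_grant("tier_a"))     # 628: unchanged by the birth
```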
Among these common exemptions, for example, 20 states exempt families with children born less than 10 months after the family’s initial receipt of benefits, to account for a pregnancy that occurred before the family started receiving assistance. In addition, states commonly exempt children not living with their biological parents. This exemption typically occurs when the custody of the child is legally transferred or the parent is deceased, incarcerated, or incapacitated. Most states exempt families who leave assistance for a specified period of time, give birth to a child, and return to the rolls, that is, families that become pregnant “between spells.” Some state officials we spoke with expressed concern about this exemption because of the potential that families are circumventing the family cap on benefits by leaving assistance and reapplying once a new child is born. Six states have other exemptions to the family cap. For example, one state exempts children conceived as a result of the failure of certain contraceptive methods. Another state exempts children born with substantial physical or mental disabilities. All 23 states with the family cap policy have procedures in place to enroll eligible children in the Medicaid and food stamp programs even when their families’ benefits have been capped. Many states’ documents, including policy manuals, codes of regulations, personal responsibility agreements that clients must sign, and TANF program brochures, explain to families and caseworkers the availability of these services to newborn children. In addition to written notification, one state told us it has an outreach worker who is responsible for checking the Medicaid eligibility of all children whose families are receiving TANF. All states told us that when the birth of a child is reported to the TANF caseworker, there are procedures in place to enroll the child in Medicaid and food stamps. One state official expressed concern, however, that families whose benefits might be affected by the family cap may not report a birth to a TANF caseworker, making it less likely that the family would enroll the child in other support services. Some states have tailored other welfare-related policies for families with capped benefits. Almost all states in the nation have policies that disregard a portion of a family’s earned income when determining the family’s eligibility for benefits and the amount of benefits it will receive. These policies increase the amount of cash income for households with a working adult. Three of the states with a family cap policy disregard an even larger portion of a family’s income after the family’s benefits are capped. In addition, some states make exceptions to their child support policies for capped-benefit families. Typically, states keep any child support collected on behalf of TANF families in order to reimburse the government for its welfare costs. Four states pass through a portion, or the entire amount, of any child support collected for a capped-benefit family to the family instead of retaining it as they do for non-capped families. Four states with a family cap on benefits give families vouchers equal to the traditional cash benefit increase they would have received in the absence of a cap. These vouchers can be used to purchase basic goods and services for the newborn child, such as diapers and formula, from participating vendors, although data on the extent to which families used these vouchers were not readily available. Conditions under which states provide the vouchers vary.
For example, one state placed a time limit of 3 years on the voucher, while another state provides a voucher only if the family requests one each month. Another state provides alternative resources by offering a cash payment, in the amount of the incremental increase that would have been paid on behalf of the child, to a third party to purchase goods or services for the newborn child. The third party can be a non-profit organization offering such goods and services, a family member not included in the assistance unit, or a caseworker not directly connected to the family receiving the benefit. During an average month in 2000, about 108,000 families received less in cash assistance than they would have if their benefits had not been capped. Capped-benefit families represented about 9 percent of the average monthly TANF caseload in the 20 states that provided us data. The proportion of the monthly caseload of families whose benefits were affected by the cap varied from 1 to 20 percent across these states. The actual effect of the family cap on cash benefits is difficult to determine because cash benefit levels are influenced by other factors, including family earnings and child support. We estimated, however, that in an average month, families whose benefits were affected by the cap due to the birth of an additional child received about 20 percent less than they would have received in the absence of a cap; we based this estimate on two-person families that had an additional child while on welfare. Additional data show that 12 percent of families had more children after their TANF benefits were already affected by the family cap. The percentage increase in benefits not received by these particular families is likely to be greater than our estimate of 20 percent. Finally, families with capped benefits receive more in food stamps than families of a comparable size whose benefits are not capped. Based on responses of 20 states with family cap policies, about 108,000 families receiving TANF had their benefits affected by the family cap in an average month during 2000. This number represented about 9 percent of the total number of TANF families in these 20 states and is a minimum count of the families that may have been affected during 2000. Table 1 shows the number of families whose benefits were affected by the family cap in an average month by state and the percentage of all TANF families whose benefits were affected by the family cap in each state. The percentage affected by the family cap varies among the states, ranging from about 1 percent in South Carolina and Tennessee to 20 percent in Illinois. These variations could be caused by differences in states’ exemption policies and practices or in whether states have their own time limits for cash assistance. Variations could also be due to differences in when states implemented the family cap. For example, because California only recently implemented a family cap policy, the number of TANF families with benefits affected by the cap is likely to increase, according to California state officials. In general, families that have had their benefits capped are larger than families whose benefits were not limited by the cap. Three- or four-person families make up about two-thirds of families whose benefits are affected by a cap. Such families might consist, for example, of a parent with two or three children, one of whom does not receive TANF benefits because of the cap.
Families with five members or more make up a little over one-quarter of families whose benefits are affected by a cap. Two-person families and one-person families (child-only cases) make up the remaining families whose benefits are affected by a cap. Table 2 below gives family size information for capped-benefit families by state. We estimated that in a given month, the amount of cash assistance received by families whose benefits had been capped was, on average, about 20 percent less than it would have been in the absence of a cap. We estimated the cap’s effect on benefits because states were unable to report the actual amount by which families’ benefits changed as a result of the cap. Several factors, including family earnings and other resources, such as receipt of child support, influence the amount of cash benefits a family receives. Our estimate represented an average dollar amount of approximately $100 per month, ranging from $20 in Wyoming to $121 in California. These estimates may somewhat overstate the amount by which families’ benefits are affected because the estimates are based upon the maximum cash benefit a TANF family is eligible to receive, which is generally greater than the average amount of cash assistance families actually receive. Overall, the average level of cash assistance for three-person families whose benefits were affected by the family cap was $394 per month in federal fiscal year 2000. Depending on the state and based on maximum benefit levels, we estimated that families received from 6 percent (in Wyoming) to 26 percent (in Illinois) less in cash assistance due to the family cap. Table 3 shows our calculations of the amount not received by three-person capped-benefit families across family cap states. For more information, see table 5 in appendix I. While we were able to estimate how the family cap affects the monthly cash benefit amount for a family, the family’s income may be affected in other ways. Consequently, the cap’s total effect on household income is difficult to determine. To some extent, the family cap’s financial impact is offset by an increase in food stamp benefits. Because food stamp benefit calculations take into account unearned income (i.e., TANF cash assistance) and family size, capped-benefit families would receive more in monthly food stamp benefits than they would if they were not capped. In addition, a capped-benefit family may receive more child support or retain more of its earnings than it would without the cap, due to the state-level policies specific to capped-benefit families discussed previously. However, because the majority of TANF families do not engage in work activities or receive child support, the effect of the cap on their income is unlikely to be offset by the benefits these policies offer. While the majority of families receiving TANF have not had additional children after their benefits were limited by the cap, about 12 percent of capped-benefit families did. These families had more than one child whose benefits were affected by the cap. For these families, the estimated cash increase not received is likely to be greater than our average estimate of 20 percent. For example, because of the family cap, a TANF recipient who gives birth to two additional children while on TANF would receive between 21 percent (in North Carolina) and 38 percent (in Oklahoma) less in cash assistance.
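The estimate described above can be reproduced with simple arithmetic: the withheld increment between the maximum grants for families of adjacent sizes, expressed as a share of the larger family's maximum. The sketch below uses the Arizona figures cited earlier as a stand-in for any state's benefit schedule; as noted above, estimates built from maximum grants may overstate the effect for families receiving less than the maximum.

```python
# Illustrative reconstruction of the "percent not received" estimate: the
# withheld increment between the two- and three-person maximum grants,
# expressed as a share of the three-person maximum. The Arizona figures
# cited earlier are used as a stand-in; each state has its own schedule.
max_two_person = 275    # maximum monthly grant for a two-person family
max_three_person = 347  # maximum monthly grant for a three-person family

increment_not_received = max_three_person - max_two_person      # $72
share_not_received = increment_not_received / max_three_person  # about 0.21

print(f"${increment_not_received} per month, or {share_not_received:.0%} "
      "less than the grant an uncapped three-person family would receive")
# -> $72 per month, or 21% less than the grant an uncapped three-person
#    family would receive
```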
Because cash benefit amounts vary by state, the actual dollar amount not received by a family with two additional children ranges from $48 a month in Mississippi to $241 in California. Due to limitations of the existing research, we cannot conclude that family cap policies reduce the incidence of out-of-wedlock births, affect the number of abortions, or change the size of the TANF caseload. There are several major difficulties in obtaining conclusive evidence on the family cap. These include appropriately measuring the number of out-of-wedlock births and separating the impact of the family cap from the impact of other major policy and program changes that took place simultaneously and from the impact of broader social, cultural, or economic changes. We identified five studies that examined the relationship between the family cap and the incidence of out-of-wedlock births. Due to their methodological limitations, none of these studies can be used to cite conclusive evidence about the effect of the family cap on out-of-wedlock births. The studies we reviewed that examined the relationship between the family cap, abortions, and caseloads also had limitations that precluded conclusions about the effect of the family cap. (See a description of the studies and their limitations in app. II.) One of the major difficulties in studying the effect of the family cap is that major welfare policy changes at the state and federal levels have occurred over the past decade. These changes make it difficult to distinguish the effect of the family cap (or any other welfare policy) from the effect of other reforms, or from the impact of major changes in the messages being sent to welfare recipients about self-sufficiency through welfare reform. For example, PRWORA placed more emphasis on work requirements to encourage recipients to be self-sufficient and also allowed states more flexibility in implementing policies such as family caps, which also encourage self-sufficiency through the goal of reducing out-of-wedlock births. In such cases, it would be difficult to separate the combined effects of the various policies into the individual effects of each on the number of out-of-wedlock births. Another major difficulty with studying the effects of family caps on the number of out-of-wedlock births is separating the family caps’ effect from societal changes occurring between 1991 and 1997. Specifically, the birth rate among teens declined, and the birth rate for second children declined among women ages 15 to 24. The period in which these declining birth rates occurred overlapped with the period in which family caps were implemented. Since this overall decline began before any family caps were in effect, we can safely assume that this trend began independently of family cap implementation. Therefore, it is difficult to disentangle the true effect that the family cap policies may have had on declining birth rates from the effect of these national declines. Another barrier to understanding the effects of the family cap is the limited availability of needed information from national data sets for studying specific welfare policies relating to out-of-wedlock births. HHS only recently began collecting data on out-of-wedlock births for welfare recipients. These data will be helpful for analyzing state-level effects of particular welfare policies, such as the family cap. None of the five studies we reviewed was conducted in a way that would permit us to draw firm conclusions about the effect of the family cap on childbearing.
Four of the studies we reviewed could not isolate the effect of the family cap from the effects of other welfare reform policies and had other shortcomings. For example, in one study, the participants did not understand whether their benefits were affected by the family cap. The fifth study was more successful at isolating the effects of the family cap but had other limitations. While this fifth study was strongest in terms of the methods it used to examine the effects of the family cap, it was limited by the way it measured the occurrence of non-marital births. This study’s strengths included controlling for the effects of other factors: broader social and economic changes, differences across states over time, and other welfare reforms implemented at, or around, the same time as the family cap. However, this study evaluated the effect of the family cap by using a ratio of non-marital births to all births (that is, non-marital births divided by the sum of marital and non-marital births). Using this ratio is problematic because even if the number of non-marital births remained constant, the ratio could still decrease or increase because of changes in the number of marital births. For example, the ratio would decrease if marital births increased and non-marital births remained constant. Because of this limitation, the study cannot conclusively show the effect of the family cap on the number of non-marital births. As was the case with studies examining non-marital births, other studies we reviewed were not conducted in a way that would permit us to draw firm conclusions about the effect of the family cap on abortions, family planning, or the TANF caseload. The studies we reviewed had various limitations. The most common were limitations involving the inadequate measurement of family caps and the inability to isolate the effect of the family cap from other concurrent welfare reforms. We did not identify any studies that evaluated the impact of the family cap on poverty. While HHS’ research efforts cover a broad range of issues, including some related to reducing out-of-wedlock pregnancies, most of the studies have focused on TANF’s employment-related goal. Since the enactment of PRWORA, HHS has used its research authority and resources to encourage and support evaluations of various welfare program approaches and features. As described in its Third Annual Report to the Congress on the TANF program, August 2000, HHS’ research agenda has two main goals: (1) to increase the probability of success of welfare reform by providing timely, reliable data to inform policy and program design, especially at the state and local level where decision making has devolved; and (2) to inform the nation of policies chosen and their effects on children, families, communities, and social well-being. Within this research agenda, HHS spent about $26 million on research and technical assistance projects in fiscal year 2000, as shown in table 4. These projects include studies on the relative effectiveness of various approaches to moving welfare recipients into employment, the well-being of children of parents enrolled in welfare-to-work programs, and the effectiveness of job retention strategies for welfare recipients who become employed. Although most of the studies and research focus on employment-related issues, HHS does support some research related to the TANF goal of reducing out-of-wedlock pregnancies.
For example, some of the evaluations begun under waivers, as well as a few studies of families who have left welfare, have gathered some information relating to the family cap and out-of-wedlock pregnancies among welfare recipients. In addition, HHS has been involved with a significant initiative aimed at reducing teenage pregnancy, has funded research projects related to helping young adults avoid premature sexual activity and unintended pregnancies, and recently began a project involving interventions for unwed parents at the time of their child’s birth. HHS is also involved in a major ongoing evaluation of abstinence education programs designed to strengthen the research base and public knowledge about promoting abstinence among youth and the benefits of various approaches. Moreover, in fiscal year 2000, HHS took steps to increase the availability of information related to out-of-wedlock childbearing by requiring states to include information on their strategies for reducing out-of-wedlock pregnancies in their TANF annual reports. HHS also requires states to report data on the incidence of out-of-wedlock births among the TANF caseload. This new information may help HHS, states, and researchers share information on promising approaches to reducing out-of-wedlock pregnancies and contribute to a state’s ability to qualify for the Bonus to Reward Decrease in Illegitimacy Ratio. Even with the research under way and the steps HHS has recently taken, additional efforts may prove useful in, for example, improving data availability, conducting implementation studies, and striving to improve effectiveness studies. In our recently completed comprehensive review of the data available to assess states’ progress in meeting TANF’s goals, we found limited information regarding the goal of reducing out-of-wedlock pregnancies, particularly in comparison to the more widely available information related to helping welfare parents reduce their dependence on welfare through job preparation and employment. In some ways this is not surprising, given that states have focused their efforts on helping welfare recipients find employment and become economically independent. One expert said that states have focused their efforts on employment because much more is known about effective strategies for moving welfare recipients into work than is known about strategies for reducing births. Another expert believes that states have focused more on employment goals because more consensus exists about the role of government in helping welfare recipients become employed than about its role in influencing people’s childbearing decisions. HHS could play an important role in encouraging and supporting additional research in this area to assist states’ efforts to meet TANF goals. In the new and evolving welfare environment created by PRWORA, states have tremendous flexibility to design and implement strategies to meet four key TANF goals: providing assistance to needy families; ending dependence on government aid through job preparation, employment, and marriage; preventing and reducing out-of-wedlock pregnancies; and promoting two-parent families. States have moved ahead with strategies designed to move welfare recipients into employment, an area where much research exists providing useful information about what works best and for whom.
While states have been much less active in implementing strategies to reduce out-of-wedlock pregnancies, the use of family cap policies does show interest among states in meeting this congressionally established goal. Yet policymakers and program administrators have limited information available to help in understanding the effectiveness of the family cap or to aid in devising and implementing other strategies that may prove effective in reducing out-of-wedlock births. While overcoming the inherent difficulties in assessing the effectiveness of family cap policies and other approaches in the new welfare environment may be challenging, taking steps to improve data availability, conduct implementation studies, and improve effectiveness studies would be useful. State, local, and federal program administrators and policymakers would be well served by a stronger research base upon which to draw information on a range of effective strategies for reducing out-of-wedlock pregnancies. Even though the new welfare system is highly decentralized, PRWORA explicitly charged HHS with conducting research on state TANF programs, and HHS has played an important role in identifying and disseminating information on effective strategies for meeting welfare reform goals, with a particular focus on TANF’s employment-related goal. HHS also has supported some research that addresses effective strategies for accomplishing the goal of reducing out-of-wedlock pregnancies. In addition, HHS has taken steps to ensure that more data will be available from states on births to welfare recipients and on strategies that states have implemented to reduce out-of-wedlock pregnancies. Still, if HHS strengthened its efforts in this area and improved the research base, it could enhance states’ efforts to address this TANF goal. Moreover, if HHS submits information on its research agenda and efforts, with estimated resource needs, to the Congress, the Congress will have useful information as it considers TANF reauthorization and related research needs. We recommend that the Secretary of HHS review the department’s research agenda and, if appropriate, take steps to identify, encourage, and support additional studies that would increase the availability of information on how best to prevent and reduce out-of-wedlock pregnancies and more fully support the goals of TANF. This additional work could include improving the availability of data to support studies, working with states to identify and disseminate information on relevant promising practices, and supporting rigorous evaluation studies. Having additional research in this area would provide important information to administrators and policymakers and support the Congress’ efforts to reward states for strategies that succeed in reducing out-of-wedlock births. We also recommend that HHS provide its research agenda, with estimated resource needs, to the Congress for its use as it considers TANF reauthorization, including decisions about the role of HHS in conducting research and the resources HHS needs to fulfill that role. This will help to ensure that the key research and technical assistance needs of this $16.5 billion federal program are met. We provided HHS with an opportunity to comment on the report. HHS agreed with our conclusion that available research does not address the effect of family cap policies and said that the report addressed an important topic. A copy of HHS’ response is in appendix III. We also incorporated technical comments we received from HHS where appropriate.
Regarding our recommendation about the need for more research on effective strategies for reducing out-of-wedlock births, HHS agreed that more research is needed but noted that additional detail in the report on the methodological limitations of existing research would be helpful for understanding the significance of the existing studies and for individuals thinking about additional research. We believe the level of discussion on limitations in the report, including the appendix, is sufficient to address the focus of this report (what conclusions can be drawn about the effectiveness of the family cap from existing research) and to point to ways to improve studies on the family cap. Regarding our second recommendation that HHS provide its research agenda to the Congress for its use as it considers TANF reauthorization, HHS noted that it already has in place several mechanisms for keeping the Congress informed about its welfare research activities. These include annual reports to the Congress on its study of the outcomes of welfare reform with brief descriptions of welfare outcomes projects planned for funding each year, a chapter describing its research agenda in the annual TANF report to the Congress, and briefings to interested congressional staff upon request. We are aware of these information sources and believe they provide important information to the Congress. However, we continue to believe that providing HHS’ research agenda, with estimated resource needs, to the key authorizing committees for the TANF program, in a form designed specifically to aid the Congress in TANF reauthorization, would serve a useful purpose. HHS also expressed concern that readers might get an unnecessarily narrow view of the strategies available to address the goal of reducing out-of-wedlock pregnancies because the report focuses solely on family cap policies and research. As HHS noted, the TANF goal of preventing and reducing out-of-wedlock pregnancies addresses the overall population, not just welfare families, and only a modest portion of out-of-wedlock births is attributable to the welfare population. Yet family cap policies focus by definition on welfare families. We agree that this is an important point and that other strategies can address the goal of reducing out-of-wedlock pregnancies, such as the National Campaign to Prevent Teen Pregnancy, which we mentioned in the report. While a comprehensive review of the research related to strategies to reduce out-of-wedlock pregnancies was not the focus of this report, we did look beyond research on the family cap to assess whether HHS’ welfare-related research agenda supported the broad goal of preventing and reducing out-of-wedlock pregnancies and concluded that more research on strategies for addressing this goal, including those beyond the family cap, could play an important role in encouraging states’ efforts in this area. As suggested by HHS, we added a reference to the recently completed review of evaluation research related to reducing teen pregnancy. We also provided the draft report to two experts on welfare research, who agreed with our findings and overall conclusions. They also provided technical comments that we incorporated as appropriate. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date.
At that time, we will send copies of this report to the Honorable Max Baucus, Chairman, and the Honorable Charles Grassley, Ranking Minority Member, Senate Committee on Finance; the Honorable Bill Thomas, Chairman, House Committee on Ways and Means; the Honorable Wally Herger, Chairman, and the Honorable Benjamin Cardin, Ranking Minority Member, Subcommittee on Human Resources, House Committee on Ways and Means; the Honorable Tommy Thompson, Secretary of Health and Human Services; appropriate congressional committees; and other interested parties. We will also make copies available upon request. If you have any questions about this report, please contact me at (202) 512-7215. Other GAO contacts and staff acknowledgments are listed in appendix IV. This appendix provides more detail on how we (1) assessed the number of families whose benefits were affected by the family cap in an average month and the amount of cash benefits not received by capped-benefit families and (2) identified studies on the impact of the family cap and analyzed the content of those studies. We conducted our work between July 2000 and September 2001 in accordance with generally accepted government auditing standards. To determine the number of families whose benefits were affected by the family cap in an average month, we requested federal fiscal year 2000 data from TANF programs in all states with family cap policies. Twenty states provided us with the information that was used to determine the average monthly number of families in 2000 whose benefits were affected by the family cap in these states. Not all states were able to give us this information by family size. We requested the same caseload data for all TANF families in the family cap states. In order to estimate the monthly amount of cash assistance capped-benefit families did not receive due to the family cap, we used information provided by the Congressional Research Service. We estimated how much less families would receive because of the family cap policy by calculating the difference between the maximum benefits for two- and three-person families. (See table 5.) To determine the average monthly cash benefit for three-person capped-benefit families, we asked states for the average monthly cash benefit level for families affected by the cap, by family size. We then weighted this average for three-person families by each state’s three-person family cap caseload (a simple sketch of this weighting follows this discussion). Not all states were able to provide benefit levels for families affected by the cap, nor were all states able to provide information by family size. Many states were able to give us estimates based on the 12 months of state TANF data from federal fiscal year 2000; however, some were only able to give us data for state fiscal year 2000. Most states collected data on the universe or population to respond to our request, while a few states used a sampling methodology. We collected, reviewed, and analyzed information from available published and unpublished research on the effect of the family cap. To identify the literature, we followed three procedures: (1) interviewing experts to find out what studies on the impact of the family cap were completed or in process; (2) conducting library and internet searches; and (3) reviewing bibliographies of studies that focused on family cap issues.
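As a minimal sketch of the caseload weighting described above, the snippet below computes a caseload-weighted average benefit from state-reported figures. The two state entries are hypothetical placeholders, not the figures states actually reported to us.

```python
# Minimal sketch of the caseload weighting described above. The state
# figures are hypothetical placeholders, not data reported by states.
state_reports = {
    "State A": {"avg_benefit": 400, "capped_caseload": 5000},
    "State B": {"avg_benefit": 250, "capped_caseload": 1000},
}

total_caseload = sum(s["capped_caseload"] for s in state_reports.values())
weighted_average = sum(
    s["avg_benefit"] * s["capped_caseload"] for s in state_reports.values()
) / total_caseload

# Each state's average benefit for three-person capped families counts in
# proportion to its three-person family cap caseload.
print(f"Caseload-weighted average monthly benefit: ${weighted_average:.0f}")
# -> $375 with these placeholder figures
```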
Our final list consisted of nine studies, as listed in the bibliography, which evaluated the impact of the family cap on the incidence of out-of-wedlock births and abortions and the impact on TANF caseloads. We were unable to identify any studies evaluating the effect of the family cap on poverty. For the studies in our review, we recorded the quantitative results, summarized the methodologies used, and summarized the authors’ conclusions about the effect of the family cap. We used social science research principles to assess the methodological adequacy of these studies and the degree to which each study was able to isolate the effect of the family cap from other, concurrent welfare reform initiatives. At least two social scientists or statisticians with specialized training in evaluation research methodology reviewed each study. Conclusions in this report are based on our assessment of the evidence presented in these studies. We sent the list of research articles and summaries of our reviews of the studies to several experts who have conducted, or been involved in summarizing, extensive research in the field of welfare reform to confirm the comprehensiveness of our list of articles and the thoroughness of our reviews. We also conducted a second search in June 2001 to ensure that no new research articles or reviews had been published since our original search. We identified one new article on the effect of the family cap and other variables on child maltreatment. We did not have adequate time to incorporate an analysis of this study into our final report.

Appendix II: Studies on Family Cap Effect

“Lead” or “advertising” effects refer to the idea that welfare recipients may hear about, and react to, potential welfare policy changes that are advertised through popular media but have not yet actually taken effect.

In addition to those named above, the following individuals made important contributions to this report: Sara Schibanoff, Mary Abdella, and Shannah Wallace. Wendy Ahmed, Doug Sloane, Rudy Chatlos, Laura Shumay, Jim Wright, and Patrick DiBattista also provided key technical assistance.

Blank, Rebecca M. What Causes Public Assistance Caseloads to Grow? Cambridge, Mass.: National Bureau of Economic Research, December 1997. http://www.nber.org/papers/w6343 (cited Oct. 17, 2000).
Camasso, Michael J. and others. A Final Report on the Impact of New Jersey’s Family Development Program: Experimental-Control Group Analysis. Trenton, N.J.: New Jersey Department of Human Services, Oct. 1998.
Camasso, Michael J. and others. A Final Report on the Impact of New Jersey’s Family Development Program: Results from a Pre-Post Analysis of AFDC Case Heads from 1990-1996. Trenton, N.J.: New Jersey Department of Human Services, July 1998.
Council of Economic Advisers. The Effects of Welfare Policy and the Economic Expansion on Welfare Caseloads: An Update. Washington, D.C.: 1999.
Horvath-Rose, Ann and H. Elizabeth Peters. “Welfare Waivers and Nonmarital Childbearing.” In For Better and For Worse: Welfare Reform and the Well-Being of Children and Families, edited by Greg Duncan and P. Lindsay Chase-Lansdale. New York: Russell Sage, forthcoming 2001.
Mach, Traci. Measuring the Impact of Family Caps on Childbearing Decisions.
Albany, N.Y.: University at Albany-SUNY Working Paper 00-04 (cited Mar. 2001).
Moffitt, Robert A. The Effect of Pre-PRWORA Waivers on AFDC Caseloads and Female Earnings, Income and Labor Force Behavior. Baltimore, Md.: Johns Hopkins University, 1999.
Stapleton, David, Gina Livermore, and Adam Tucker (The Lewin Group). Determinants of AFDC Caseload Growth. Washington, D.C.: Department of Health and Human Services, Office of the Assistant Secretary for Planning and Evaluation, July 1997.
Turturro, Carolyn, Brent Benda, and Howard Turney. Arkansas Welfare Waiver Demonstration Project: Final Report. Little Rock, Ark.: The University of Arkansas, 1997.

Welfare Reform: Moving Hard-to-Employ Recipients Into the Workforce (GAO-01-368, Mar. 15, 2001).
Welfare Reform: Progress in Meeting Work-Focused TANF Goals (GAO-01-522T, Mar. 15, 2001).
Welfare Reform: Data Available to Assess TANF’s Progress (GAO-01-298, Feb. 28, 2001).
Welfare Reform: Work-Site-Based Activities Can Play an Important Role in TANF Programs (GAO/HEHS-00-122, July 28, 2000).
Welfare Reform: Improving State Automated Systems Requires Coordinated Federal Effort (GAO/HEHS-00-48, Apr. 27, 2000).
Welfare Reform: State Sanction Policies and Number of Families Affected (GAO/HEHS-00-44, Mar. 31, 2000).
Welfare Reform: Assessing the Effectiveness of Various Welfare-to-Work Approaches (GAO/HEHS-99-179, Sep. 7, 1999).
Welfare Reform: Information on Former Recipients’ Status (GAO/HEHS-99-48, Apr. 28, 1999).
Welfare Reform: States’ Experiences in Providing Employment Assistance to TANF Clients (GAO/HEHS-99-22, Feb. 26, 1999).
Welfare Reform: Status of Awards and Selected States’ Use of Welfare-to-Work Grants (GAO/HEHS-99-40, Feb. 5, 1999).
Welfare Reform: Child Support an Uncertain Income Supplement for Families Leaving Welfare (GAO/HEHS-98-168, Aug. 3, 1998).
Welfare Reform: States Are Restructuring Programs to Reduce Welfare Dependence (GAO/HEHS-98-109, June 18, 1998).
Welfare Reform: HHS’ Progress in Implementing Its Responsibilities (GAO/HEHS-98-44, Feb. 2, 1998).
To reduce out-of-wedlock pregnancies among welfare recipients, some states have imposed family caps on welfare benefits. One factor that determines the amount of cash benefits a family receives is the family's size: larger families receive more benefits. In states with a family cap policy, however, no additional cash benefits are provided with the birth of another child. Twenty-three states have implemented some variation of a family cap, breaking the traditional link between a family's size and the amount of its monthly welfare check. Generally, these states implemented family cap policies as part of their welfare reforms to reduce out-of-wedlock births and to encourage self-sufficiency. During an average month in 2000, 20 of the 23 family cap states reported that about 108,000 families received less in cash benefits than they would have in the absence of state-imposed family cap policies. In an average month, about 9 percent of welfare families in these states had their benefits affected by the family cap. A family's welfare benefits are affected by several factors, including earnings and receipt of child support. Therefore, states were unable to report the precise effect of the family cap on benefits. Because of limitations of the existing research, GAO cannot conclude that family cap policies reduce the incidence of out-of-wedlock births, affect the number of abortions, or change the size of the welfare caseload.
Located within the Department of Defense, the Corps has both military and civilian responsibilities. Through its Civil Works program, the Corps plans, designs, constructs, operates, and maintains a wide range of water resources infrastructure projects for purposes such as flood control, navigation, and environmental restoration. The Civil Works program is organized into three tiers: a national headquarters in Washington, D.C.; eight regional divisions that were established generally according to watershed boundaries; and 38 districts nationwide (see fig. 1). Corps headquarters primarily develops policies and provides oversight. The Assistant Secretary of the Army for Civil Works, appointed by the President, establishes the policy direction for the Civil Works program. The Chief of Engineers, a military officer, oversees the Corps’ civil works operations and reports on civil works matters to the Assistant Secretary of the Army for Civil Works. The eight divisions, commanded by military officers, coordinate civil works projects in the districts within their respective geographic areas. Corps districts, also commanded by military officers, are responsible for planning, engineering, constructing, and managing water resources infrastructure projects in their districts. Districts are responsible for coordinating with the nonfederal sponsors, which may be state, tribal, county, or local governments or agencies. Each project has a project delivery team of civilian employees that manages the project over its life cycle. The team is led by a project manager and comprises members from the planning, engineering, construction, operations, and real estate functions. In addition, the Civil Works program maintains a number of centers of expertise and research laboratories to assist the Corps divisions and districts in the planning, design, and technical review of civil works projects. The Corps established these centers to consolidate expertise, improve consistency, reduce redundancy, and enhance institutional knowledge, among other things. Unlike many other federal agencies, which have budgets established for broad program activities, the Corps receives most of its civil works funds as appropriations for specific projects. In general, the Corps receives “no-year” appropriations through the Energy and Water Development Appropriations Act; that is, there are no time limits on when the funds may be obligated or expended, and the funds remain available for their original purposes until expended. The conference report accompanying the annual Energy and Water Development Appropriations Act generally lists individual projects and specific allocations of funding for each project. Through this report, the appropriations committees essentially outline their priorities for the Corps’ water resources projects. Congress directs funds for many individual projects in increments over the course of several years. The Corps is responsible for planning, designing, and operating much of the nation’s water resources infrastructure. To do so, the Corps generally goes through a series of steps involving internal and external stakeholders. Usually, the Corps becomes involved in water resources infrastructure projects when a local community perceives a need or experiences a problem that is beyond its ability to solve and contacts the Corps for assistance. If the Corps does not have the statutory authority required for studying the problem, it must obtain authorization from Congress before proceeding.
Studies have been authorized through legislation, typically a Water Resources Development Act (WRDA), or, in some circumstances, through a committee resolution by an authorizing committee. Next, the Corps must receive an appropriation to study the project, which it seeks through its annual budget request to Congress. After receiving authorization and an appropriation, the Corps is to conduct a feasibility study. A Corps district office is to conduct the feasibility study, the cost of which is generally shared by a nonfederal sponsor, which may be a state, tribal, county, or local government or agency. The feasibility study investigates the problem and makes recommendations on whether the project is worth pursuing and how the problem should be addressed. The district office is to conduct the study and any needed environmental studies and document the results in a feasibility report. At specific points within the feasibility stage, a new infrastructure project is to undergo a series of technical reviews at the district, regional, and national levels to assess the project’s methodology and to ensure that all relevant data and construction techniques are considered. At the district level, all decision documents and their supporting analysis for a new project are to undergo a district quality control review by district leadership. This review is to assess the science and engineering work products to ensure that they are fulfilling project quality requirements. At the regional level, decision documents are to undergo an agency technical review by Corps officials from districts outside of the one conducting the study. This review verifies the district quality control review, assesses whether the analyses presented are technically correct and comply with published Corps guidance, and determines whether the documents explain the analyses and results in a reasonably clear manner for the public and decision makers. In some instances, a new project meeting certain criteria may also undergo an Independent External Peer Review. For these Independent External Peer Reviews, the Corps is required by law to contract with the National Academy of Sciences, a similar independent scientific and technical advisory organization, or an “eligible organization” to establish a panel of experts that will review a project study. Several criteria are used for selecting peer review panel members, including assessing and balancing members’ knowledge, experience, and perspectives in terms of the subtleties and complexities of the particular scientific, technical, and other issues to be addressed. After the project has gone through the levels of review that apply to it, the Chief of Engineers is to review the report and decide whether to sign a final decision document, known as the Chief’s Report, recommending the project for construction. The Chief of Engineers is to transmit the Chief’s Report and the supporting documentation to Congress through the Assistant Secretary of the Army for Civil Works and the Office of Management and Budget. Congress may authorize the project’s construction in a WRDA or other legislation. Most infrastructure projects are authorized during the preconstruction engineering and design phase, which begins after the feasibility study is complete. The purpose of this phase is to complete any additional planning studies and all of the detailed technical studies and designs needed to begin construction of the infrastructure project.
Once the construction project has been authorized, the Corps seeks funds to construct the infrastructure project through the annual budget formulation process. As part of the budget process, the Army, with input and data from Corps headquarters, division, and district offices, develops a budget request for the agency. In fiscal year 2006, the Corps introduced what it refers to as performance-based budgeting, which uses performance metrics to evaluate projects’ estimated future outcomes and gives priority to those it determines have the highest expected returns for the national economy and the environment, as well as those that reduce risk to human life. Congress directs funds for individual projects in increments over the course of several years. If the infrastructure project has been appropriated funds, the district enters into a cost-sharing agreement with the nonfederal sponsor. (Section 2035 of WRDA 2007, as amended (Pub. L. No. 110-114, § 2035 (2007), as amended by Pub. L. No. 113-121, § 3028 (2014)), requires the Chief of Engineers to ensure that the design and construction activities for hurricane and storm damage reduction and flood damage reduction projects have a safety assurance review by independent experts, if the Chief of Engineers determines that such a review is necessary to assure public health, safety, and welfare, prior to initiation of physical construction and periodically thereafter until construction activities are completed.) The Corps develops a water control manual for each project, which guides how the Corps operates the project for its authorized purposes in consultation with interested stakeholders in the area of the project that may be impacted by its operations. In addition to water control manuals for individual projects, the Corps may also have master water control manuals that outline the operations of a system of projects. The Corps may also develop operational guidance for non-Corps projects if, for example, the Corps has responsibility for flood control or other operations at that project. Water control manuals typically outline the operating criteria and guidelines for varying conditions and specifications for water storage and releases from a reservoir, including instructions for obtaining and reporting appropriate hydrologic data. The Corps uses a variety of hydrologic data (data relating to the movement and distribution of water) and forecasting data in its planning, designing, and operation of water resources infrastructure that can help it plan for extreme weather events. Much of these data are collected by other federal agencies as part of nationwide efforts to gather weather and hydrologic data. Table 1 shows examples of the types of hydrologic data collected by various federal agencies and used by the Corps. Corps officials and reports by federal agencies have highlighted limitations in some of the data the Corps uses in its planning, design, and operations of water resources infrastructure. Examples include the following:

Streamflow. The Corps uses streamflow information from the National Streamflow Information Program in its planning, designing, and daily operations. However, according to Corps officials and USGS data, the loss of streamgages due to funding constraints has reduced the available information about streamflows. According to USGS, from 1995 to 2008, 948 critical streamgages with 30 or more years of records were discontinued. Further, a USGS report noted that the loss of long-record streamgages reduces the potential value of streamflow information for infrastructure operations and design applications.
Streamgage data are also used to produce climate change information upon which the Corps bases its adaptation planning. To help mitigate these losses, the Corps has a formal agreement to provide funding to USGS to operate streamgages that provide data for the Corps' water management activities and in fiscal year 2013 provided USGS with $18 million for this purpose. Precipitation related to extreme storms. Until 1999, the Corps used NWS Hydrometeorological Reports (site-specific probable maximum precipitation studies) for its designs. However, NWS discontinued providing these services in 1999 due to lack of funding, and some Corps officials said they have been using outdated data since that time. In response, the Corps has worked with the Interagency Federal Work Group on Extreme Storm Events since 2008 and established its own Extreme Storm Team to address Corps data needs, as well as the needs of other agencies. Plains snowpack. The Corps uses plains snowpack data in its runoff forecasting for operations. The Corps and NWS have found limitations in these snowpack data. For example, an NWS report assessing the 2011 Missouri River flood found that modeled information on snow-water equivalent is available, but observational data are sparse and not always representative of basin-wide conditions. WRRDA 2014 included a requirement that the Secretary of the Army, in coordination with other specified agencies, carry out snowpack and soil moisture monitoring in the Upper Missouri Basin. As of June 2015, those agencies had not yet developed the monitoring system due to funding constraints, according to agency officials.
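The design value of long streamgage records can be made concrete with a flood-frequency calculation. The sketch below is illustrative only and is not the Corps' or USGS's actual procedure (federal flood-frequency guidance relies on a log-Pearson Type III analysis); it fits a simple Gumbel distribution to hypothetical annual peak flows to estimate a 100-year flood, the kind of design statistic that becomes less reliable as long-record gages are discontinued.

```python
import math

def gumbel_quantile(annual_peaks, return_period):
    """Estimate the T-year flood from annual peak flows using a
    method-of-moments fit to a Gumbel (EV1) distribution.
    Illustrative only -- not the log-Pearson III procedure used
    in federal flood-frequency guidance."""
    n = len(annual_peaks)
    mean = sum(annual_peaks) / n
    std = math.sqrt(sum((q - mean) ** 2 for q in annual_peaks) / (n - 1))
    beta = std * math.sqrt(6) / math.pi   # Gumbel scale parameter
    mu = mean - 0.5772 * beta             # Gumbel location (Euler-Mascheroni constant)
    # T-year quantile: the flow exceeded with probability 1/T in any year.
    p = 1.0 - 1.0 / return_period
    return mu - beta * math.log(-math.log(p))

# Hypothetical annual peak flows (cubic feet per second) from one gage.
peaks = [4200, 3900, 5100, 6100, 2800, 7400, 3300, 4800, 5600, 6900,
         3100, 4400, 5900, 8200, 3700, 4100, 5300, 6600, 2900, 7100]
print(f"Estimated 100-year flood: {gumbel_quantile(peaks, 100):,.0f} cfs")
```

With a 20-year record such as this, the estimate can shift substantially with each additional year of data; a record of 30 or more years, like those at the discontinued gages USGS describes, narrows that uncertainty.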
As directed by CEQ instructions and guidance implementing Executive Order 13514, the Assistant Secretary of the Army for Civil Works released the Corps' policy regarding adaptation in June 2014. The policy states that "mainstreaming climate change adaptation means that it will be considered at every step in the project life cycle for all USACE [U.S. Army Corps of Engineers] projects, both existing and planned … to reduce vulnerabilities and enhance the resilience of our water resource infrastructure." This policy also established the Corps' Committee on Climate Preparedness and Resilience to oversee and coordinate the agency's climate change adaptation planning and implementation. In January 2015, Executive Order 13690 was issued establishing a federal flood risk management standard, which applies to federal actions—including the construction of facilities with federal funds—in, and affecting, floodplains. Under the standard, certain new construction, substantially improved structures, and substantially damaged projects must meet a certain elevation level, among other things. Draft floodplain management guidelines were issued in February 2015 and were available for public comment through May 6, 2015. Within 30 days of the close of this public comment period, Executive Order 13690 directed agencies to submit an implementation plan to the National Security Council (the President's principal forum for considering national security and foreign policy matters with senior national security advisors and cabinet officials) that contains milestones and a timeline for implementation of the executive order and standard. In addition, agencies have been restricted from using appropriated funds to implement the standard until input from Governors, Mayors, and other stakeholders has been solicited and considered. According to the executive order, agencies should not issue or amend regulations and procedures to implement the executive order until after implementing guidelines are issued. Thus, it is unclear how the standard will affect the Corps' operations. The Corps addresses the potential impact of extreme weather events in its planning and operations of water resources infrastructure projects in various ways, including updating and developing guidance to be used in the planning process; using tools, such as water control manuals, in its operation of projects; and collaborating with key federal agencies and stakeholders. The Corps considers the potential impacts of extreme weather in its planning process by updating and developing guidance, incorporating the uncertainties of extreme weather events in planning for new infrastructure projects, and through its Civil Works Transformation Initiative. For example, in 2009, the Corps issued guidance for incorporating sea level change in its planning, construction, and operation of water resources infrastructure projects impacted by the rise and fall of sea levels. This guidance, which was updated in 2011 and 2013, directs Corps districts to consider three scenarios of potential sea level change when designing and constructing new infrastructure, as well as when managing existing water infrastructure. According to Corps documents, sea level change can have a number of impacts on coastal and estuarine zones, including more severe storm and flood damages.
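The three sea level change scenarios that the guidance directs districts to consider are not detailed in this report. As a purely hypothetical illustration of how such scenarios can be constructed, the sketch below computes low, intermediate, and high curves as a linear historical trend plus an accelerating quadratic term; the rates and coefficients are assumptions chosen for illustration (loosely patterned on published modified National Research Council curves), not values taken from the Corps' guidance.

```python
def sea_level_change(years_from_base, linear_rate_m_per_yr, accel_coeff):
    """Projected sea level change (meters) after a given number of
    years: a linear historical trend plus a quadratic acceleration
    term. All coefficients are illustrative assumptions."""
    t = years_from_base
    return linear_rate_m_per_yr * t + accel_coeff * t * t

# Illustrative scenario parameters (assumed, not from Corps guidance):
# low extends the historical linear trend only; intermediate and high
# add progressively larger acceleration terms.
scenarios = {
    "low":          (0.0017, 0.0),
    "intermediate": (0.0017, 2.71e-5),
    "high":         (0.0017, 1.13e-4),
}

for horizon in (50, 100):
    for name, (rate, accel) in scenarios.items():
        rise = sea_level_change(horizon, rate, accel)
        print(f"{horizon}-yr horizon, {name:>12}: {rise:.2f} m")
```

A district could then test a design against all three curves at the 50- and 100-year horizons and note where performance degrades.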
In 2014, the Corps issued additional guidance on how to evaluate the effects of projected future sea level change on Corps projects and what to consider when adapting projects to this projected change. This guidance is intended to incorporate sea level change into the planning process to improve the resilience of projects and maximize performance over time. In addition, in May 2014, the Corps issued guidance on how to incorporate potential impacts of extreme weather into the planning of inland infrastructure projects in accordance with Executive Order 13653 and the President's Climate Action Plan. This guidance describes the purpose and objective for incorporating such consideration into current and future studies and provides an example of how to incorporate new science and engineering in hydrologic analyses for new and existing Corps projects. Moreover, the guidance establishes a procedure to perform a qualitative analysis of potential climate threats and impacts to the Corps' hydrology-related projects and operations. The guidance calls for districts to conduct an initial screening-level qualitative analysis to identify whether climate change is relevant to the project goals or design. If climate change is determined to be relevant to the project goals or design, the guidance directs districts to evaluate information about climate change impacts, such as changes in the processes governing rainfall runoff or snowmelt. This information is intended to help identify opportunities to reduce potential vulnerabilities and increase resilience as a part of the project's authorized operations, as well as to identify any limitations or issues associated with the data collected. The Corps also issued guidance in October 2014 on determining the appropriate use of paleoflood information in its planning and operation of water infrastructure. According to Corps guidance, useful information can be gained from paleohydrology, or the evidence of the movement of water and sediment in stream channels before continuous hydrologic records or direct measurements became available. For example, this information can be derived from high water marks, tree rings, and gravel deposits, among other things, and can help Corps districts estimate flood peak magnitudes, volumes, and durations for flood damage assessments, or evaluate design criteria. This guidance also notes that paleoflood information may not be suitable for all projects, such as those in watersheds that have been altered through time, either by geologic processes or by human activity. In addition to updating and developing guidance for planning and operating water infrastructure, Corps headquarters officials told us that they have also taken steps to incorporate uncertainty, such as that associated with extreme weather, into their planning process through the Civil Works Transformation Initiative. According to Corps documents, the Civil Works Transformation Initiative began in 2012 to aid the Corps in meeting current and future challenges and addressing the water resources needs throughout the United States. As part of the Initiative, the Corps updated its planning process in 2012 to help strengthen the incorporation of risk into planning assumptions for feasibility studies on new infrastructure projects. For example, Corps headquarters officials told us that they have adopted a risk-informed approach to help address uncertainty, such as that associated with extreme weather, by defining the levels of risk associated with a variety of project designs.
Corps officials said that, beginning in 2012, feasibility studies for new projects have used this approach to identify risks, including extreme weather, that may occur throughout the life cycle of a water resources project. Headquarters officials told us that, under this approach, project delivery teams must address risks associated with climate change in their project planning documents. To help ensure that the appropriate weather and climate data are being used in the planning process, since 2012, the Corps' external peer review process has asked experts to review project plans and note whether appropriate data and information were used to respond to extreme weather risks. Corps officials told us that independent external review questions relating to climate change differ, depending on when they were prepared, as well as on the type of input provided by the project delivery team and district officials. Because the Civil Works Transformation Initiative is not yet complete, it may be too early to evaluate its impact. The Corps uses a variety of tools in its operations to help prepare for extreme weather, including water control manuals and an automated information system. Water control manuals, which outline the operation of water storage at individual projects or a system of projects, are used by the Corps to prepare for extreme weather events. These manuals are to outline the various types of weather-related data the Corps uses in its daily operations, as well as when extreme weather events occur. The manuals are also to describe the automated processes used in a data exchange with USGS and the regional NWS center, which provides weather forecasts, rainfall information, and streamflow data, among other information, to the Corps to prepare for extreme weather events. In addition, water control manuals include a description of the historical information that is used to create models to predict streamflow and reservoir stages. Corps guidance, in the form of engineer regulations, describes what is to be included in water control manuals, such as directing districts to establish and outline special operational practices during emergency situations, as well as a drought contingency plan. According to Corps officials, this guidance ensures that Corps districts' water control manuals are created in a standardized manner so that all districts are prepared for extreme weather events. Corps guidance also directs districts to ensure that all authorized purposes of a project are addressed in its operations and notes that operations must strike a balance among those purposes, which often have competing needs. According to the Corps' engineer regulations, any operational priorities among multiple authorized purposes during extreme conditions, such as drought or flooding events, may need to be defined in water control manuals. According to Corps guidance, water control manuals also must contain provisions for the Corps to temporarily deviate from operations, when necessary, to alleviate critical situations. According to Corps officials, critical situations may include extreme weather events, such as a flood or drought. For example, in December 2014, the Corps approved a deviation from operations at Prado Dam in southern California, which allowed the Corps to temporarily retain water captured behind the dam following a rainstorm.
This deviation, along with other deviations in the southern California region, was in response to the drought that California has experienced since 2011. According to Corps guidance, deviations are meant to be temporary, and, if a deviation lasts longer than 3 years, the water control manual must be updated. Corps officials we spoke with were unaware of any deviations that, as of May 2015, had lasted more than 3 years. Corps headquarters and district officials we interviewed said that some water control manuals may need to be updated due to changing conditions in the watershed; however, they also said that some manuals in existence for many years may not necessarily need to be updated since, in part, they allow for flexibility with changing weather trends. Specifically, headquarters and district officials we spoke with said projects that have not experienced a change in land use around the basin, a change in climate patterns, or new weather-related information may not need revised manuals. Furthermore, headquarters officials said water control manuals, including reservoir rule curves and drought contingency plans, have proved relatively robust to the climate changes already observed in the West. According to these officials, when combined with the ability to temporarily deviate from operations when necessary, there is flexibility to respond to short-term and long-term needs based on the best available information and science. The Corps is currently working to develop and implement a strategy to update drought contingency plans to account for climate change. According to Corps officials, the agency will complete its strategy for updating these plans by fiscal year 2016. Corps guidance directs districts to periodically review and revise water control manuals, as necessary, to conform to changing requirements resulting from land development in the project area, improvements in technology, and the availability of new hydrologic data, among other things. Some district officials said water control manuals have not been consistently updated to reflect changing conditions in the watershed, primarily because of funding constraints. Corps headquarters officials said there is not a Corps-wide process in place to assess whether manuals should be updated; rather, it is up to the discretion of the districts to do so. Some district officials said that they had requested funding to update water control manuals but did not receive it. We will continue to assess this issue. The Corps has also established the Corps Water Management System (CWMS), an automated information system supporting the Corps' operations to, among other things, prepare for extreme weather. CWMS contains various data, such as weather conditions, soil moisture, snow accumulation, streamflow, and water levels, that can be used by the districts to develop models of watershed and channel processes and to forecast the future availability of water. For example, CWMS allows the districts to simulate different operational scenarios to determine which one will more likely result in higher downstream water levels due to a large storm. According to Corps documentation, information from the simulation is intended to help the districts assess the economic, environmental, life safety, and other consequences, such as those from an extreme weather event, of different operational scenarios and lead to better-informed operational decisions.
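CWMS models themselves are not described in this report. As a minimal, hypothetical sketch of the kind of scenario comparison described above, the code below steps a simple reservoir mass balance through a storm inflow sequence under two alternative release policies and reports the peak downstream flow for each; all structures, rules, and numbers are invented for illustration.

```python
def simulate_reservoir(inflows, release_rule, storage=500.0, capacity=1000.0):
    """Step a simple reservoir mass balance through an inflow sequence
    (arbitrary volume units per time step). release_rule maps the
    current storage to a planned release; anything above capacity
    spills uncontrolled. Returns the peak downstream flow observed."""
    peak_downstream = 0.0
    for inflow in inflows:
        storage += inflow
        release = min(release_rule(storage), storage)
        storage -= release
        spill = max(storage - capacity, 0.0)   # uncontrolled spill over the dam
        storage -= spill
        peak_downstream = max(peak_downstream, release + spill)
    return peak_downstream

# Hypothetical storm hydrograph entering the reservoir.
storm = [20, 40, 90, 160, 220, 180, 120, 70, 40, 20]

# Scenario A: hold releases low until storage is high, then ramp up.
# Scenario B: pre-release early to create flood storage before the peak.
scenario_a = lambda s: 30.0 if s < 800 else 150.0
scenario_b = lambda s: 80.0 if s > 400 else 20.0

for name, rule in [("hold-then-ramp", scenario_a), ("pre-release", scenario_b)]:
    print(f"{name}: peak downstream flow = {simulate_reservoir(storm, rule):.0f}")
```

An actual CWMS run couples calibrated watershed and channel models with current gage and forecast data, but the underlying decision logic is the same in spirit: simulate the alternatives, then compare their downstream consequences.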
For example, Los Angeles district officials told us that CWMS models are being calibrated for expected maximum flood conditions, which can allow them to better forecast runoff volumes in areas prone to extreme weather events. According to Corps documents, CWMS also will support rapid flood forecasting by the district and help reduce the potential for flooding in the basin. CWMS has been deployed to 35 of 38 districts since 2009 but has not yet been fully integrated into all Corps districts, and the watershed and channel models had not been fully implemented as of June 2015. The Corps plans to complete integrating CWMS into all districts by the end of 2015, and an effort is under way to have the watershed and channel models fully integrated by 2023 or earlier, depending on funding. The Corps has taken steps to prepare for extreme weather through its participation in various collaboration efforts with federal agencies and other stakeholders at both the regional and national levels. At the regional level, Corps district officials told us that their collaboration with federal agencies and local stakeholders is sufficient for effective planning and operation of water infrastructure. Corps officials told us they regularly collaborate with federal agencies and local stakeholders to help ensure that they have the weather and climate data needed to plan and operate water infrastructure and to address extreme weather in a coordinated manner. For example, Alaska district officials told us their district has a long history of collaborating with NWS and USGS to monitor data across the remote areas of Alaska and now collaborates with these agencies using a geostationary satellite. Little Rock district officials told us they participate with agency officials from USGS and NOAA, as well as other stakeholders, in Tri-Agency Fusion Team meetings to discuss ways to improve the accuracy of the data generated by the agencies and the accuracy and utility of rainfall observations and river forecasts. Savannah district officials told us they regularly communicate with NWS officials in advance of and during extreme weather events. Within certain regions, Corps district officials told us they regularly interact with state and federal officials through the Silver Jackets program to, among other things, identify gaps among agency programs, leverage information and resources, and provide access to national programs such as the Corps Levee Inventory and Assessment Initiative. The Corps is also conducting regional pilot studies nationwide to test different methods and frameworks for adapting to climate change, in which it involves numerous stakeholders. For example, four Corps districts completed an Ohio River Basin pilot study in 2013 in which the districts worked with more than 70 stakeholders, including federal and state agencies, academia, and private entities. The pilot study considered the potential effects of climate change on future management of water resources, including 83 Corps dams, 131 levees and floodwalls, and 63 navigational locks in the 204,000 square miles of the basin. As a result of this pilot study, a consortium of basin interests convened the Ohio River Basin Alliance to address common interests in water resources and basin-wide climate change issues. Corps district officials told us that they may also interact with state agencies, universities, and private industry to collect data that may not be collected by federal agencies.
For example, Little Rock district officials told us they have used the Community Collaborative Rain, Hail, and Snow Network, in which precipitation data are collected by volunteer citizens and published daily on the Internet by Colorado State University. Walla Walla district officials have been participating with the University of Washington since 2012 in support of the Columbia River Treaty analysis, which involves data collection, modeling, and trends on future weather and climate changes predicted for the region. Some districts told us they also have gained valuable and up-to-date technical information on engineering and design techniques from private industry associations and made key contacts at industry conferences. However, all the districts we spoke with told us they face challenges in attending weather-related conferences sponsored by entities other than the federal government due to changes in Department of Defense conference policies. Corps officials also collaborate with other federal agencies and stakeholders at the national level to identify data gaps that may exist and to disseminate critical water resource information and data. For example, Corps headquarters officials have participated in the Climate Change and Water Working Group, a working-level forum established to share information and accelerate the application of climate information in water management, among other things. Through this group, the Corps, along with local, state, and federal water management agencies, has examined water user needs for climate and weather information for long- and short-term water resources planning and management and has issued two reports on the findings. We have previously reported that the Corps, along with NOAA, USGS, and other stakeholders, developed the Federal Support Toolbox, a federal Internet portal, to provide current, relevant, and high-quality information on water resources and climate change data applications and tools for assessing the vulnerability of water programs and facilities to climate change. The toolbox is available online through the Integrated Water Resource Science and Services group and is maintained by the Corps with contributions from more than 16 federal agencies and nongovernment partners. According to agency officials, the Integrated Water Resource Science and Services group consists of four core agencies (USACE, NWS, USGS, and the Federal Emergency Management Agency) and is currently focused on improvements of water forecasting and integration of related models and databases. The Corps has assessed certain water resources infrastructure projects to determine whether they are designed to withstand extreme weather events. Specifically, the Corps has national programs in place to perform risk assessments on dams and levees, as required by law, but, unlike these programs, the Corps is not required to perform systematic, national risk assessments on other types of infrastructure, such as floodwalls and hurricane barriers, and has not done so. However, Corps officials said they have been required to assess such infrastructure after an extreme weather event in response to statutory requirements. The Corps has also performed some preliminary vulnerability assessments for sea level rise on its coastal projects and is beginning to conduct vulnerability assessments of inland watersheds to determine how a changing climate is affecting those projects.
The Corps performs risk assessments of its dams and levees through two national programs—the Dam Safety Program and the Levee Safety Program—but does not have similar programs in place for other types of infrastructure. As part of its Dam Safety Program, from 2005 to 2009, the Corps performed a screening of 706 of its 707 dams to determine which of its five risk classifications those dams fell under—very high urgency, high urgency, moderate urgency, low urgency, and normal. This risk classification addresses the probability of failure and the resulting potential consequences of failure. Part of the assessment determines whether the dams are designed and operated in such a way that, during a potential flood event, the downstream flooding would not be more severe than flooding if the dam did not exist. The risk assessment also takes into account the likelihood of an extreme weather event. According to Corps officials, all Corps-operated dams will undergo periodic assessments every 10 years because the risk at any given dam may change over time. The Corps has also established the Risk Management Center as a resource to manage and assess risks to dams and levee systems, and the Dam Safety Modification Mandatory Center of Expertise to provide technical advice, oversight, review, and production capability to districts performing any dam modifications in response to the risk assessment. The Dam Safety Modification Mandatory Center of Expertise also maintains a list of subject matter experts in the field of dam safety whose names are accessible via the Internet. The Corps assesses the risk of its dams through the Dam Safety Program, but not all dam safety modification projects have been funded. More specifically, the Corps' initial screening of dams, completed in 2009, found that 18 dams fell under the very high urgency classification, 83 dams fell under the high urgency classification, 219 dams fell under the moderate urgency classification, and the remaining 386 dams fell under the low urgency classification. For those dams in the very high urgency, high urgency, and moderate urgency classifications, Corps guidance directs that an Interim Risk Reduction Measures Plan be developed, which is a temporary approach to reduce dam safety risks while long-term solutions are being pursued. The Corps found that completing dam safety modifications on its dams in the three most urgent classifications would cost more than $23 billion: about $4.2 billion for the very high urgency classification, about $7 billion for the high urgency classification, and about $12 billion for the moderate urgency classification. According to Corps officials, from fiscal year 2009 through fiscal year 2014, the Corps received about $2.5 billion in appropriations to begin dam safety modification studies and construction on 15 very high urgency dams. As of June 2015, dam safety modification construction had been completed on seven very high urgency dams, and the Corps was working on the other eight. Corps officials we spoke with in two districts said that they recognize that dams in other districts may fall into the very high urgency classification for modifications, but the dams in their own district, which are in the high urgency classification, are also at a high risk of failure should an extreme weather event occur. The Corps also operates the Levee Safety Program, which began in 2006.
According to the Corps, although the Dam and Levee Safety Programs are similar in their approach to risk assessments, the Levee Safety Program has not progressed as quickly, largely because the Corps owns and operates less than 20 percent of the 14,700 miles of levees that fall under the program. Until 2009, the Corps collected information on these levees for inclusion in the National Levee Database. Since that time, the Corps has been conducting risk assessments of the levees included in the Levee Safety Program, which are divided into 2,887 segments; risk assessments had been completed for about 43 percent of those segments as of April 2015. The Levee Safety Program risk assessments are to take into account the likelihood of an extreme weather event and how a levee will perform during that event. Based on the risk assessments that had been completed as of April 2015, 1 percent of those levees are classified as very high urgency, 8 percent are classified as high urgency, 27 percent are classified as moderate urgency, and 64 percent are classified as low urgency. As of June 2015, the Corps had not begun making improvements to any of the levees it owns and operates based on the completed risk assessments because it is still conducting the assessments and will prioritize any improvements once they are complete. Improvements made to non-Corps levees based on the results of the risk assessments are at the discretion of the local sponsor, with advice from the Corps on risk reduction measures. Unlike the requirements for the Dam Safety and Levee Safety Programs, the agency is not required to perform risk assessments on other types of existing infrastructure, such as hurricane barriers and floodwalls, and it has not yet conducted an inventory of such infrastructure. According to Corps officials, the agency has not performed systematic, national risk assessments on other types of existing infrastructure given funding limitations (see table 2). However, the Corps has received appropriations for and has been required to assess such infrastructure after an extreme weather event, such as in the aftermath of Hurricane Katrina in 2005 and Hurricane Sandy in 2012. Subsequent to Hurricane Sandy, for example, the Corps released, in November 2013, an assessment of the performance of specific projects and, in January 2015, a more general assessment of the North Atlantic coastline. The project-specific performance assessment evaluated 75 constructed coastal storm risk management projects in the Corps' North Atlantic Division, which extends from Maine to Virginia, 31 projects in the Great Lakes and Ohio River Division, and 9 projects in the South Atlantic Division. For the more general assessment, the Corps looked at the risk along 31,000 miles of Atlantic Ocean shoreline from Virginia to New Hampshire as a system. The Corps divided the area into multiple areas of coastline that were hydraulically separate from one another, studying the risk of flood, as well as the exposure of the populations, exposure by population density, infrastructure density, vulnerability by socioeconomic factors, and vulnerability of environmental and cultural resources. This risk assessment identified, among other things, nine high-risk areas of the North Atlantic Coast that warrant additional analyses to address coastal flood risk.
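The dam and levee screenings described above bin each structure into an urgency class by weighing the likelihood of failure against the consequences of failure. The Corps' actual scoring criteria are not reproduced in this report; the sketch below is a hypothetical illustration of such a risk-binning step, with the probabilities, consequence index, and thresholds all invented for illustration.

```python
def classify_urgency(annual_failure_prob, consequence_index):
    """Bin a structure into an urgency class from an annualized failure
    probability and a 0-100 consequence index (life safety, economic,
    environmental). Thresholds are arbitrary illustrations, not the
    Corps' actual screening criteria."""
    risk = annual_failure_prob * consequence_index
    if risk >= 0.5:
        return "very high urgency"
    if risk >= 0.1:
        return "high urgency"
    if risk >= 0.01:
        return "moderate urgency"
    return "low urgency"

# Hypothetical structures: (name, annual failure probability, consequences).
inventory = [
    ("Dam A",           1e-2, 80),   # aging dam above a populated valley
    ("Levee segment B", 5e-3, 30),
    ("Floodwall C",     1e-4, 60),
    ("Hurricane barrier D", 2e-3, 95),
]

for name, p, c in inventory:
    print(f"{name}: {classify_urgency(p, c)}")
```

A screening of this kind is what allows limited modification funding to flow first to the small set of very high urgency structures, as described above for the Dam Safety Program.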
As of June 2015, the Corps had made no improvements to its projects based on the general risk assessment in the Corps' study of Hurricane Sandy, which was made final in January 2015. However, according to Corps officials, many projects identified in the project-specific assessment received funding for repair and restoration through the Corps' Flood Control and Coastal Emergencies Program. The Corps conducted these risk assessments following Hurricanes Katrina and Sandy after receiving an appropriation for and being required by law to conduct them, as the Corps does not generally receive funding for broad program activities, such as risk assessments on infrastructure other than dams and levees. However, the Corps has not worked with Congress to develop a more stable funding approach, as we recommended in September 2010, which could facilitate such risk assessments. That report found that a more stable funding approach could improve the overall efficiency and effectiveness of the Civil Works program. The department partially concurred with our recommendation, stating that it would promote efficient funding. As the frequency and intensity of some extreme weather events are increasing, without performing risk assessments on other types of existing infrastructure, such as hurricane barriers and floodwalls, before an extreme weather event (e.g., using a risk-based model), the Corps will continue to take a piecemeal approach to assessing risk on such infrastructure. For this reason, we continue to believe our recommendation is valid. The Corps has conducted two nationwide screening-level assessments to assess its vulnerability to climate change in its management and operation of water infrastructure. According to the Corps' 2014 Climate Adaptation Plan, these vulnerability assessments are necessary so the Corps can address a changing climate and successfully perform its missions, operations, programs, and projects in an increasingly dynamic environment. In 2013, the Corps began an initial project-level vulnerability assessment for coastal projects relating specifically to sea level change. Teams from 21 Corps districts with coastal projects reviewed more than 1,431 projects to determine the impact of sea level change at the 50- and 100-year planning horizons. These projects were given a score based on science-based parameters to categorize the level of impact that sea level change would have on each project. The Corps completed these initial vulnerability assessments for coastal projects in September 2014 and determined that 944 of the 1,431 projects appear to be able to withstand future changes resulting from sea level rise, 94 projects may experience high or very high impacts as a result of sea level rise, and 393 projects may experience a low or medium impact as a result of sea level rise. As of June 2015, the Corps had begun prioritizing, for more detailed assessment, the 94 projects that may experience high or very high impacts as a result of sea level rise. Corps officials said they do not yet know when this prioritization will be completed. As of June 2015, the Corps was also piloting methods to conduct the more detailed vulnerability assessments. In one pilot, through a vulnerability assessment of a hurricane barrier in New England that was designed in 1962 to provide navigation and flood risk reduction benefits for the area surrounding a harbor, the Corps found the project had experienced a 6-inch loss from its design elevation due to sea level rise.
The hurricane barrier was listed as having potentially high impact from sea level rise in the screening assessment. The more detailed pilot assessment identified a potential future loss of elevation of between 6 inches and 2 feet 3 inches by 2065. Based on Corps data, the change in sea level has resulted in a reduction in the distance between the top of the water and the top of the hurricane barrier from 17 feet at its design to 16.5 feet currently, and potentially down to 14.25 feet within 50 years (see fig. 2). According to Corps officials, these future changes in distance between the top of the sea and the top of the hurricane barrier can result in a greater risk of floods and more frequent operation of the navigation gate, which in turn reduces navigation reliability and increases maintenance costs. The Corps had initially planned to release a draft report on the initial coastal vulnerability assessment in December 2014, but, as of June 2015, the final report had not been released. Corps officials said the final report will likely be released in late summer 2015. Corps officials acknowledge that the science is not yet available to conduct project-level vulnerability assessments for inland projects. However, the Corps initiated a study in 2012 that focused on how hydrologic changes due to climate change may impact freshwater runoff in some watersheds. As of June 2014, the Corps had identified the top 20 percent of watersheds that were most vulnerable for each business line through this initial watershed study. Corps officials said this is an initial screening-level assessment that will lead to more detailed assessments of the most vulnerable water resources infrastructure projects and those with the highest potential impact from extreme weather events. The Corps is working with an expert consortium of federal agencies, academia, nongovernmental organizations, and others to develop the climate and hydrology information necessary to conduct project-level assessments. Corps officials said that the consortium will develop the information needed to perform the project assessments and that it is unclear how long developing the necessary science will take. According to the Corps' 2014 Climate Change Adaptation Plan and Corps headquarters officials, the inland and coastal vulnerability assessments will be merged over the next several years and will be used to determine how the Corps needs to manage and plan for new water resource projects. As of May 2015, Corps officials told us that they did not have a timeline for merging these assessments, in part because the climate and hydrology information is not yet available. We provided a draft of this report to the Departments of Agriculture, Commerce, Defense, and the Interior for review and comment. These agencies did not provide written comments. In an e-mail received on June 29, 2015, the audit liaison for NOAA at the Department of Commerce provided technical comments for our consideration. In addition, in oral comments received on July 2, 2015, the Corps' point of contact on the engagement provided technical comments for our consideration. We incorporated these technical comments as appropriate. We are sending copies of this report to the appropriate congressional committees; the Secretaries of Agriculture, Commerce, Defense, and the Interior; and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or fennella@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix I. In addition to the individual named above, key contributors to this report included Vondalee R. Hunt (Assistant Director), Michael Armes, Cheryl Arvidson, Kendall Childers, Christopher Currie, Cindy Gilbert, Emily Pinto, Holly Sasso, Jeanette Soares, Colleen Taylor, and Patrick Ward.
The Corps plans, designs, and constructs water resources infrastructure, such as dams and levees. According to the U.S. Global Change Research Program, the frequency and intensity of extreme weather events are increasing. Much of the Corps' infrastructure was built over 50 years ago and may not be designed to operate within current climate patterns, according to the U.S. Geological Survey. The Water Resources Reform and Development Act of 2014 included a provision for GAO to study the Corps' management of water resources in preparation for extreme weather. This is the first in a series of reports GAO is issuing on this topic. GAO's other reports, which it plans to issue in fiscal year 2016, will examine operations and dam and levee safety. This report explores (1) how the Corps prepares for and responds to extreme weather events in its planning and operation of water resources projects, and (2) the extent to which the Corps has assessed whether existing water resources infrastructure is prepared for extreme weather events. GAO reviewed Corps guidance on planning, operations, and assessments, and interviewed Corps officials from headquarters and eight districts, selected, in part, based on the number of projects. The U.S. Army Corps of Engineers (Corps) considers the potential impact of extreme weather events in its planning and operations of water resources infrastructure projects by, among other things, updating and developing guidance on how to incorporate different extreme weather scenarios in its planning of projects. For example, in 2014, the Corps issued guidance on how to evaluate the effects of projected future sea level change on its projects and what to consider when adapting projects to this projected change. In addition, Corps districts prepare water control manuals, guidance outlining project operations. The Corps can approve deviations from the manuals to alleviate critical situations, such as extreme weather events. For example, in December 2014, the Corps approved a deviation from operations at a southern California dam, which allowed the Corps to retain rainwater to help respond to the state's extreme drought conditions. The Corps has assessed certain water resources infrastructure projects to determine whether they are designed to withstand extreme weather events. Specifically, the Corps has national programs in place to perform risk assessments on dams and levees, as required by law. Unlike the requirements for dams and levees, the Corps is not required to perform systematic, national risk assessments on other types of existing infrastructure, such as hurricane barriers and floodwalls, and has not done so (see table). However, the Corps has been required to assess such infrastructure after an extreme weather event in response to statutory requirements, as it did in November 2013 and in January 2015, after Hurricane Sandy. Also, the Corps has performed initial vulnerability assessments for sea level rise on its coastal projects and has begun conducting such assessments at inland watersheds. Unlike federal agencies that have budgets established for broad program activities, most Corps civil works funds are appropriated for specific projects. However, the Corps has not worked with Congress to develop a more stable funding approach, as GAO recommended in September 2010, which could facilitate conducting risk assessments. The Corps partially concurred with this recommendation, stating that it would promote efficient funding.
As the frequency and intensity of some extreme weather events are increasing, without performing systematic, national risk assessments on other types of infrastructure, such as hurricane barriers and floodwalls, the Corps will continue to take a piecemeal approach to assessing risk on such infrastructure. GAO previously recommended that the Corps work with Congress to develop a more stable funding approach. The Corps has not taken action, but GAO continues to believe the recommendation is valid. Agencies had no comments on a draft of this report.
To obtain information on DOE's contractor protective forces, we visited three of the sites with enduring Category I SNM missions—Pantex, the Savannah River Site, and Los Alamos National Laboratory—and met with protective force contractors, federal site office officials, and protective force union representatives at these sites. We selected these sites because each represented one of the three different types of protective force contracts currently in place. In addition, we distributed a data collection instrument to protective force contractors and federal site office officials at each of these sites and at the other three sites with enduring Category I SNM missions—Y-12, the Nevada Test Site, and the Idaho National Laboratory. From this instrument, we received site information about the protective forces, the status of TRF and DBT implementations, views on DOE options for managing the protective forces, and the reliability of site data. We conducted interviews and reviewed documents with NNSA and DOE's offices of Environmental Management (EM), Nuclear Energy (NE), and Science. We also met with several organizations within DOE's Office of Health, Safety and Security (HSS), including the Office of Policy; the Office of Independent Oversight, which regularly performs inspections at Category I SNM sites; and the National Training Center, in Albuquerque, New Mexico, which is responsible for developing protective force training curricula and certifying site protective force training instructors and programs. To obtain comparative information on OST and its federal agents, we reviewed documents and met with officials from OST headquarters in Albuquerque, New Mexico. All data collected to describe contractor protective forces and OST federal agents were current as of September 30, 2008. To identify and assess options for more uniform protective force management through federalization, we met with the NNSA Service Center in Albuquerque, New Mexico, and the Office of Personnel Management (OPM) on the cost and job classification of protective forces. We developed criteria for options for more uniform management by reviewing past and ongoing DOE protective force and federal agent studies that HSS, NNSA, and OST provided. We also reviewed documents and met with officials from the National Council of Security Police, which is a coalition of unions that represent many of the protective forces at DOE's Category I SNM sites. We conducted our work from April 2008 to January 2010 in accordance with generally accepted government auditing standards, which require us to plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. DOE's HSS, which is the department's central security organization, is responsible for developing the department's security policies and overseeing their implementation. Specifically, HSS's Office of Policy develops and promulgates orders and policies, such as the DBT policy, as well as manuals, such as Manual 470.4-3A, Contractor Protective Forces, which details protective forces' duties and requirements. Other DOE organizations with diverse program missions—EM, NE, and NNSA—are responsible for the six DOE sites in our review with enduring Category I SNM missions.
In accordance with DOE policy, EM, NE, and NNSA must ensure that each of their sites has a safeguards and security program with the protections necessary to guard security interests against malevolent acts such as theft, diversion, sabotage, modification, compromise, or unauthorized access to nuclear weapons, nuclear weapons components, special nuclear material, or classified information. Each of these DOE organizations has site offices staffed by federal employees located at or near each site to oversee day-to-day operations, including security. The management and operations (M&O) contractors that manage the six sites we reviewed must develop effective programs to address DOE security requirements. In particular, each site with Category I SNM must prepare a Site Safeguards and Security Plan, which is a classified document that identifies known vulnerabilities, risks, and protection strategies for the site. The site's protection measures are developed in response to site-specific vulnerability assessments and become the basis for executing and reviewing protection programs. Table 1 highlights some of the site differences in mission, topography, and size that may dictate the site-specific protection measures, including the protective forces' size and equipment. Protective forces are one of the key elements in DOE sites' layered "defense-in-depth" protective systems. Specific elements vary from site to site but almost always include, in addition to protective forces, a variety of integrated alarms and sensors capable of detecting intruders; physical barriers, such as fences and antivehicle barriers; numerous access control points, such as turnstiles, badge readers, vehicle inspection stations, radiation detectors, and metal detectors; operational security procedures, such as the "two-person" rule, which is designed to prevent any single person from having access to SNM; and hardened facilities and storage vaults to protect SNM from unauthorized access. Increasing security at DOE sites since the terrorist attacks of September 11, 2001, has been costly and challenging. The complexwide funding for protective forces and physical security systems rose almost 60 percent (in constant dollars) from fiscal years 2001 through 2008, to $862 million. Protective forces—the single most costly element of DOE security, as well as one of the most important—have been a major focus of DOE security efforts. The need to increase security at DOE sites as rapidly as possible following the 2001 attacks meant that DOE protective forces worked large amounts of overtime for an extended period. DOE's Inspector General and Office of Independent Oversight, as well as GAO, reported on the potential for extended overtime to increase fatigue and reduce readiness and training opportunities for protective forces. Since then, DOE has sought to control protective force costs by increasing the use of security technology and advanced weaponry and by consolidating material into fewer and better protected locations. Since September 11, 2001, DOE security policies, including the DBT, have been under almost constant examination and have undergone considerable change. For example, DOE issued new DBTs in 2003, 2004, and 2005, and, most recently, in November 2008. In its latest iteration, the DBT was renamed the Graded Security Protection (GSP) policy. The GSP is conceptually identical to DOE's previous DBTs.
However, compared with the 2005 DBT, the GSP identifies a generally smaller and less capable terrorist adversary force for DOE sites with Category I SNM. DOE has also sought to increase the tactical effectiveness of protective force performance. Specifically, according to a 2004 classified DOE review, the then-current organization and tactics of DOE protective forces needed improvement to deal with possible terrorist threats. The review found that, historically, DOE protective forces had been more concerned with a broad range of industrial security and order-keeping functions than with preparing to conduct a defensive battle against a paramilitary attacker, as described in DOE's previous DBTs and the GSP. To address this situation, the review recommended shifting to an aggressive, militarylike, small-unit, tactical defense posture, which included enhanced tactical training standards to allow protective forces to move, shoot, and communicate effectively as a unit in a combat environment. It also recommended more frequent, realistic, and rigorous force-on-force performance testing and training for the department's protective forces. On the basis of this review, DOE has sought to transform the protective forces who safeguard special nuclear material into an "elite force"—a TRF—with training and capabilities similar to those of military units. To create TRFs at Category I SNM sites, DOE's 2005 policy for protective forces clarified which positions required more demanding physical fitness and firearms qualification standards, increased tactical training, and reorganized protective forces into tactically cohesive units. Although DOE and NNSA considered federalizing the contractor protective forces to better support the TRF, the department's reviews of this issue predate its post-September 11, 2001, concerns. Since the early 1990s, the department has intermittently considered federalization because of a variety of security challenges, often involving actual or potential strikes by contractor protective forces: A 1992 DOE review concluded there was no clear evidence that federalization of protective forces would significantly save costs or improve security. DOE reviewed the issue of federalization in response to a 1990 GAO report that examined a protective force strike at Los Alamos National Laboratory in 1989. A 1997 DOE report raised concerns about the potential deterioration of an aging protective force's physical and combat capabilities; the increasing difficulties in meeting the sudden demand for additional personnel in the event of a strike; and cost pressures, such as more overtime pay after the department had downsized the protective forces. The report considered federalization as a solution but recommended other options using existing contractor protective forces. A 2004 DOE study group, examining ways to strengthen DOE's security posture after September 11, 2001, recommended federalization to better support tactical forces and to promote uniform, high-quality security across sites, but the department did not implement the recommendation. Two 2008 NNSA studies, which followed the 2007 strike at the Pantex Plant, compared contractor and federalized options for improving protective forces, but these studies did not make any firm recommendations. In 2009, partly in response to a union coalition calling for federalization, NNSA and DOE's HSS started protective force initiatives to address some of the goals that federalization was meant to accomplish, such as improving efficiency and effectiveness.
Contractor protective forces—including 2,339 unionized officers and their 376 nonunionized supervisors—are not uniformly managed, organized, staffed, trained, equipped, or compensated across the six DOE sites. These differences occur because protective forces operate under separate contracts and collective bargaining agreements at each site and because of DOE's long-standing contracting approach of defining desired outcomes instead of detailed, prescriptive guidance on how to achieve those outcomes. As we have previously reported, DOE's contract model may allow security to be closely tailored to site- and mission-specific needs. As of September 30, 2008, protective forces at the six sites we reviewed operated under the following three separate types of contracts: Direct contract with DOE. At Y-12, Nevada Test Site (NTS), and Savannah River Site (SRS), NNSA and DOE contract directly with private firms to provide protective forces. These contracts are separate from NNSA's and DOE's contracts with the site M&O contractors. Protective force managers report to officials from federal site offices. To coordinate site operations and protective force operations, managers from the M&O contractors meet regularly to discuss issues with managers from the protective force and site office. Within the M&O contract. For two sites, Pantex Plant (PX) and Idaho National Laboratory (INL), the M&O contractors provide the protective forces. The M&O contractor directly manages the protective forces, and DOE's or NNSA's site office oversees the protective force operations as part of the overall M&O contract. Subcontract to the M&O contractor. At Los Alamos National Laboratory (LANL), the M&O contractor subcontracts the protective force operations. The protective force manager reports to and is overseen by the M&O contractor. Since NNSA has no direct contractual relationship with the protective force manager, NNSA site office managers coordinate oversight direction through the M&O contractor. Protective force contractors at the six DOE sites have a management and support structure that includes training and physical fitness, human relations, legal and contract services, and procurement. Each protective force also has uniformed supervisors who are not part of the protective forces' collective bargaining agreements. The duties, responsibilities, and ranks of these supervisors are generally site specific and not detailed in DOE's protective force policies. According to DOE's 2008 policy in Manual 470.4-3A, Contractor Protective Force, protective forces are composed of unarmed and armed positions. Security Officers (SO) are responsible for certain unarmed security duties, such as checking for valid security badges at entrances and escorting visitors. Security Police Officers (SPO), who are armed, are divided into three main categories: SPO-I: Primary responsibility is protecting fixed posts during combat. SPO-II: Primary responsibility is mobile combat to prevent terrorists from reaching their target but can also be assigned to fixed posts. SPO-III: Primary responsibilities are mobile combat and special response skills, such as those needed to recapture SNM (on site) and recover SNM (off site) if terrorists succeed in acquiring it. SPO-IIIs are usually organized into special response teams. As shown in table 2, the number of personnel and composition of protective forces vary considerably across sites. It should be noted that three sites—INL, LANL, and NTS—had few or no SPO-Is as of September 30, 2008.
At that time, not all sites had incorporated this position into their collective bargaining agreements. In the interim, some SPO-IIs were performing SPO-I-type duties at these sites. DOE policy mandates certain protective force training but allows sites some flexibility in its implementation. For example, DOE Manual 470.4-3A requires newly hired protective forces to complete the Basic Security Police Officer Training course, which the sites tailor to meet their specific needs. The site-specific courses range in length from 9 to 16 weeks. Other required training includes annual refresher training in a wide variety of topics; tactical exercises, including force-on-force exercises; physical fitness training; and firearms training. The content and frequency of this training varies by site and, to some extent, by type of protective force, with SPO-IIIs generally receiving more training than other protective forces because of their special response mission. To ensure some degree of equivalency, DOE's National Training Center assesses sites' training plans and, while most sites perform their own training, the National Training Center certifies instructors. Some training requirements are driven by the type of protective force equipment, such as firearms and vehicles, that is used at each site. The primary protective force weapon at most sites is the M4 rifle, a weapon that is widely used in the U.S. military. Other weapons, such as belt-fed machine guns, are generally versions of the M240 and M249 family, also widely used in the U.S. military. However, sites have variously adopted other equipment, including the following: three models of handguns with two different calibers of ammunition; four types of grenade launchers, although all use 40mm grenades; several types of precision rifles, capable of accurate long-range fire, in three different calibers; and several different armored vehicles, although older vehicles are being replaced by a single type of vehicle across the six sites. Pay for protective forces varies based on the site and the category of protective force. Table 3 shows that top pay, as negotiated in collective bargaining agreements at each site, ranged from nearly $19 per hour to over $26 per hour. SOs received the lowest hourly pay, and SPO-IIIs received the highest. Overtime pay, accrued in different ways at the sites, and other premium pay, such as additional pay for night shifts and holidays, may significantly increase protective force pay. Table 4 shows the types of benefits by site. While all employers contributed to active protective force members' medical, dental, and life insurance benefits, they differed in the amount of their contributions and in the retirement benefits they offered. In general, new hires were offered defined contribution plans, such as a 401(k) plan, that provide eventual retirement benefits that depend on the amount of contributions by the employer or employee, as appropriate, as well as the earnings and losses of the invested funds. At the time of our review, two sites offered new hires defined benefit plans that promised retirees a certain monthly payment at retirement. Two other sites had defined benefit plans that covered protective force members hired before a particular date but were not open to new hires. A coalition of unions has expressed its preference for defined benefit plans. Sites are at different stages in the implementation of TRF requirements.
However, TRF implementation, coupled with broader DOE efforts to limit postretirement and pension liabilities, has raised concerns among DOE security officials, protective force contractors, and protective force unions about the longevity of protective forces’ careers and the adequacy of their personnel systems. DOE has identified the following important TRF requirements for protective forces: Improved tactical skills, so that protective forces “move, shoot, and communicate” as a unit. To better facilitate tactical training to meet a sophisticated terrorist attack, TRF calls for the development and implementation of TRF training curricula as well as the creation of training relief elements or shifts to allow protective forces to participate in unit-level training. Revised application of DOE’s offensive and defensive combatant standards for protective forces. DOE’s offensive combatant standard is more demanding than its defensive combatant standard. Nevertheless, prior to TRF, SPO-IIs hired before 2000 were allowed to meet only DOE’s less demanding defensive combatant standard while retaining their SPO-II designation and filling some offensive combatant positions. TRF policy eliminated this approach, known as “grandfathering,” and restricted protective force members who meet only defensive combatant standards to SPO-I positions. That is, SPO-IIs who did not meet offensive combatant standards would be moved into SPO-I positions. Career longevity plans to assist with the shift to the new application of offensive and defensive combatant standards. TRF mandates that all newly hired protective forces meet DOE’s more demanding offensive combatant standard as SPO-IIs. Protective force members may advance to the SPO-III level, which requires qualifying at a higher level of firearms proficiency. However, under TRF policy, members who cannot maintain their current standards—perhaps as their years of service accumulate and they age—may “fall back” by applying for open protective force positions with less demanding standards. For example, protective force members may move from meeting offensive combatant standards to defensive combatant standards or to unarmed SO positions, although they may lose pay with each “fall back.” Table 5 summarizes the physical fitness, firearms, and medical qualifications protective forces must meet under DOE’s combatant standards. One site we visited had implemented most of TRF’s key elements. Since 2005, this site has constructed new training facilities, implemented a training cadre that allows unit-sized tactical training, increased the amount of tactical training its protective forces receive, and integrated protective force plans with other security elements and response plans. As of September 30, 2008, three sites were still using an older job classification of SPO-II (that is, allowing a defensive combatant standard rather than the offensive combatant standard), which is not a TRF classification. In addition, while some sites have created unarmed security officer positions to provide fallback positions for protective force members who can no longer meet DOE’s defensive combatant standard, there are relatively few unarmed positions (110—less than 5 percent—of the protective forces at the sites we reviewed), and some of these positions, according to a protective force contractor official and a union representative, were eliminated for budgetary reasons.
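The “fall back” progression described above is, in effect, an ordered ladder of standards. The following minimal sketch illustrates that logic; the function name, ladder ordering, and boolean inputs are our own illustrative assumptions, not DOE policy text. Actual reassignment depends on open positions and collective bargaining rules, and SPO-III additionally requires a higher level of firearms proficiency, which the sketch folds into the offensive standard.

    # Illustrative sketch (not DOE policy text) of the TRF "fall back" ladder.
    # Positions are ordered from most to least demanding standard; unarmed SO
    # duty requires neither combatant standard.
    FALLBACK_LADDER = ["SPO-III", "SPO-II", "SPO-I", "SO"]

    def fallback_position(current: str, meets_offensive: bool, meets_defensive: bool) -> str:
        """Return the first position at or below `current` whose standard is still met."""
        requirement_met = {
            "SPO-III": meets_offensive,  # simplification: also requires higher firearms proficiency
            "SPO-II": meets_offensive,
            "SPO-I": meets_defensive,
            "SO": True,
        }
        start = FALLBACK_LADDER.index(current)
        for position in FALLBACK_LADDER[start:]:
            if requirement_met[position]:
                return position
        return "SO"

    # An SPO-II who no longer meets the offensive standard but still meets the
    # defensive standard falls back to SPO-I:
    print(fallback_position("SPO-II", meets_offensive=False, meets_defensive=True))  # SPO-I

Each step down this ladder may carry a pay cut, which is why the small number of unarmed SO positions noted above matters for career longevity.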
We also found that TRF training was not uniform across the six sites: DOE’s National Training Center piloted a more tactically oriented basic training course (Tactical Response Force - 1) at one site in 2008, but according to a National Training Center official, this class will not replace its existing multiweek Basic Security Police Officer Training course for newly hired SPO-IIs until later in 2010. All sites have increased the amount of tactical training for protective forces but have been separately developing courses and training facilities. Some sites had purchased and deployed advanced weapons but had not adequately trained their protective forces to use these weapons and had not integrated them into their response plans, according to DOE’s Inspector General and DOE’s Office of Independent Oversight. In 2007, DOE’s Inspector General reported that one site’s training program for the use of a weapon that was key to the site’s security strategy did not provide protective forces with the knowledge, skills, and abilities to perform assigned tasks. A follow-up inspection in 2008 found similar problems at several other sites. According to an NNSA official, NNSA sites did not receive dedicated TRF training funds until fiscal year 2009. Also, according to NNSA’s fiscal year 2010 budget submission, NNSA does not expect its sites to complete TRF activities until the end of fiscal year 2011. Since its inception in 2005, TRF has raised concerns in DOE security organizations, among protective force contractors, and in protective force unions about the ability of protective forces—especially older members—to continue meeting DOE’s weapons, physical fitness, and medical qualifications. As we reported in 2005, some site security officials recognized that they would have to carefully craft transition plans for currently employed protective force officers who might not be able to meet the new standards required for an elite force, which is now known as TRF. Adding to these concerns are DOE’s broader efforts to manage its long-term postretirement and pension liabilities for its contractors, which could have a negative impact on retirement eligibility and benefits for protective forces. In 2006, DOE issued its Contractor Pension and Medical Benefits Policy (Notice 351.1), which was designed to limit DOE’s long-term pension and postretirement liabilities. A coalition of protective force unions stated that this policy moved them in the opposite direction from their desire for early and enhanced retirement benefits. These concerns contributed to the 44-day protective force strike at the Pantex Plant in 2007. Initially, the site designated all of its protective force positions as offensive positions, a move that could have disqualified a potentially sizable number of protective force members from duty. Under the collective bargaining agreement that was eventually negotiated in 2007, some protective forces are allowed to meet the less demanding defensive combatant standards. DOE has also rescinded its 2006 Contractor Pension and Medical Benefits Policy. However, according to protective force union officials, tensions over TRF implementation and retirement benefits remain driving forces behind the unions’ push to federalize. With the issuance of the new GSP policy in August 2008, most sites ceased 2005 DBT implementation efforts. However, unlike its practice with previous DBTs, DOE did not establish a deadline for GSP implementation.
While sites study GSP requirements and develop implementation plans, the GSP directs that they continue to meet the requirements of the 2003 DBT. Under the 2003 DBT, most DOE sites are required to maintain denial protection strategies for Category I SNM. Under these strategies, DOE requires that adversaries be denied “hands-on” access to nuclear weapons and nuclear test devices at fixed sites, as well as to all Category I SNM in transit. For other Category I SNM at fixed sites, DOE requires that adversaries be prevented from having enough time to complete malevolent acts. If adversaries gain access to Category I SNM, DOE requires that protective forces engage in recapturing the SNM on site or recovering the material if it leaves the site. As required by the Fiscal Year 2006 National Defense Authorization Act, DOE reported to Congress in 2007 that all its sites could meet the 2003 DBT. To verify the information DOE reported, we examined whether the sites had approved Site Safeguards and Security Plans and whether they had undergone an Office of Independent Oversight inspection to test those plans. We found that all sites (except for the one DOE site that had implemented the 2005 DBT) had approved Site Safeguards and Security Plans for the 2003 DBT, and almost all had undergone inspections by the Office of Independent Oversight to test those plans. In most cases, protective forces performed effectively in these inspections. However, in a 2008 inspection, one site’s protective forces received a “needs improvement” rating—that is, they only partially met identified protection needs or provided questionable assurance that those needs were met. Although both are responsible for protecting SNM, OST federal agents differ substantially from site protective forces in terms of numbers, organization, management, pay, benefits, mission, and training: OST forces totaled 363 as of September 30, 2008, or less than one-seventh the total number of protective force members at DOE sites with enduring Category I missions. OST forces are geographically dispersed, but unlike protective forces, they are centrally managed. OST operations are organized into three commands, which are collocated at two DOE sites and a Department of Defense military base. These commands report to a central command in Albuquerque, New Mexico, which is under a single organization, NNSA. In contrast, the protective forces at the six sites have decentralized management and are overseen by one of three DOE organizations. Federal managers directly operate the OST organization and supervise federal agents. Unlike protective forces, OST federal agents cannot collectively bargain and are covered by a single pay system. Effective in March 2008, NNSA’s Pay Band Demonstration is a pay system for most NNSA federal employees—including OST federal agents. Table 6 shows the differences between the protective forces’ many negotiated pay rates and the nonsupervisory federal agents’ single pay band, which is linked to federal pay grades that are established governmentwide. In addition, while OST’s pay system is designed for more flexible pay, protective forces’ pay rules generally do not provide for any variation in a position’s pay rate after a few years of service.
Specifically, OST agents’ pay rates can vary more when they are hired and in later years because the NNSA pay system is designed to give OST managers more flexibility to offer exceptional candidates higher entry salaries and to provide faster or slower annual pay progression, depending on individual performance. In contrast, fixed pay rules allow a contracted SPO to start at the top pay rate or to reach or closely approximate it after only about 1 to 3 years of service. However, as table 6 shows, both protective forces and federal agents receive significantly higher pay for overtime hours. Concerning benefits, OST federal agents generally receive those that are broadly available to other federal employees, such as through the Federal Employees Health Benefits program and the Federal Employees Retirement System (FERS), which has a defined benefit component and a defined contribution component. In contrast, at each site, protective force unions negotiate for benefits such as medical insurance and retirement plans, and new hires in protective forces generally do not receive defined benefits for retirement. In addition, in 1998, Congress made OST federal agents eligible to retire earlier (at age 50 after 20 years of service) with a higher monthly retirement annuity (defined benefit) than is typical for other federal employees. This early retirement provision contrasts with the provisions for the two defined benefit plans open to new protective force hires as of September 2008, which provide for retirement with more years served or at older ages. OST federal agents’ mobile mission also differs significantly from that of protective forces that guard fixed sites. OST agents operate convoys of special tractor-trailers and special escort vehicles to transport Category I SNM. These agents travel on U.S. highways that cross multiple federal, state, tribal, and local law enforcement jurisdictions. They also travel as many as 15 days each month. Agents may also provide security for weapons components that are flown on OST’s small fleet of aircraft. In contrast to the public setting of agents’ work, protective forces that guard Category I SNM at fixed sites typically operate behind elaborate physical defenses with tightly restricted and monitored public access. Finally, the training for OST federal agents and protective forces differs. Although both OST and protective force contractors must comply with DOE orders and regulations when developing and executing training, OST agents undergo longer, more frequent, and more diverse training than do most protective forces. For example, newly hired OST trainees undergo longer basic training, lasting 21 weeks, at OST’s academy in Fort Chaffee, Arkansas. To operate OST’s fleet of vehicles, federal agents must also complete the requirements for a commercial driver’s license. In addition, all agents must meet DOE’s offensive combatant standard throughout their careers. Overall, OST officials estimate that OST federal agents spend about a third of their time in training, which, according to an NNSA official, is much more than most contractor protective forces spend. Much of the training is tactically oriented, and OST convoy elements are organized into tactical units. In the performance of their official duties, both protective forces and OST federal agents have limited arrest authority for a variety of misdemeanors and felonies, though neither routinely exercises this authority.
Both protective forces and OST federal agents are also authorized to use deadly force to protect SNM and may pursue intruders in order to prevent their escape and to arrest those they suspect have committed certain misdemeanors or felonies or have obtained unauthorized control of SNM. DOE’s Federal Protective Force manual (DOE M 470.4-8) and DOE’s Contractor Protective Force manual (DOE M 470.4-3A) set guidelines and direct DOE sites to develop policies for using deadly force and for fresh pursuit, which involves pursuing suspected criminals who flee across jurisdictional boundaries, such as leaving the property of a DOE site. These actions include developing memorandums of understanding that establish, among other things, fresh pursuit guidelines with other law enforcement agencies. DOE protective forces and OST federal agents have limited authority to make arrests for specific misdemeanors and felonies, such as trespassing on, or the theft or destruction of, federal property. Other offenses against government property subject to arrest include sabotage, civil disorder, conspiracy, and the communication of or tampering with restricted data. For the covered misdemeanors and felonies, protective forces and OST federal agents have authority to arrest if they observe the offenses while they are performing their official duties; for the covered felonies, they may also make arrests if they have reasonable grounds to believe that the person has committed a felony. If other federal law enforcement agencies, such as the Federal Bureau of Investigation (FBI), are involved in the apprehension of suspected criminals, even on DOE property, protective forces and OST federal agents must relinquish arresting authority to the other federal agencies. While both protective forces and OST federal agents receive initial and annual refresher training in law enforcement authorities and duties, we found that protective forces at the six sites last made an unassisted arrest using their federal authority more than 25 years ago. The protective forces at Pantex arrested nine individuals, six in 1981 and three in 1983, for trespassing on site property. In both instances, the offenders were convicted and sentenced to a federal detention facility. According to OST officials, federal agents do not routinely make arrests because they have not encountered individuals attempting to steal SNM from their shipments, which is the focus of their legal concerns. In addition to the limits on their authority, protective forces do not routinely use their federal authority to make arrests for several other reasons. First, one contractor site official told us, federal courts, which have jurisdiction over all arrests made by protective forces using their federal authority, are reluctant to pursue what may be considered minor cases associated with a DOE site. Instead, this official said, the site had more success prosecuting crimes in state and local courts. In these cases, arrests are made by local and state law enforcement agencies. Second, DOE security officials told us that sites may be concerned about the legal liability of using contractor employees to make arrests and the potential lawsuits that could ensue. Finally, both DOE and site contractor officials told us that routine law enforcement duties may distract protective forces from performing their primary duty to protect Category I SNM.
Rather than make arrests when witnessing possible crimes, protective forces may gather basic facts, secure the crime scene, and notify management, which decides whether to refer the matter to local law enforcement agencies, DOE’s Inspector General, the U.S. Marshals Service, or the FBI for arresting and transporting suspects. However, we could not determine how often the forces take these actions because sites do not typically document detentions or have facilities in which to hold detainees. While protective forces and OST federal agents seldom use their federal arrest authority, protective forces have used other legal authorities to make arrests. For example, specially designated protective force officers at the Savannah River Site are authorized under South Carolina law to make arrests and investigate crimes. The SRS protective force includes 26 Special State Constables (about 5 percent of SRS’s total protective force) who have state law enforcement jurisdiction on the 310-square-mile SRS complex, which spans three counties and includes public highways. These officers wear special uniforms and drive specially marked vehicles. In addition, they must complete and maintain state law enforcement qualification requirements in order to retain their state law enforcement authority. This additional authority, according to SRS officials, allows the remaining protective force personnel to focus on the other aspects of the site’s national security mission. To manage its protective forces more effectively and uniformly, DOE has considered two principal options—improving elements of the existing contractor system or creating a federal protective force. We identified five major criteria that DOE, protective force contractors, and union officials have used to assess the advantages and disadvantages of these options. Overall, in comparing these criteria against the two principal options, we found that neither contractor nor federal forces seem overwhelmingly superior; rather, each has offsetting advantages and disadvantages. Either option could result in effective and more uniform security if well-managed. However, we identified transitional problems with converting the current protective force to a federalized force. Furthermore, while DOE has sought to improve protective force management by reforming protective forces, this effort is still at an early stage, and budgetary limitations may constrain some changes. Table 7 summarizes the five criteria that DOE, protective force contractors, and union officials have used to discuss whether to improve the existing contractor system or federalize protective forces, as well as associated issues or concerns. Evaluating the two principal options against these criteria, we found that, for several reasons, either contractor or federal forces could result in effective and more uniform security if the forces are well-managed. First, both options—maintaining the current security force structure or federalizing the security force—have offsetting advantages and disadvantages, with neither option emerging as clearly superior.
For example, one relative advantage of a contractor force is the perceived greater flexibility for hiring, disciplining, or terminating an employee; one relative disadvantage of a contractor force is that it can strike. In contrast, federalization could better allow protective forces to advance or transfer laterally to other DOE sites to meet protective force members’ needs or DOE’s need to resize particular forces. Second, key disadvantages, such as potential strikes, do not preclude effective operations if the security force is well-managed. According to one protective force manager, a well-managed protective force is less likely to strike. In addition, a 2009 memo signed by the NNSA Administrator stated that NNSA had demonstrated that it can effectively manage strikes through the use of replacement protective forces. With respect to federal protective forces, a 2004 department work group on protective force issues observed that even federal operations like OST had experienced difficult labor-management relations that had to be carefully managed in order to ensure effective performance. Third, as can be seen in the following examples, distinctions between the two options, each of which could have many permutations, can be overstated by comparing worst- and best-case scenarios, when similar conditions might be realized under either option. While federalization might improve effectiveness and efficiency by driving standardization, NNSA recently announced initiatives to increase standardization among contractor protective forces to achieve some of the same benefits, including cost savings. Federalization could potentially provide early and enhanced retirement benefits, which could help to ensure a young and vigorous workforce. However, such benefits might also be provided to contractor protective forces. Although more centralized federal control might impede both protective forces’ support of a site’s operations and the coordination between contractors and federal managers, this concern presumes a scenario in which the department would choose a highly centralized organization, whereas it might delegate responsibility for day-to-day operations to its site managers. Either option could be implemented with more or less costly features. For example, adding early and enhanced retirement benefits would increase costs for either contractor or federal protective forces. Reliably estimating the costs of protective force options proved difficult and precluded detailed reporting on costs for two broad reasons. First, since contractor and federal forces could each have many possible permutations, choosing any particular option to assess would be arbitrary. For example, a 2008 NNSA-sponsored study identified wide-ranging federalization options, such as federalizing all or some SPO positions at some or all facilities or reorganizing them under an existing or a new agency. Second, DOE will have to decide on the hypothetical options’ key cost factors before it can reasonably compare costs. For example, when asked about some key cost factors for federalization, an NNSA Service Center official said that a detailed workforce analysis would be needed to decide whether DOE would either continue to use the same number of SPOs with high amounts of scheduled overtime or hire a larger number of SPOs who would work fewer overtime hours.
Also, the official said that until management directs a particular work schedule for federalized protective forces, there is no definitive answer to the applicable overtime rules, such as whether overtime begins after 8 hours in a day. The amount of overtime and the factors affecting it are crucial to a sound cost estimate because overtime pay can now account for up to about 50 percent of pay for worked hours. If protective forces were to be federalized under existing law, the current forces might face a loss of pay or even their jobs. OPM told us that legislation would be required to provide these federalized protective forces with early and enhanced retirement benefits. However, provisions associated with these benefits could create hiring and retirement difficulties for current older members of the protective forces. According to officials at OPM and NNSA’s Service Center, if contractor SPOs were federalized under existing law, they would likely be placed into the security guard (GS-0085) federal job series. Although a coalition of unions has sought federalization to allow members to have early and enhanced retirement benefits, which allow employees in certain federal jobs to retire at age 50 with 20 years of service, security guards under the GS-0085 job series are not eligible for these benefits. Under the applicable rules for federal security guards, transitioning protective forces would not become eligible to retire with immediate federal annuities until at least age 55, and only after accruing sufficient years of federal service. For example, transitioning protective forces could begin receiving a federal annuity at age 62 with 5 years of service or, with reduced benefits, at age 55 to 57 (depending on birth year) with 10 years of service. In addition, transitioning force members may receive lower pay as federal employees, according to our analysis of two tentative federal pay levels for protective force positions at SPO levels I, II, and III. As of September 30, 2008, contractor top pay rates were generally higher than the top rates for the applicable federal General Schedule (GS) pay grades. Only SPO-III positions at three sites and SPO-II positions at one site could have their top rates potentially matched by 2008 federal rates, but only under certain assumptions. Also, to reach federal pay rates that better approximate the contractor rates, transitioning contractor protective forces might have to wait many years. While most collective bargaining agreements allow protective forces to reach a position’s top pay rate after 3 years or fewer, federal guards could take much longer because the 10 steps within a GS pay grade have progressively longer periods of service between incremental increases. This step progression means reaching the top of a pay grade (step 10) could take up to 18 years. Finally, if protective forces are federalized, OPM officials told us that current members would not be guaranteed a federal job. According to those officials, current members would have to compete for the new federal positions and thus would risk not being hired. Nonveteran members are particularly at risk because competition for federal security guard positions is restricted to those with veterans’ preference, if such candidates are available. According to NNSA Service Center officials, veterans groups would likely oppose any waiver to this hiring preference. Thus, if the protective forces were to be federalized, the department might lose some of its currently trained and experienced personnel.
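The 18-year figure follows from OPM’s standard within-grade increase waiting periods, which require 1 year of service at each of steps 1 through 3, 2 years at each of steps 4 through 6, and 3 years at each of steps 7 through 9 before advancing to the next step. A minimal sketch of that arithmetic follows; it is our illustration rather than an analysis from this review, and it ignores quality step increases and other forms of accelerated advancement.

    # Years that must be served at each GS step before advancing to the next,
    # per OPM's standard within-grade increase waiting periods (quality step
    # increases and other accelerations are ignored in this sketch).
    WAIT_AT_STEP = {1: 1, 2: 1, 3: 1, 4: 2, 5: 2, 6: 2, 7: 3, 8: 3, 9: 3}

    def years_to_top_step(start_step: int = 1, top_step: int = 10) -> int:
        """Cumulative years of service to progress from start_step to top_step."""
        return sum(WAIT_AT_STEP[s] for s in range(start_step, top_step))

    print(years_to_top_step())   # 18 years from step 1 to step 10
    print(years_to_top_step(4))  # 15 years if initially placed at step 4

Against the 3 years or fewer that most collective bargaining agreements require to reach a position’s top contractor rate, this progression illustrates why transitioning members might wait many years to approximate their former pay.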
According to OPM officials, legislation would be required to provide federal protective forces with early and enhanced retirement because their positions do not fit the current definition of law enforcement officers that would trigger such a benefit. For the same reason, DOE had to pursue legislation in 1998 to extend early and enhanced retirement to OST federal agents. OPM had determined that OST federal agents did not meet the definition of law enforcement officer that would have made them eligible for early and enhanced retirement benefits. Consequently, at DOE’s urging, Congress enacted legislation to give OST federal agents the special 20-year retirement provisions. Although a coalition of unions has supported federalization to get early and enhanced retirement benefits, provisions associated with these benefits could create hiring and retirement difficulties for older force members. Older members might not be rehired because agencies are typically authorized to set a maximum age, often age 37, for entry into federal positions with early retirement. In addition, even if there were a waiver from the maximum age of hire, older protective force members could not retire at age 50 because they would have had to work 20 years to meet the federal service requirement for “early” retirement benefits. These forces could retire earlier if they were granted credit for their prior years of service under DOE and NNSA contracts. However, OPM officials told us OPM would strongly oppose federal retirement benefits being granted for previous years of contractor service (retroactive benefits). According to these officials, these retroactive benefits would be without precedent and would violate the basic concept that service credit for retirement benefits is only available for eligible employment at the time it was performed. Moreover, retroactive benefits would create an unfunded liability for federal retirement funds. When the law changed to allow OST federal agents early retirement, these agents were already federal employees, and they received retroactive enhanced credit for service; DOE paid the extra liability (approximately $18 million over 4 years). In a joint January 2009 memorandum, the NNSA Administrator and DOE’s Chief Health, Safety and Security (HSS) Officer rejected the federalization of protective forces as an option and supported the continued use of contracted protective forces—but with improvements. They concluded that, among other things, the transition to a federal force would be costly and would likely provide little, if any, increase in security effectiveness. However, these officials recognized that the current contractor system could be improved by addressing some of the issues that federalization might have resolved. In particular, they announced the pursuit of an initiative to better standardize protective forces’ training and equipment. According to these officials, more standardization serves to increase effectiveness and cost efficiency as well as to better facilitate responses to potential work stoppages. In addition, in March 2009, the Chief HSS Officer commissioned a study group, which included DOE officials and protective force union representatives and had input from protective force contractors, to recommend ways to overcome the personnel system problems that might prevent protective force members from working to a normal retirement age, such as 60 to 65, and building reasonable retirement benefits.
Both of these initiatives might benefit the department and its programs. For example, the initiative to standardize protective forces has started focusing on the inefficiencies arising from having each contractor separately choose and procure security equipment and support services; one identified inefficiency is that smaller separate orders hinder contractors from negotiating better prices. In NNSA’s fiscal year 2010 budget request, NNSA predicted that standardizing procurement and security equipment, such as vehicles, weapons, and ammunition, could save NNSA, cumulatively, 20 percent of its costs for such equipment by 2013. With respect to the career and retirement initiative, the DOE study group reported, among other potential benefits, that improving career incentives for individuals to enter a protective force career and then remain in the DOE security community for a lifetime of service could help the department minimize the significant costs associated with hiring, vetting, and training protective force members. NNSA has established a Security Commodity Team—composed of security and procurement professionals from NNSA, some DOE sites, and other DOE organizations—to focus first on procuring ammunition and identifying and testing other security equipment that can be used across sites. According to NNSA officials, NNSA established a common mechanism in December 2009 for sites to procure ammunition. Another effort will seek greater standardization of protective force operations across sites, in part by HSS or NNSA clarifying protective force policies when sites do not have the same understanding of these policies or implement them in different ways. To move toward more standardized operations and a more centrally managed protective force program, NNSA started a broad security review to identify possible improvements. As one result of this security review, according to NNSA officials in January 2010, NNSA has developed a draft standard for protective force operations, which is intended to clarify policy expectations and establish a consistent security approach that is both effective and efficient. For the personnel system initiative to enhance career longevity and retirement options, in June 2009, a DOE-chartered study group made 29 recommendations that were generally designed to enable members to reach a normal retirement age within the protective force, take another job within DOE, or transition to a non-DOE career. The study group identified 14 of its 29 career and retirement recommendations as involving low- or no-cost actions that could conceivably be implemented quickly. For example, some recommendations seek to ensure that protective force members are prepared for job requirements through expanding fitness and wellness programs and reviewing the appropriateness of training. Other recommendations call for reviews to find ways to maximize the number of armed and unarmed positions that SPOs can fill when they can no longer meet their current combatant requirements. Other recommendations focus on providing training and planning assistance for retirement and job transitions. (All 29 recommendations are described in app. I.) The study group recognized that some of its personnel system recommendations may be difficult to implement, largely because of budget constraints. The study group had worked with the assumption that DOE security budgets would remain essentially flat for the foreseeable future and might actually decline in real dollars.
Nevertheless, it identified 15 of its 29 career and retirement recommendations as challenging because they involve additional program costs, some of which are likely to be substantial, and may require changes to management structures and contracts. For example, to provide some income security when protective force members must take a lower-paying position because of illness, injury, or age, one recommendation would include provisions in collective bargaining agreements to at least temporarily prevent or reduce drops in pay. Among the more challenging recommendations is a call to enhance retirement plans and to make them more equivalent and portable across sites—the types of changes that a coalition of unions had hoped federalization might provide. Progress on the 29 recommendations has been limited to date. When senior department officials were briefed on the personnel system recommendations in late June 2009, they took them under consideration for further action but immediately approved one recommendation—to extend the life of the study group by forming a standing committee. They directed the standing committee to develop implementation strategies for actions that can be done in the near term and, for recommendations requiring further analysis, additional funding, or other significant actions, to serve as an advisory panel for senior department officials. According to a DOE official in early December 2009, NNSA and DOE were in varying stages of reviews to advance the other 28 recommendations. Later that month, NNSA achieved aspects of one recommendation about standardization, in part by formally standardizing protective force uniforms and their cloth shields. In the Conference Report for the fiscal year 2010 National Defense Authorization Act, the conferees directed the Secretary of Energy and the Administrator of the National Nuclear Security Administration to develop a comprehensive DOE-wide plan to identify and implement the recommendations of the study group. Protective forces are a key component of DOE’s efforts to secure its Category I SNM, particularly after the September 11, 2001, terrorist attacks. Since the attacks, DOE has made multiple changes to its security policies, including more rigorous requirements for its protective forces. However, in making these changes, DOE and its protective force contractors, through their collective bargaining agreements, have not successfully aligned protective force personnel systems—which affect career longevity, job transitions, and retirement—with the increased physical and other demands of a more paramilitary operation. Without better alignment, in our opinion, there is greater potential for a strike at a site, and potential risk to site security, when protective forces’ collective bargaining agreements expire. In the event of a strike at one site, the differences in protective forces’ training and equipment make it difficult to readily provide reinforcements from other sites. Even if strikes are avoided, the effectiveness of protective forces may be reduced if tensions exist between labor and management. The potential for a strike and for declines in protective forces’ performance has elevated the importance of finding the most effective approach to maintaining protective force readiness, including an approach that better aligns personnel systems and protective force requirements. At the same time, DOE must consider its options for managing protective forces in a period of budgetary constraints.
With these considerations in mind, DOE and NNSA, to their credit, have recognized that the decentralized management of protective forces creates some inefficiencies and that some systemic career and longevity issues are not being resolved through actions at individual sites. NNSA’s recent standardization initiatives and the 29 recommendations made by a DOE study group in June 2009 offer a step forward. The responsibility lies with DOE, working with protective force unions and contractors, to further develop and implement these initiatives and recommendations. However, if DOE decides not to take meaningful actions or if its actions do not achieve the intended goals, an examination of other options, including the federalization of protective forces, may be merited. To better align protective force personnel policies and systems with DOE’s security requirements for Category I SNM sites, we recommend that the Secretary of Energy promptly develop implementation plans and, where needed, undertake additional research for the DOE study group’s 2009 recommendations to improve career longevity and retirement options for protective force personnel. Specifically, we recommend the Secretary take the following two actions: For actions such as reviewing the appropriateness of training that the study group identified as low or no cost, unless DOE can state compelling reasons for reconsideration, it should develop and execute implementation plans. For actions that may involve substantial costs or contractual and organizational changes, such as enhancing the uniformity and portability of retirement benefits, DOE should plan and perform research to identify the most beneficial and financially feasible options. We provided DOE with a draft of this report for its review and comment. In its written comments for the department, NNSA generally agreed with the report and the recommendations. However, NNSA stated that the report does not sufficiently credit the department for the significant efforts it has taken to address protective force issues. We added some information to the report about the status of the department’s efforts that NNSA provided separately from its comment letter. Nevertheless, we continue to view DOE’s progress on its study group’s 29 recommendations as generally limited to date. The complete text of NNSA’s comments is presented in appendix II. NNSA also provided technical clarifications, which we incorporated into the report as appropriate. OPM also received a draft of this report for review and comment. It chose not to provide formal comments because it said our report fairly and accurately represented the facts and policy issues that OPM had provided to us. We are sending copies of this report to congressional committees with responsibilities for energy issues; the Secretary of Energy; and the Director, Office of Management and Budget. This report is also available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions regarding this report, please contact me at (202) 512-3841 or aloisee@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors are listed in appendix III.
In March 2009, the Department of Energy’s (DOE) Chief Health, Safety and Security (HSS) Officer commissioned a study to examine “realistic and reasonable options for improving the career opportunities and retirement prospects of protective force (PF) members while maintaining, within current and anticipated budgetary constraints, a robust and effective security posture.” Under the leadership of HSS and with input from protective force contractors, a study group was formed consisting of senior leaders of the National Council of Security Police and senior technical staff from the National Nuclear Security Administration, the Office of Environmental Management, the Office of Nuclear Energy, and the Office of Fossil Energy. The study group’s report, Enhanced Career Longevity and Retirement Options for DOE Protective Force Personnel, released on June 30, 2009, included 29 recommendations to overcome the problems that prevent protective force members from working to a normal retirement age and building reasonable retirement benefits. Summaries of these recommendations follow. The study group thought the following 14 recommendations were achievable mostly within existing management structures and anticipated budgetary constraints. 1. PF deployment strategies should be re-examined to ensure that appropriate Security Police Officers’ (SPO) skill sets and response capabilities (e.g., offensive or defensive capabilities) are matched to current response plan requirements in a manner that maximizes reliance on defensive combatants. The intent is to maximize the number of defensive positions that could be filled by personnel who can no longer meet the higher offensive combatant requirements. 2. Anticipated requirements for security escorts and other security-related unarmed positions (including current outsourcing practices) should be reviewed and procedures implemented to maximize work opportunities for unarmed PF members (Security Officers). The intent of this recommendation and the next is to provide positions to be filled by PF members who can no longer meet either the offensive or the defensive combatant standards. 3. Unarmed PF-related work should be identified as part of the career path for PF members. 4. Measures should be adopted to minimize the impact of current physical fitness standards upon career longevity, and these standards should be reviewed against current job requirements. 5. Revisions to current medical requirements should be developed to ensure that existing medical conditions do not represent (given the current state of the medical arts) unreasonable barriers to career longevity. 6. So long as the department expects PF personnel to meet explicit medical and fitness standards, it should provide reasonable means to prepare for testing and evaluation. 7. Existing “fitness/wellness” programs should be expanded to help SPOs maintain and prolong their ability to meet physical fitness requirements and to achieve medical cost savings that result from maintaining a well-managed program. According to the study group, this recommendation is not cost-neutral. 8. Retirement/transition planning should be integrated into PF training. 9. The capabilities of the National Training Center should be used to facilitate career progression and job transition training. 10. PF organizations should be encouraged to appoint “Career Development/Transition” officers to assist personnel in career path and transition planning. 11.
The Human Reliability Program (HRP) monitors employees to ensure they have no emotional, mental, or physical conditions that impede them from reliably conducting their work. Under this program, if a reasonable belief or credible evidence indicates that employees are not reliable, they should be immediately removed from their duties as an interim precautionary measure. The study group recommended taking strong actions to correct HRP administrative errors and to rigorously enforce existing prohibitions against using HRP in a punitive manner. This recommendation and the next arise from a concern that some protective force members may be punished without the opportunity for timely recourse. 12. Contractor policies and actions that lead to placing PF members in nonpaid status without appropriate review or recourse should be closely monitored (and, where necessary, corrected). 13. DOE M 470.4-3A, Contractor Protective Force, should be reviewed to ensure that requirements are supportable by appropriate training. 14. To encourage future communication on the issues considered in this study, the life of the present study group should be extended as a standing committee, and union participation in the DOE HSS Protective Force Policy Panel should be ensured. The study group thought the following 15 recommendations would require currently unbudgeted resources or changes to existing contracts. 15. Existing defined contribution plans should be reviewed in order to identify methods to improve benefits, to ensure greater comparability of benefits from one site to the next, and to develop methods to improve portability of benefits. This recommendation and those through number 19 involve changes to retirement plans that could enhance benefits and allow protective force personnel to transfer benefits more easily when moving to other sites. 16. Consistency in retirement criteria should be established across the DOE complex (e.g., a point system incorporating age and years of service or something similar). 17. The potential for incorporating a uniform cost-of-living allowance into defined benefit retirement programs based on government indexes should be examined. 18. Portability of service credit between PF and other DOE contractors should be explored. This could be directed in requests for proposals for new PF contracts. 19. Potential actions should be explored to create a reasonable disability retirement bridge for PF personnel when alternate job placement is unsuccessful. 20. Job performance requirements (such as firearms proficiency) should be supported by training sufficient to enable PF members to have confidence in meeting those requirements. 21. A retraining fund should be created to assist personnel with job transitions and second careers. 22. A centralized job registry should be established to facilitate identification of job opportunities across the complex. 23. Consideration should be given to sponsoring a student loan program to assist PF members in developing second careers. 24. The department, as a matter of policy and line management procedure, should establish the position that SPOs be considered for job placement within each respective site’s organizational structure before a contractor considers hiring personnel from outside of the site. 25. “Save pay” provisions should be included in collective bargaining agreements to cover specified periods when a PF member must be classified to a lower-paying position because of illness, injury, or aging. 26.
DOE should explore the potential for facilitating partnerships among the various contractor organizations in order to broaden employment opportunities for aging or injured personnel and to encourage PF personnel seeking alternative career paths to actively compete for those opportunities. 27. Where possible, the department should review its separate PF prime contracts and convert them to “total” security and emergency management contracts. The intent of this recommendation is to permit protective force personnel to better compete for emergency management positions when they lack the ability or desire to continue with their security positions. 28. PF arming and arrest authority should be reviewed with the objective of enhancing the capabilities of SPOs. The intent of this recommendation is to, among other things, ease SPOs’ postretirement path into law enforcement positions. 29. Where possible, equipment, including uniforms, weapons, and badges, should be standardized throughout the department. According to the study group, more standardized uniforms might improve protective forces’ morale and could offer some offsetting cost savings for the department. In addition to the contact named above, Jonathan Gill, Assistant Director; John Cooney; Don Cowan; Cindy Gilbert; Terry Hanford; Mehrzad Nadji; Cheryl Peterson; and Carol Herrnstadt Shulman made key contributions to this report. Other contributors include Carol Kolarik, Peter Ruedel, and Robert Sanchez.
The September 11, 2001, terrorist attacks raised concerns about the security of Department of Energy (DOE) sites with weapons-grade nuclear material, known as Category I Special Nuclear Material (SNM). To better protect these sites against attacks, DOE has sought to transform its protective forces protecting SNM into a Tactical Response Force (TRF) with training and capabilities similar to those of the U.S. military. DOE also has considered whether the current system of separate contracts for protective forces at each site provides sufficiently uniform, high-quality performance across its sites. Section 3124 of P.L. 110-181, the fiscal year 2008 National Defense Authorization Act, directed GAO to review protective forces at DOE sites that possess Category I SNM. Among other things, GAO (1) analyzed information on the management and compensation of protective forces, (2) examined the implementation of TRF, and (3) assessed DOE's two options to more uniformly manage DOE protective forces. Over 2,000 contractor protective force members provide armed security for DOE and the National Nuclear Security Administration (NNSA) at six sites that have long-term missions to store and process Category I SNM. DOE protective forces at each of these sites are covered under separate contracts and collective bargaining agreements between contractors and protective force unions. As a result, the management and compensation--in terms of pay and benefits--of protective forces vary. Sites vary in implementing important TRF requirements such as increasing the tactical skills of protective forces so that they can better "move, shoot, and communicate" as a unit. While one site has focused on implementing TRF requirements since 2004, other sites do not plan to complete TRF implementation until the end of fiscal year 2011. In addition, broader DOE efforts to manage postretirement and pension liabilities for its contractors have raised concerns about a negative impact on retirement eligibility and benefits for protective forces. Specifically, protective force contractors, unions, and DOE security officials are concerned that the implementation of TRF's more rigorous requirements and the current protective forces' personnel systems threaten the ability of protective forces--especially older members--to continue their careers until retirement age. Efforts to more uniformly manage protective forces have focused on either reforming the current contracting approach or creating a federal protective force (federalization). Either approach might provide for managing protective forces more uniformly and could result in effective security if well-managed. Although DOE rejected federalization as an option in 2009 because it believed that the transition would be costly and would yield little, if any, increase in security effectiveness, the department recognized that the current contracting approach could be improved by greater standardization and by addressing personnel system issues. As a result, NNSA began a standardization initiative to centralize procurement of equipment, uniforms, and weapons to achieve cost savings. Under a separate initiative, a DOE study group developed a number of recommendations to enhance protective forces' career longevity and retirement options, but DOE has made limited progress to date in implementing these recommendations.
Each year, the President, in consultation with Congress, sets a ceiling on the number of refugees the United States will admit for resettlement in a given year. The presidential ceiling has decreased from about 90,000 in fiscal year 1999 to 80,000 in fiscal year 2009. However, the number of refugees actually entering the United States has increased in recent years compared to the relatively low numbers entering after the terrorist attacks of September 11, 2001. In the aftermath of those attacks, a review of refugee-related security procedures was undertaken, refugee admissions were briefly suspended, and enhanced security measures were implemented. As a result of these and other factors, refugee admissions declined from 68,393 in fiscal year 2001 to 26,383 in fiscal year 2002 and 28,348 in fiscal year 2003. Admissions have since rebounded, gradually increasing to 74,652 in fiscal year 2009. The Refugee Act of 1980 provided a systematic and permanent procedure for admitting refugees to the United States and established comprehensive and uniform provisions to resettle refugees as quickly as possible and to encourage them to become self-sufficient. The Departments of State and Homeland Security handle the first part of the resettlement process by approving and processing refugees overseas. The Department of State then partners with 10 national voluntary agencies to determine where in the United States refugees will live. The national voluntary agencies consider a variety of factors when determining where refugees will live, including placing refugees where they may already have relatives and where the national agencies have offices to meet the needs of the refugees. Voluntary agencies use their network of some 350 affiliates to provide refugees with initial placement services, including meeting refugees at the airport when they first arrive in the United States and providing housing, food, clothing, and other necessities for the first 30 to 90 days. Also, during this time, staff from local voluntary agencies help refugees apply for federal assistance. After their initial month in the United States, many refugees are eligible for temporary resettlement assistance from the Office of Refugee Resettlement (ORR). All states, except Wyoming, administer an ORR-funded assistance program that provides up to 8 months of cash and medical assistance, as well as other social services, and states have the flexibility to choose among three program delivery models—the Publicly Administered, Wilson/Fish, or Public Private Partnership programs. These three delivery models were established over a 20-year period and give states options in how they provide refugee assistance: Publicly Administered: Refugee resettlement assistance is provided primarily through the Publicly Administered program. States are not required to administer this program, but those that do generally model the program after their Temporary Assistance for Needy Families (TANF) programs. Wilson/Fish: In 1984, Congress authorized ORR to implement the Wilson/Fish program, which gave states flexibility in how they provide assistance to refugees, including whether to administer assistance primarily through local voluntary agencies. One of the goals in developing this program was to expand the number of states that offered a refugee program so that an ORR-funded program could exist in every state that resettles refugees.
Public Private Partnership: In 2000, ORR established the Public Private Partnership program, which promotes states’ partnerships with voluntary agencies to provide assistance and gives states the flexibility to set refugees’ cash grants at levels higher than those authorized for the Publicly Administered program. (See fig. 1 for the geographical distribution of refugee assistance programs.) In addition, some refugees participate in the Matching Grant program, which is only partially funded by ORR. According to ORR, this program is administered by a network of national voluntary agencies and is offered in 42 states and the District of Columbia. The Matching Grant program provides refugees with cash and other assistance for 4 to 6 months with the goal of helping them become self-sufficient without receiving cash benefits from a public assistance program. Of the refugees who received cash assistance from ORR in fiscal year 2009, just over 30 percent participated in the privately administered Matching Grant program, while most of the remainder participated in ORR’s other assistance programs. (See fig. 2.) All four programs fund cash and medical assistance as well as a broad range of social services, including employment services, English language instruction, case management, citizenship and naturalization preparation services, and social adjustment services. Eligible refugees may also receive other federal benefits, such as food assistance offered through the United States Department of Agriculture’s Supplemental Nutrition Assistance Program (SNAP, formerly the Food Stamp Program). Figure 3 shows the assistance offered to refugees who participate in one of ORR’s resettlement programs. Not all refugees receive assistance through ORR-funded programs. Refugees who are eligible for or receiving cash assistance from programs outside of ORR, such as TANF or Supplemental Security Income (SSI), are generally not eligible to receive cash assistance from ORR’s Publicly Administered, Wilson/Fish, or Public Private Partnership programs. Refugees who are eligible for TANF but who are not receiving TANF benefits may, however, receive cash assistance and other services offered by the Matching Grant program. See figure 4 for the general path of refugee resettlement in the United States. In addition to helping newly arrived refugees adjust to their surroundings and settle in the United States, the overall goal of ORR’s assistance programs is to help refugees attain self-sufficiency. Self-sufficiency is defined in ORR’s regulations as the refugee earning a total family income at a level that enables a family unit to support itself without receiving a cash assistance grant. ORR collects data on several employment-related outcomes to assess program performance. As part of the Government Performance and Results Act’s requirement for agencies to produce performance measures used to assess their progress toward meeting performance goals, the Publicly Administered, Public Private Partnership, and Wilson/Fish programs have six shared outcome measures. The Matching Grant program has its own measures—three of which are directly related to the program’s goal of helping refugees become economically self-sufficient. Table 1 lists the performance measures for the different types of refugee assistance programs. (For a description of these measures, see app. III.) Congress appropriates a fixed amount of funding each year for refugee assistance programs.
ORR distributes this funding among seven budget activities—each with a specific purpose. (See table 2.) ORR's largest budget activity, Transitional and Medical Services, primarily supports refugees' cash and medical assistance offered through the Publicly Administered, Wilson/Fish, and Public Private Partnership programs as well as the federal contribution to the Matching Grant program. ORR is authorized to fully reimburse program providers for the cash and medical assistance they provide to refugees enrolled in the Publicly Administered, Wilson/Fish, and Public Private Partnership programs, even if the costs of serving all eligible refugees exceed ORR's annual appropriation in a given fiscal year. The social services that state and voluntary agencies provide to refugees enrolled in these programs, such as employment services and case management, are primarily funded through ORR's Social Services budget activity. ORR receives a fixed amount of Social Services funds each fiscal year and allocates these funds to states based on estimates of arriving refugees. These Social Services funds do not increase within a given year if the number of refugees served is greater than anticipated. Together, Transitional and Medical Services and Social Services funding accounted for more than half of ORR's total appropriations in fiscal year 2009 (about $436 million, including unobligated funds). Figure 5 shows the distribution of appropriations across ORR's budget activities in fiscal year 2009.

The Matching Grant program features several design elements that distinguish it from assistance offered through the Publicly Administered, Wilson/Fish, and Public Private Partnership programs. According to state officials and voluntary agency staff, Matching Grant providers select the refugees they want to participate in the program, and these refugees can opt to participate in the Matching Grant program or may choose to apply for and receive benefits from other programs if eligible. In contrast, providers of the Publicly Administered, Wilson/Fish, or Public Private Partnership programs enroll any eligible refugee. In interviews with providers, we learned that refugees who find employment while participating in the Matching Grant program may keep their earnings in addition to their cash grant. However, refugees enrolled in ORR's other programs have their cash assistance reduced or terminated as a result of their employment earnings. In addition, according to ORR and providers, funding for the Matching Grant program is tied to refugees' success in finding employment while enrolled in the program—that is, providers of the Matching Grant program who do not demonstrate that refugees have achieved specific employment-related outcomes may have their funding reduced the following program year. In contrast, funding for the Publicly Administered, Wilson/Fish, or Public Private Partnership programs is not affected by refugees' employment outcomes, according to ORR officials. (For more information on the Matching Grant program, see table 3.)

The four ORR programs differ in the extent to which they allow state and voluntary agencies the flexibility to develop or use various service delivery approaches. In the two states we visited that offered the Publicly Administered program, refugee assistance was modeled after the states' TANF programs.
In these states, refugees generally received their cash and medical benefits, employment assistance, and other social services from multiple public and private agencies, such as county social service offices and local community-based organizations, and typically met with several caseworkers to receive these services. In comparison, the Wilson/Fish, Public Private Partnership, and Matching Grant programs allow state and voluntary agencies flexibility in developing approaches that are different from those used in states' TANF programs and that at the same time focus on helping the refugee become economically self-sufficient. These approaches varied within and among programs; examples we observed included providers integrating their refugee services, providing intensive case management, and offering employment incentives:

Service integration: Some providers we visited used a single government or voluntary agency to provide cash assistance, employment counseling, and case management to refugees, often in one location, while other providers referred refugees to multiple agencies for different services. For example, a Public Private Partnership program in Texas used a single agency to provide refugees with most services, while a Public Private Partnership in Minnesota used multiple service providers. Refugees enrolled in Minnesota's Public Private Partnership program accessed their cash assistance and case management from a local voluntary agency but then often received other types of services, like employment counseling and English language instruction, from a combination of other private, nonprofit, and public agencies. Most of the Matching Grant and Wilson/Fish programs we visited used a single agency to provide most refugee services.

Intensive case management: In some of the states we visited, ORR programs provided refugees with intensive case management, using a single case manager to oversee most aspects of a refugee's case, whereas in other states, officials told us that providers spread responsibility for managing a refugee's case among multiple case workers, often in different agencies. Program guidelines state that refugees enrolled in the Matching Grant program are to be assigned a caseworker who provides intensive case management. Intensive case management can encompass a wide range of activities, including referring refugees to needed services, such as transportation, child care, English classes, employment-readiness training, and food and housing assistance; helping the refugee adapt to the new culture; and facilitating interactions between clients and employers or other service providers. In Florida, one voluntary agency case manager who provided intensive case management to his clients drove refugees enrolled in the Matching Grant program to and from work on their first day of employment and checked in with the employers to help resolve any employment-related issues that may have arisen. However, the extent to which refugees receive intensive case management can vary by program. The Public Private Partnership program does not receive dedicated funding from ORR to specifically support case management activities, and two providers of the Public Private Partnership program told us they could not always provide intensive case management services to refugees.
Incentives for early employment: In addition to the Matching Grant program, the Wilson/Fish and Public Private Partnership programs allow states or voluntary agencies to offer financial incentives to encourage refugees to find employment quickly. While some providers may choose not to offer incentives, all the Wilson/Fish and Public Private Partnership providers we visited offered employment incentives to refugees. Providers of the Wilson/Fish program in Massachusetts and San Diego County, for example, offered refugees a cash bonus if they found full-time employment within the first 4 months of arrival.

The four programs differ in other ways as well. Table 3 outlines some of the different characteristics of these programs. According to staff at voluntary agencies in four of the five states we visited, enrollment in the Matching Grant program is based primarily, though not necessarily exclusively, on the refugee's readiness to work—including his or her level of motivation, English skills, education or previous work experience, and physical and mental health. Agency officials from one state we visited explained that because the program's duration is shorter than that of the other three assistance programs, the Matching Grant program is best suited to those who are likely to obtain employment quickly. In addition, because funding for Matching Grant programs is based on the performance of voluntary agencies in helping refugees achieve employment-related outcomes, voluntary agency staff have an incentive to select refugees for the program who they think are most likely to be successful in finding a job. Some voluntary agency officials we spoke with said that refugees with high motivation to work and high levels of English proficiency are more likely to find employment than those without these qualities. In some instances, the amount of cash assistance provided by refugee assistance programs is another factor that can influence the placement of refugees in particular programs. Because cash assistance levels under the Publicly Administered, Public Private Partnership, and Wilson/Fish programs are based on the benefits provided under a state's TANF program, the amount of cash assistance provided to families can vary by state and can be either higher or lower than the amounts available under the Matching Grant program. Some voluntary agency officials in the states we visited told us they encourage refugees to participate in the assistance program that offers the greatest monetary benefit to the refugee. In two states we visited, Massachusetts and Texas, voluntary agencies told us they preferred to enroll refugee families with children in the Matching Grant program because, based on the number of eligible members, the family could receive a higher cash benefit in the first few months after arrival than it would receive from other assistance programs. In those states, refugees without children would receive more cash assistance overall from the Public Private Partnership or Wilson/Fish programs than from the Matching Grant program. In addition, some voluntary agencies told us they select families who may face relatively more obstacles than other refugees to participate in programs that provide integrated services and intensive case management because these families can benefit from these approaches.
According to a director at one voluntary agency, intensive case management and integrated services tend to benefit refugees who might otherwise fall through the cracks in a traditional assistance program that provides assistance through multiple agencies and different case managers. Voluntary agency staff in Minnesota told us that, when possible, they enrolled single-parent families who are ready to work in the Matching Grant program instead of the TANF program because they believe the family will benefit from intensive case management and integrated services. One voluntary agency manager explained that refugees enrolled in the TANF program in Minnesota often have several case workers and need to access multiple government and nonprofit agencies to receive the types of services that the Matching Grant program mostly offers through one voluntary agency.

State policies can also determine whether refugee families with children are placed in ORR refugee assistance programs or the state's TANF program, which is generally available to eligible families with children. While refugees who are eligible for or receiving cash assistance from programs that are available to the general population—such as SSI and TANF—are generally prohibited from receiving cash assistance from the Publicly Administered, Public Private Partnership, or Wilson/Fish programs, some states determine TANF eligibility in a way that allows families with children—who in other states would likely be eligible for TANF—to participate in an ORR-funded refugee assistance program. For example, officials in Texas who administer the state's Public Private Partnership program explained to us that refugees with children who apply for TANF soon after they arrive in Texas are often ineligible due to the income they receive during the initial resettlement process. According to Texas officials, families who are ineligible for TANF may participate in the state's Public Private Partnership program, which offers higher cash benefits than the state's TANF program. In contrast, officials in Minnesota who also administer the Public Private Partnership program told us that refugees' initial resettlement payments do not make families ineligible for TANF in their state. As a result, many families with children in Minnesota participate in TANF, not the state's Public Private Partnership program. In addition, some states administering the Wilson/Fish program have the flexibility to allow families who would otherwise be eligible for TANF to participate in the Wilson/Fish program. According to ORR, 4 of the 13 Wilson/Fish programs provided cash assistance to TANF-eligible refugees in fiscal year 2010.

Overall, fewer refugees found jobs within their first months in the United States in fiscal year 2009 than in fiscal year 2007. Before the economic recession, in fiscal year 2007, ORR's performance data show that between 59 percent and 65 percent of all refugees receiving cash assistance from ORR's four assistance programs entered employment within 4 to 8 months of coming to the United States. By fiscal year 2009, however, these employment rates had decreased, ranging from 31 percent to 52 percent, depending on the program. (See fig. 6.) Several state officials and voluntary agency staff told us that refugees have struggled to find and keep full-time jobs during the economic downturn.
Some explained that, compared with 3 years ago, refugees today have fewer employment options because jobs that used to be relatively easy for refugees to find, such as those in the hospitality and construction sectors, are now being filled by non-refugees who have more training or experience. We also heard that of the refugees who do find work, an increasing number have only part-time or temporary jobs. For example, in reviewing the case file of a single Somali man who resettled in Minneapolis, we learned that he had found a part-time job, only to have his schedule reduced to 1 day per week. As a result, he continues to look for other work. Our analysis of ORR's performance data shows that fewer refugees were able to keep their jobs for at least 90 days in fiscal year 2009 than in fiscal year 2007. Specifically, in fiscal year 2007, the percentage of refugees in the Publicly Administered, Wilson/Fish, and Public Private Partnership programs who found work and kept their jobs for at least 90 days ranged from 77 percent to 84 percent, depending on the program. By fiscal year 2009, these rates had decreased somewhat, to between 67 percent and 80 percent. For more information, see appendix III.

Performance data indicate that some refugees obtained employment while enrolled in an ORR assistance program, but no single refugee assistance program consistently outperformed the others across the various performance measures in fiscal year 2009. In comparing the three ORR programs that provide assistance for 8 months, we found, for example, that the Public Private Partnership program performed relatively well at helping refugees find jobs, while the Wilson/Fish program had the most positive outcomes related to job retention. Table 4 below shows the fiscal year 2009 results on these measures for ORR's 8-month programs. For more information on ORR's performance measures, see appendix III. The Matching Grant 4-to-6-month assistance program, with its own set of employment measures, performed well on some but not all of its measures in fiscal year 2009. (See table 5.) ORR performance data cannot be used to compare Matching Grant program outcomes with the outcomes from the other three programs because the programs do not share the same performance measures. While all four refugee assistance programs have three measures in common, the programs collect information for these common measures at different points in time. For example, the Matching Grant program reports the number of refugees who enter employment 4 months after the refugee arrives in the United States, while providers of the other three programs report entered-employment rates for refugees receiving ORR-funded cash assistance within 8 months of their arrival in the United States.

Because the approaches states and voluntary agencies use to provide assistance vary both within and between programs, ORR's performance data provide little information on the relative effectiveness of specific approaches. The Wilson/Fish, Public Private Partnership, and Matching Grant programs were designed to allow providers to develop innovative approaches that are different from those used in states' TANF programs, including integrated services, intensive case management, and employment incentives. Several providers we spoke with believe that the approaches they use to provide assistance play an important role in helping refugees find employment.
One study, published in 1999 and based on data from 1992 through 1994, compared a Wilson/Fish program and a Publicly Administered program in San Diego. It concluded that the Wilson/Fish program, with its integrated services, personal and flexible system of service delivery, and intensive support services, helped refugees find employment more quickly than the Publicly Administered program, which provided services through multiple agencies and case workers. However, we found no other recently published studies that have reliably assessed the effectiveness of the various approaches used by refugee assistance programs. In addition, the way these approaches are implemented varies significantly both within and across programs. For example, in Texas, the voluntary agencies that administer both the Matching Grant and the Public Private Partnership programs told us that the way employment-related services are provided under the two programs is virtually indistinguishable, whereas in Minnesota, the Matching Grant and Public Private Partnership programs use two very different service delivery approaches, according to voluntary agency staff. Because providers consider different factors when placing refugees in assistance programs, it can be difficult to determine whether differences in program performance are attributable to program approaches or to differences in the populations served. For example, because refugee families with children may face different challenges to employment than refugees without children, a program that serves more families with children could have different employment outcomes than one that serves fewer. In one of our discussion groups, a single mother from Rwanda told us that she was unable to find work when she first arrived in the United States because she had to care for her children, the youngest being 6 weeks old at the time. Eventually, she found child care for her children and found her first job after being in the United States for almost 4 years. Additionally, several Matching Grant program administrators told us they were more likely to enroll refugees who speak English fluently, as the ability to speak English can greatly facilitate a refugee's chances of finding employment. One provider in Florida explained that in Miami, despite the fact that most refugees can get by outside the workplace speaking only Spanish, most employers require that job applicants also speak English. A refugee from Belarus told us that in his home country he was an economist and a construction manager, but since arriving in Los Angeles over a year ago with his wife and child he has been unable to find work. He told us that he did not speak English when he arrived and believes that this has been a significant barrier to employment. According to the results of ORR's annual survey of refugees published in the agency's 2007 report to Congress (the most recent published report), English proficiency was one of the most important factors influencing the economic status of refugees, with close to 90 percent of those who lacked earnings and received cash assistance living in a household where no one could speak English.

ORR considers a broad range of factors when estimating its program costs, and its estimates for fiscal years 1999 through 2009 have generally tracked actual program obligations in most but not all years.
When estimating program costs, ORR officials told us they consider several factors, such as the projected inflation rate; participation rates; costs for cash and medical assistance; administrative costs; monitoring, data collection, and evaluation costs; and the projected number of specific refugee groups, such as unaccompanied refugee minors. According to our analysis, ORR's estimates of program costs have generally tracked what the agency actually obligated. Between fiscal years 1999 and 2009, ORR's estimates were, on average, within 6 percent of the agency's actual obligations. (See fig. 7.) Despite its efforts to consider various factors when estimating program costs, ORR has faced difficulties in estimating specific variables, such as the number of refugees who will enter the country in a given year and the share of those refugees who will be eligible for ORR assistance programs, according to officials. ORR officials told us they use the presidential ceiling on refugees that may enter the United States in a given year when estimating the number of refugees they must serve. In fiscal year 2009, the number of refugees who arrived in the United States was more than 90 percent of that year's ceiling. However, this ceiling has not always been a good proxy for the actual number of incoming refugees. For fiscal years 2002 through 2007, the number of refugees who arrived was, on average, about 40 percent lower than the presidential ceiling. (See fig. 8.) In addition to using the refugee ceiling, ORR projects the number of refugee arrivals by using historical arrival patterns, and in its fiscal year 2004 budget estimate, ORR requested less than it did in the previous year because of the decreasing number of refugee arrivals since September 11, 2001. Once ORR has refugee arrival estimates, the agency projects the number of refugees who will likely participate in ORR-funded programs. The share of refugees who are eligible for ORR-funded services varies from year to year. For example, between fiscal years 2007 and 2009, the percentage of all refugees who received cash assistance through ORR's assistance programs fluctuated between 26 percent and 38 percent. ORR must also estimate the average cost of providing cash assistance to refugees participating in its assistance programs, which can vary significantly depending on the distribution of refugees across the country. A refugee living in Texas and participating in the Public Private Partnership program, for example, receives a cash assistance grant of about $200 per month, whereas a refugee living in Massachusetts and participating in the Wilson/Fish program receives a cash assistance grant of about $428 per month. Consequently, the amount ORR reimburses state and voluntary agencies for the costs of providing cash assistance may change as refugee arrival patterns shift. For example, in fiscal year 2009, 3,082 more refugees were settled in Texas—a low-benefit state—than in fiscal year 2008, while Minnesota and Connecticut—both high-benefit states—saw decreases in their numbers of arrivals. ORR officials also told us that because they do not play a role in deciding where refugees are geographically placed, the agency is limited in its ability to estimate costs associated with refugee arrival patterns. Uncertainty about the medical expenses refugees will incur also affects ORR's ability to accurately estimate the funding it will need for the services it must provide.
Because an increasing proportion of arriving refugees need intensive medical care, refugees' medical costs on average have increased over time, creating uncertainty for ORR in estimating these expenses from year to year, according to officials. ORR officials indicated that refugees admitted in recent years have more diverse medical backgrounds than in the past, and that the number of refugees with chronic mental and medical conditions has grown, due in part to increases in refugee groups that have spent years living in refugee camps with limited access to medical care and proper nutrition. Burmese refugees in particular have lived for decades in refugee camps, according to ORR; their arrivals grew from 128 in fiscal year 2002 to 18,275 in fiscal year 2009, an increase from less than 1 percent to 24 percent of the total population of arriving refugees. Partly because of this demographic shift in the refugee arrival population, according to ORR officials, the agency's cost for medical assistance more than doubled from fiscal year 1999 through fiscal year 2009. ORR officials and voluntary agency staff explained that detailed information about refugees' health conditions is often not known prior to their arrival in the United States, which contributes to uncertainty in medical costs. For example, one voluntary agency director in Texas stated that the medical information provided on refugees prior to their arrival is minimal and only describes whether the client has a "Class A" condition, such as active tuberculosis, or a "Class B" condition, such as hypertension, without specifying the illness.

Estimating the number of children ORR will likely serve as a result of the William Wilberforce Trafficking Victims Protection Reauthorization Act of 2008 has also created uncertainty in ORR's budget formulation in recent years, according to ORR. Officials said they were uncertain about the impact this Act would have on their budget due to a provision stipulating that victims of trafficking and undocumented youths who are granted Special Immigrant Juvenile status may receive care and placement services funded by ORR instead of being returned to their home countries. The Department of Homeland Security provides ORR with estimates about asylees and unaccompanied alien children who do not enter the United States through the traditional resettlement channels. According to ORR officials, the Department of Homeland Security estimated that the number of minors receiving services from ORR in fiscal year 2009 would be approximately 12,000 to 14,000, but the overall number of unaccompanied alien children served declined from 7,211 in fiscal year 2008 to 6,622 in fiscal year 2009.

Difficulties in accurately estimating program costs have contributed to fluctuations in ORR's unobligated balances at the end of each fiscal year. For example, ORR officials said that they used the presidential ceiling of 70,000 to estimate the number of refugees they would likely serve during fiscal years 2006 through 2007. However, refugee arrivals were significantly lower in those years, and consequently the agency's costs to support newly arrived refugees were less than the amount it received in appropriations. Additionally, in anticipation of a potential influx of 4,000 to 6,200 additional unaccompanied alien children as a result of the William Wilberforce Trafficking Victims Protection Reauthorization Act of 2008, ORR requested supplemental funding, which was appropriated in fiscal year 2009.
According to ORR, the Act requires that youth entering the United States from neighboring countries be screened to determine if they are victims of trafficking. ORR anticipated that these youth would be cared for under the Unaccompanied Alien Children program while being screened. However, ORR's Unaccompanied Alien Children program served fewer children than anticipated, and at the end of fiscal year 2009, officials said they carried over about $31 million of unobligated Unaccompanied Alien Children funds and $52 million of unobligated supplementary funds. In total, from fiscal years 2006 to 2009, ORR's unobligated balances grew from $17 million to over $83 million. (See fig. 9.)

Congress appropriates a certain amount of money to ORR each year to fund its activities, and ORR has a 3-year period in which to obligate funding for most of its budget activities—so funds that have not expired and are not yet obligated for a specific activity at the end of a fiscal year can be used during the following 2 fiscal years. From fiscal years 1999 to 2005, ORR used prior years' unexpired and unobligated funds to obligate more than it was appropriated in those years. For example, in fiscal year 2005, ORR allocated $205 million of its appropriation to its Transitional and Medical Services budget activity to reimburse states for the costs of providing cash and medical assistance to refugees. When states' costs exceeded this amount, ORR was able to cover the difference between its expenses and its allocation by using funds from its unobligated balances from prior years. By the end of fiscal year 2005, ORR did not have any remaining unobligated balances. From fiscal years 2006 to 2009, however, ORR obligated less than it was appropriated and thus was able to accumulate balances again. ORR officials told us they give priority to using unobligated balances to supplement the program costs of refugees who participate in the agency's Publicly Administered and Public Private Partnership programs because they emphasize these programs in their funding decisions. Officials explained they do not typically use the agency's unobligated balances to supplement funding for other activities, such as funds dedicated to Social Services or the Wilson/Fish program.

ORR's reimbursements to state and voluntary agencies for activities other than cash and medical assistance generally do not increase as the number of newly arrived refugees increases. Officials told us, for example, that the amount appropriated for Social Services remained at approximately $154 million from fiscal years 2006 to 2009, even though the number of refugee arrivals increased by 81 percent. As a result, program providers have developed strategies to prioritize the use of limited funds in serving refugees. To ensure that new arrivals continued to receive needed services, refugee coordinators from Texas and Los Angeles told us they provided employment services to refugees for only about 1 year rather than the 5 years allowed by regulation. Similarly, ORR does not generally increase funding for Wilson/Fish services during a given year. In fiscal years 2008 and 2009, the San Diego Wilson/Fish program experienced an unexpected increase in refugee arrivals, from 3,309 to 5,178.
In fiscal year 2010, ORR directed providers to begin moving refugee families with children who were otherwise eligible for TANF but had been allowed to enroll in the Wilson/Fish program out of the Wilson/Fish program and into the TANF program, thereby shifting the costs of resettling these refugee families to other programs.

ORR spends millions of dollars every year on assistance that is critical to addressing the basic needs of refugees who are new to the United States. State and voluntary agencies that administer ORR's programs vary in how they provide assistance, and little is known about the effectiveness of the approaches they use to help refugees become self-sufficient—an overall goal of all of ORR's programs. With refugees' employment outcomes declining because of the recession and significant pressures on the federal budget, it is important that program providers use approaches that have been shown to be effective in helping refugees find employment that enables them to live without cash assistance. ORR tracks the success of its programs using performance measures, but these measures alone provide little information about the relative effectiveness of the various approaches providers use. It is only by looking more closely at the individual approaches and controlling for other factors that may influence employment outcomes that ORR can begin to identify and promote the most successful strategies while at the same time making more effective and efficient use of its resources.

We recommend that the Secretary of Health and Human Services identify effective approaches that state and voluntary agencies can use to help refugees become employed and self-sufficient. To identify these approaches, the Secretary may consider, for example, conducting a series of rigorous evaluations of the programs and their approaches or expanding the information collected on the annual survey. Recommendations from further study could be used by HHS or, if appropriate, by Congress, to improve ORR's refugee resettlement programs.

We shared a draft of this report with HHS for review and comment. On March 16, 2011, HHS provided written comments, which may be found in appendix V, and technical comments, which we incorporated in the report where appropriate. In its written comments, HHS confirmed several of our findings and agreed with our recommendation to identify effective approaches to help refugees become employed and self-sufficient. HHS indicated that it seeks to highlight promising practices related to effective employment approaches and cited two recently published studies that it sponsored as examples of its efforts. While the two studies cited described different services states and voluntary agencies use to assist refugees, and one study even compared employment outcomes of refugees living in two cities, neither of the studies evaluated the effectiveness of the programs or the approaches providers use in helping refugees become self-sufficient. In fact, as HHS indicated, one study suggested that future research should include such an evaluation. The agency also emphasized that refugee resettlement in the United States is intended to be a rescue and restore program, which not only provides refugees with temporary cash, medical, and employment assistance, but also promotes cultural orientation, civic engagement, and other activities. GAO acknowledges that HHS has a broad mission to provide refugees with critical resources to assist them in becoming integrated members of society.
Nonetheless, we focused our report on the temporary assistance and employment services refugees receive within the first 4 to 8 months after arriving in the United States because it is within this period that refugees are expected to find employment and become self-sufficient.

We are sending copies of this report to relevant congressional committees, the Secretary of Health and Human Services, and other interested parties. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7215 or brownke@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix VI.

To describe how the Office of Refugee Resettlement (ORR) refugee assistance programs differ and the factors used to place refugees in a program, we interviewed ORR officials and officials from all nine national voluntary agencies that help administer ORR programs. We also interviewed state, county, and local voluntary agency officials in five states, including California, Florida, Massachusetts, Minnesota, and Texas, and conducted discussion groups with refugees in all but Massachusetts. We selected four states with a high concentration of ORR refugee caseloads and one state with a relatively low concentration of refugee cases. We also selected states with a range of average 2009 unemployment rates. The states we selected were geographically diverse, and all offered Matching Grant programs. We also collected and reviewed refugee case files from the government and voluntary agencies we visited so that we would have case files representing refugees with different experiences. From each of the states we visited except California, we collected copies of two complete case files chosen by the voluntary agency—one case file representing a refugee who participated in the Publicly Administered, Wilson/Fish, or Public Private Partnership program, and one case file representing a refugee who participated in a Matching Grant program. The information we collected in these selected states does not allow us to generalize to other states, refugees, or local voluntary agencies. We also reviewed relevant federal laws and regulations. To describe refugee employment outcomes and the effectiveness of different approaches to providing assistance, we collected, aggregated, and analyzed performance data across all states for fiscal years 2007, 2008, and 2009 by program—Publicly Administered, Public Private Partnership, and Wilson/Fish. To assess the reliability of the performance data, we interviewed knowledgeable agency officials and reviewed official documents. We compiled these data into a spreadsheet and included only the performance outcomes of refugees who received cash assistance from one of ORR's programs. Because this study focused primarily on refugees who received cash assistance from ORR's four assistance programs, we did not include refugees who participated in the Temporary Assistance for Needy Families (TANF) program. We also did not include any data representing refugees who were receiving ORR services but not refugee cash assistance. We also collected and analyzed performance data for the Matching Grant program from the nine national voluntary agencies for fiscal years 2007 through 2009.
We analyzed these 3 years of performance data to gain insight into how refugee assistance programs have performed most recently. While the performance data have some limitations, we consider these data reliable and appropriate for this engagement. We also conducted a literature review and found one study that reliably addressed the effectiveness of approaches used by providers to provide refugee assistance. To describe how ORR estimates program costs and how estimates affect its unobligated balances, we interviewed officials from the Department of Health and Human Services (HHS), the Administration for Children and Families (ACF), and ORR to determine how ORR formulates its budget. We also reviewed and analyzed budget documents from fiscal years 1999 through 2009, including ORR budget justifications and annual reports to Congress. We interviewed knowledgeable agency officials and reviewed official documents to assess the data that ORR uses to estimate program costs, and found the data to be sufficiently reliable. Analyzing budget information from fiscal years 1999 to 2009 allowed us to identify trends in ORR's obligations and unobligated balances. In addition, we spoke with officials from the Departments of State and Homeland Security to obtain information on how they develop budgets for their refugee programs and what, if any, coordination occurs between these agencies and ORR to help ORR formulate its budget. We conducted this performance audit from December 2009 to March 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence we obtained provides a reasonable basis for our findings and conclusions.

Appendix II: Profiles of States Visited

[Table: benefit level per month (1 adult/2 adults) by state program. Recoverable entries include a Public Private Partnership program paying $250/$437 per month and a Public Private Partnership program paying $200/$300 per month for months 1-4 and $187.50/$252 per month for months 5-8. A table note cited a 2010 Department of Labor news release, Report Number USDL-10-021.]

Appendix III: ORR Employment Outcomes

For the Publicly Administered, Wilson/Fish, and Public Private Partnership programs, the six shared measures are defined as follows:

- The unduplicated number of refugees who entered employment.
- The unduplicated number of refugees who entered full-time employment where health benefits are offered within the first 6 months of employment.
- The average wage at placement for all refugees who entered full-time employment.
- The unduplicated number of refugees terminating cash assistance due to earnings from employment.
- Job Retention at 90th Day of Employment: The unduplicated number of refugees who entered employment between July of the previous calendar year and June of the current calendar year. This rate is a measure of a refugee's retention of employment, not retention of a specific job. As long as the refugee is employed in a job one quarter after the date he or she entered employment, it is considered retention.
- The unduplicated number of refugees reducing cash assistance due to earnings from employment.

For the Matching Grant program, the measures are defined as follows:

- The unduplicated number of refugees who entered employment.
- The unduplicated number of refugees who entered full-time employment where health benefits are offered within the first 6 months of employment.
- Average Wage per Hour at Employment (full-time): The sum of the hourly wages for the unduplicated number of full-time job placements divided by the total unduplicated number of individuals placed in full-time employment (expressed as a formula below).
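Expressed as a formula, this average-wage measure restates the verbal definition above, with $w_i$ denoting the hourly wage of the $i$th full-time placement and $N$ the total unduplicated number of individuals placed in full-time employment:

$$\text{Average wage per hour at employment (full-time)} = \frac{1}{N}\sum_{i=1}^{N} w_i$$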
- Economic Self-Sufficiency (120): This rate measures the self-sufficiency of refugees enrolled in the Matching Grant program at the 120th day. Self-sufficiency means that the refugee or refugee family is earning a total family income at a level that enables the family unit to support itself without receipt of cash assistance. Refugees who receive non-cash assistance, such as Supplemental Nutrition Assistance Program (SNAP) benefits or housing subsidies, are considered by ORR to be self-sufficient if they have earnings and do not receive cash assistance.
- Economic Self-Sufficiency Retention (180): This refers to the individuals who were reported to be self-sufficient at the 120th day and continued to be self-sufficient 60 days later. Refugees who receive non-cash assistance, such as SNAP benefits or housing subsidies, are considered by ORR to be self-sufficient if they have earnings and do not receive cash assistance.
- Economic Self-Sufficiency (Overall): This rate measures the self-sufficiency of all refugees enrolled in the Matching Grant program, including the status at 120 days and 180 days.

The following examples are from our four discussion groups held in various parts of the country. Voluntary agencies convened the discussion groups, selected the participants, and provided translators when necessary. These examples are illustrations of refugees' experiences and are not indicative of any success or failure of ORR's programs. A single male from Iraq had job experience as a shop owner, translator, and sports trainer prior to arriving in the United States. His voluntary agency provided him with assistance in finding an apartment and a job. He currently works two jobs and makes enough money to make ends meet. He hopes to begin a new career by taking training in massage therapy in the near future. A married couple from Burma with two teenagers lived an agrarian life in their native land, farming rice and vegetables for their own subsistence and selling the rest. Caseworkers from a voluntary agency helped them obtain medical care and Social Security cards and fill out job applications. Neither the husband nor the wife was literate in their native Burmese dialect, and both found learning English difficult. He has found work in a factory, and his wife has qualified for SSI. With these sources of income they are able to pay their bills and save some money. A man from Eritrea was a plant scientist in his native country and knew some English. A voluntary agency helped him enroll in a nursing assistant certification program and apply for a job. He aspires to become a doctor. Married with two children, an Iranian physician with 15 years of medical experience in his native land arrived in the United States wanting to practice medicine. However, because of his limited English, he said it was difficult to study for the U.S. Medical Licensing Examination. A voluntary agency helped him find a job as a medical assistant, but he did not keep that job. In the future, he hopes to enhance his English skills so he can enter and complete a physician's assistant program.

In addition to the contact named above, Kathryn Larin, Assistant Director; Danielle Giese and Cheri Harrington, Analysts-in-Charge; Richard Burkard; David Chrisinger; Erin Cohen; Rajiv D'Cruz; Mitchell Karpman; Carol Henn; Brittni Milam; James Rebbe; Cynthia Saunders; Kathleen van Gelder; Shana Wallace; and Daniel Webb made key contributions to this report.
Displaced Iraqis: Integrated International Strategy Needed to Reintegrate Iraq’s Internally Displaced and Returning Refugees. GAO-11-124. Washington, D.C.: December 2, 2010. Humanitarian Assistance: Status of North Korean Refugee Resettlement and Asylum in the United States. GAO-10-691. Washington, D.C.: June 24, 2010. Iraq: Iraqi Refugees and Special Immigrant Visa Holders Face Challenges Resettling in the United States and Obtaining U.S. Government Employment. GAO-10-274. Washington, D.C.: March 9, 2010. Iraqi Refugee Assistance: Improvements Needed in Measuring Progress, Assessing Needs, Tracking Funds, and Developing an International Strategic Plan. GAO-09-120. Washington, D.C.: April 21, 2009. Refugee Resettlement: Unused Federal Funds in 1991 and 1992. GAO/HRD-94-44. Washington, D.C.: December 7, 1993. Refugee Resettlement: Initial Reception and Placement Assistance. GAO/NSIAD-93-193BR. Washington, D.C.: June 18, 1993. Refugee Resettlement: Federal Support to the States Has Declined. GAO/HRD-91-51. Washington, D.C.: December 21, 1990. Soviet Refugees: Issues Affecting Domestic Resettlement. GAO/HRD-90-106BR. Washington, D.C.: June 26, 1990. Refugee Program: Financial Accountability for Refugee Resettlement Can Be Improved. GAO/NSIAD-89-92. Washington, D.C.: March 17, 1989. Refugee Programs: Status of Early Employment Demonstration Projects. GAO/NSIAD-88-91. Washington, D.C.: February 3, 1988. Refugee Program: Initial Reception and Placement of New Arrivals Should Be Improved. GAO/NSIAD-86-69. Washington, D.C.: April 7, 1986. Greater Emphasis on Early Employment and Better Monitoring Needed in Indochinese Refugee Resettlement Program. GAO/HRD-83-15. Washington, D.C.: March 1, 1983.
In fiscal year 2009, the United States resettled close to 70,000 refugees fleeing persecution in their homelands. To assist in their transition to the United States and help them attain employment, the Department of Health and Human Services' Office of Refugee Resettlement (ORR) provides temporary cash, medical, and other assistance through four different assistance programs. The economic downturn and an increase in refugee arrivals posed challenges to ORR's efforts to assist refugees and estimate program costs, resulting in fluctuating unobligated balances. Congress required GAO to examine (1) differences in ORR's refugee assistance programs and the factors program providers consider when placing refugees in a particular program; (2) refugee employment outcomes and the effectiveness of different approaches to providing assistance; and (3) how ORR estimates program costs and how its estimates have affected the agency's unobligated balances. GAO met with federal and state officials, voluntary agency staff, and refugees; reviewed selected case files; analyzed ORR performance data for fiscal years 2007 through 2009; and reviewed and analyzed relevant federal laws, regulations, and budget documents. ORR supports several approaches to providing cash, medical assistance, and social services to refugees through its Matching Grant, Publicly Administered, Wilson/Fish, and Public Private Partnership programs. The Matching Grant program, which is administered and partially funded by private voluntary organizations, features several design elements that set it apart from ORR's other programs. For example, voluntary organizations select refugees for the Matching Grant program, and those who participate have 4 to 6 months to find employment before their cash assistance ends. Most states also offer one of ORR's other programs; these programs enroll any eligible refugee, and participants have up to 8 months to find a job before their assistance ends. Three of ORR's programs—the Wilson/Fish, Public Private Partnership, and Matching Grant—were designed to give providers flexibility in developing innovative approaches to help refugees find employment and become self-sufficient. GAO observed providers using a number of different approaches, including offering refugees cash incentives for early employment, and these approaches varied within and among programs. Voluntary agencies told GAO that they consider several factors, such as the refugee's English language and employability skills, in deciding whether to enroll a refugee in the Matching Grant program or another ORR assistance program. ORR's four assistance programs showed some success in helping refugees obtain employment in fiscal year 2009, but the percentage of program participants who obtained employment declined in recent years, and little is known about which approaches are most effective in improving the economic status of refugees. In fiscal year 2007, between 59 percent and 65 percent of refugees receiving cash assistance from ORR programs entered employment within 4 to 8 months. By fiscal year 2009, these rates had decreased to between 31 percent and 52 percent, depending on the program. Little is known about the effectiveness of the different approaches providers use to improve employment outcomes for refugees, such as intensive case management and employment incentives, in part because of differences in the way programs are structured and the populations they serve, and in part because of differences in the way program performance is measured.
ORR's estimates of program costs have generally tracked actual obligations, but challenges in estimating specific variables, such as the number of eligible refugees and the cost of serving them, have contributed to fluctuations in unobligated balances between fiscal years 1999 and 2009. ORR has a 3-year period in which to obligate its annual appropriations. From fiscal years 1999 to 2005, ORR used unexpired and unobligated prior-year funds to obligate more than it was appropriated for those years, in part because of higher-than-expected increases in refugee arrivals and medical costs. As a result, its unobligated balances were reduced in most of these years and were gone by fiscal year 2005. However, from fiscal years 2006 to 2009, ORR obligated less than it was appropriated, which allowed the agency to carry over funds from one year to the next. As a result, its unobligated balances grew from $17 million in fiscal year 2006 to over $83 million in fiscal year 2009. GAO recommends that the Secretary of Health and Human Services identify effective approaches that state and voluntary agencies can use to help refugees become employed and self-sufficient.
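To illustrate the carryover mechanics summarized above, here is a minimal bookkeeping sketch: each year's unobligated balance is the prior balance plus new budget authority minus obligations, so balances grow when obligations fall short of appropriations and shrink when they exceed them. The dollar figures are hypothetical, not ORR's actual appropriations or obligations, and the sketch ignores the expiration of funds at the end of the 3-year window.

```python
# Minimal sketch of multiyear carryover: each year's unobligated balance is
# the prior balance plus new appropriations minus obligations. Figures are
# hypothetical; expiration at the end of the 3-year window is ignored.

def year_end_balance(prior_balance, appropriation, obligations):
    """Unobligated balance (in $ millions) carried into the next fiscal year."""
    return prior_balance + appropriation - obligations

balance = 0
years = [(600, 583), (600, 560), (600, 650)]  # (appropriation, obligations)
for year, (approp, obligated) in enumerate(years, start=1):
    balance = year_end_balance(balance, approp, obligated)
    print(f"Year {year}: unobligated balance = ${balance} million")
```

In the first two hypothetical years the agency obligates less than it is appropriated and the balance accumulates; in the third year it obligates more than its appropriation and draws the balance down, the same pattern the report describes for fiscal years 2006 through 2009 and 1999 through 2005, respectively.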
The first space shuttle launch occurred on April 12, 1981. During the 25th launch on January 28, 1986, the shuttle Challenger was destroyed shortly after liftoff from Kennedy Space Center. Shuttle flights were suspended while the accident was investigated by the Presidential Commission. The shuttle returned to flight on September 29, 1988. Since that time, it has flown successfully about 50 times. The Presidential Commission determined that the 1986 accident was caused by a faulty seal in one of the solid rocket motor joints. The Commission also found other contributing causes of the accident, such as management isolation, communications failures, and the lack of a properly staffed, supported, and robust safety organization. According to the Commission's June 6, 1986, report, the decision to launch the Challenger was based on incomplete and sometimes misleading information, a conflict between engineering data and management judgments, and a National Aeronautics and Space Administration (NASA) management structure that permitted internal flight safety problems to bypass key shuttle managers. Officials who made the launch decision were unaware of a recent history of problems with the defective solid rocket motor joint and of the motor contractor's initial recommendation against launching. According to the Commission, if the decisionmakers had known all of the facts, it is highly unlikely that they would have decided to launch.

Space flight can never be made risk free because it involves complex hardware and software systems, harsh operating environments, and the possibility of human error. A 1995 study by a NASA contractor, for example, placed the median estimate of a catastrophic shuttle failure at 1 in 145 launches. According to the Advisory Committee on the Future of the U.S. Space Program, "there can be no acceptable objective among those who would challenge the vastness of space other than perfection." Unfortunately, as the Committee's report points out, the objective of perfection is not readily met, especially since space missions are fundamentally difficult and demanding undertakings that depend upon some of the world's most advanced technology, leaving many opportunities for error. The shuttle is an extremely complex system. The program employs thousands of people, and launching a shuttle requires that 1.2 million separate procedures be accomplished correctly. Also, NASA has identified over 5,000 critical system components whose failure, either singularly or in combination, could cause loss of the vehicle or crew. Because these risks cannot be completely eliminated, they must be identified and properly managed. NASA's risk management policy requires that program and project management communicate to NASA management and all program/project personnel the significance of assessed risks and the decisions made with respect to them. At NASA, risk management includes identifying the primary risk drivers and estimating the likelihood of occurrence, identifying the ensuing consequences, and determining the cost and schedule impact.
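To put point estimates like the 1-in-145 figure in perspective, the sketch below shows how a per-launch failure probability compounds across a series of launches, under the simplifying assumption that each launch carries an independent, identical risk. This is an illustration only, not NASA's actual risk model.

```python
# Minimal sketch: probability of at least one catastrophic failure over n
# launches, assuming independent launches with identical per-launch risk.
# The 1-in-145 figure is the median estimate cited from the 1995 study.

p_per_launch = 1 / 145

for n in (1, 25, 50, 100):
    p_at_least_one = 1 - (1 - p_per_launch) ** n
    print(f"{n:>3} launches: P(at least one failure) = {p_at_least_one:.1%}")
```

Under these assumptions, a risk that looks small on any single flight (about 0.7 percent) grows to roughly a 29 percent chance of at least one loss over 50 flights, which is why identifying and managing risk drivers matters even when per-launch estimates appear low.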
NASA's policy regarding safety is to avoid loss of life, injury to personnel, damage, and property loss; instill safety awareness in all NASA employees and contractors; assure that an organized and systematic approach is used to identify safety hazards and that safety is fully considered from conception to completion of all agency activities; and review and evaluate contractors' and NASA's plans, systems, and activities related to establishing and meeting safety requirements to ensure that desired objectives are effectively achieved. Failure modes and effects analyses are conducted for all flight hardware elements and ground support equipment. This analysis starts with the identification of all potential failure modes and an evaluation of "worst case" effects. NASA places potential effects of failures into the general categories shown in table 1.1. Hazard analyses are conducted to identify potential safety hazards and means for minimizing the hazards. NASA's actions to minimize hazards follow the sequence of (1) system designs that minimize potential hazards, (2) use of safety devices if the design does not eliminate a potential safety hazard, (3) use of warning devices to alert the flight or ground crew to potential hazards, and (4) use of special procedures.

Approaches for assessing risk can be either quantitative or qualitative, depending on whether statistical probabilities are assigned to a risk element. All risk assessment approaches require experts to make subjective judgments about the risk elements as well as the likelihood of their occurrence. Quantitative approaches, such as probabilistic risk assessments, can be used to assess both the likelihood that an accident will occur (probability) and the level of damage or loss that will result (consequences). Quantitative assessment methods mathematically quantify risk on the basis of engineering judgment, calculated probabilities of component reliability, and analysis of potential human failures, whether they occur singly or in combination. A probabilistic risk assessment, for example, addresses three basic questions: (1) What could go wrong? (2) How likely is it that this will happen? and (3) What are the consequences? Qualitative assessments, on the other hand, assess risk through descriptive information that identifies the nature and components of risk, or through an ordinal scale, such as high, medium, and low. Qualitative ratings are usually based on the judgments of experts after they consider such things as test and operational experience, analytical results, trends, and other reported data.

NASA follows a formal review process in certifying the shuttle for flight. The certification of flight readiness process is a step-by-step activity designed to certify the readiness of all components of the vehicle assembly and all aspects of mission support. The flight preparation process begins with project milestone reviews, including (1) element acceptance, (2) payload readiness, (3) software readiness, and (4) project preflight readiness reviews. These reviews are chaired by NASA project managers, and the contractors formally certify the flight readiness of the hardware and software. The next step in the process is the program milestone reviews. These reviews are held to assess the readiness for mating the external tank and solid rocket booster, mating the orbiter and external tank, and ferrying the orbiter atop the shuttle carrier aircraft when required.
These reviews are chaired by the manager of launch integration, and each shuttle element manager certifies that the element has satisfactorily completed manufacture, assembly, test, and checkout, including the contractor’s certification that design and performance are up to standard. The final step in the flight preparation process is the flight readiness review. This review is held about 2 weeks prior to launch and is chaired by the Associate Administrator for Space Flight. All shuttle elements, safety and mission assurance, center directors, and senior representatives from the major contractors participate in this review. At the end of the flight readiness review, all organizations must certify that the mission is ready for launch. The Associate Administrator for Safety and Mission Assurance is also an active participant. The safety and mission assurance organization holds parallel reviews to assess safety issues related to the planned launch and participates in all phases of the flight preparation process. Two days before a scheduled launch, a mission management team holds a review to assess flight readiness. Its agenda includes closeout of any open work, closeout of any flight readiness review action items, discussion of new or continuing anomalies, and an updated briefing on anticipated weather conditions at the launch site and at abort landing sites in different parts of the world. The mission management team meets every day after the launch-minus-2-day review up to the conclusion of the mission. Figure 1.1 illustrates NASA’s flight preparation process. NASA’s safety organization provides an independent channel for assessing shuttle flight safety. Each center’s safety organization participates in the element acceptance reviews as well as the flight readiness review and the mission management team. Participation in these reviews provides the opportunity for NASA’s safety organization to express any residual concerns about the safety of an upcoming mission. The organization also holds independent prelaunch assessment reviews. In addition, the Associate Administrator for Safety and Mission Assurance attends the flight readiness review and has a direct communications link to the NASA Administrator. Other program briefings and reviews are also a part of the certification of flight readiness process. For example, the program manager holds an early morning telephone conference with the shuttle centers and headquarters each day to discuss the status of progress and problems. Likewise, about midday the working-level shuttle managers hold a telephone conference to provide updated information. Safety and mission assurance personnel attend all of the shuttle program and project meetings and contribute their independent views. The former Chairman, Subcommittee on Investigations and Oversight, House Committee on Science, Space, and Technology, asked us to review NASA’s management of risk associated with flying the space shuttle. Specifically, we reviewed the actions NASA has taken to improve the free flow of information in the launch decision process and the progress NASA has made in adopting quantitative methods for assessing risk. 
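The certification logic running through these reviews is, at bottom, a unanimous sign-off gate: every element, project, and support organization must certify readiness, and any open item blocks launch until it is resolved. The sketch below is our own minimal illustration of that gate; the organization names and the open item are hypothetical, and this is not an actual NASA system.

```python
# Minimal illustration (ours) of the flight readiness "all must certify" gate.
# Organization names and the open item below are hypothetical.
certifications = {
    "orbiter project": True,
    "solid rocket motor project": True,
    "external tank project": True,
    "space shuttle main engine project": True,
    "safety and mission assurance": True,
}

open_items = ["weather briefing for abort landing sites"]  # hypothetical

if all(certifications.values()) and not open_items:
    print("mission certified ready for launch")
else:
    holdouts = [org for org, ok in certifications.items() if not ok]
    print(f"launch blocked; non-certifying: {holdouts}; open items: {open_items}")
```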
To assess the communications environment, we reviewed the policies, procedures, and practices the agency uses to manage the shuttle program and make launch decisions; we observed various shuttle processing reviews, including a shuttle launch; and we discussed various aspects of the program with those responsible for its management. We also conducted discussions of these topics with groups of shuttle and safety managers at NASA Headquarters and the Johnson, Marshall, and Kennedy field centers. Together, these individuals represented almost all of the top NASA officials responsible for shuttle launch decisions and management of most shuttle manufacturing and processing work. To understand the flow of risk information within shuttle contractor organizations and between NASA and its shuttle contractors, we also held discussions with groups of program and safety managers and working-level engineers at three of NASA’s prime shuttle contractors. We chose these three contractors because their work is among the most complex and highest risk in the program. Group discussions are very useful for exploring the various facets of communications issues and processes. However, they did not enable us to determine how many participants held a particular view or the intensity of their views. Therefore, to more precisely measure the themes that emerged from the group discussions, we sent a structured questionnaire to the NASA interview participants and some safety representatives who did not participate in the group interviews. Appendixes I through III contain a more detailed discussion of our group interview and survey methodology. To evaluate NASA’s use of quantitative risk assessment methodologies, we reviewed policies, procedures, and practices related to NASA’s shuttle risk management program and held discussions with senior shuttle managers and NASA’s safety and mission assurance organization. We also discussed the use of quantitative risk assessment methodologies with other federal agencies that are responsible for managing complex systems to establish a benchmark for the use of such methods within the federal government. These agencies included the Nuclear Regulatory Commission and the Federal Aviation Administration. We also obtained information on the Environmental Protection Agency’s use of quantitative risk assessment in the management of Superfund cleanup sites. In addition, we consulted outside experts to obtain their views on the usefulness of quantitative risk assessments to NASA. We conducted our review primarily at NASA Headquarters, Washington, D.C.; Marshall Space Flight Center, Alabama; Johnson Space Center, Texas; Kennedy Space Center, Florida; Thiokol Corporation, Ogden, Utah; and the Rocketdyne Division of Rockwell International, Canoga Park, California. We conducted our review between June 1994 and December 1995 in accordance with generally accepted government auditing standards. Good communication is one of the keys to effective risk management. Without adequate information about risks, launch decisions may be flawed, as they were in the case of the Challenger accident. Interviews with key shuttle program officials, survey data, and our observations indicate that NASA has been successful in creating communication channels and an organizational culture that encourage people to discuss safety concerns and to bring those concerns to higher management if necessary. 
NASA has announced plans to make fundamental changes in the way it manages the shuttle program—turning day-to-day management over to a single prime contractor and reducing direct NASA involvement. Some managers expressed concern about the potential impact of this change, particularly with respect to staffing and organizational restructuring. NASA’s challenge will be to ensure adherence to the communications principles that are essential to promoting shuttle safety. According to the Presidential Commission, prior to the Challenger accident, project managers for the various elements of the shuttle program felt more accountable to their center management than to the shuttle program organization. As a result, vital program information frequently bypassed the program manager, who was located at the Johnson Space Center. The Commission recommended that NASA give the program manager authority over all program funding and work. In response, NASA centralized program management in a shuttle program director at headquarters with overall responsibility for shuttle operations and budgets. Also, the program manager at the Johnson Space Center was made a headquarters employee in order to minimize center-to-center communications problems. Effective January 31, 1996, however, shuttle program management responsibility was transferred from the headquarters director to the Johnson Space Center director. Because NASA has not yet prepared a detailed plan for implementing this change, we could not fully evaluate its implications. However, according to NASA officials in the Office of Human Space Flight, the Johnson Center director will have full authority over shuttle resources and work at all participating centers and will report directly to the NASA administrator. NASA has also given astronauts a role in certifying the shuttle for launch and encouraged them to move into shuttle management positions, as recommended by the Presidential Commission. NASA also established the Headquarters Office of Safety and Mission Assurance under the direction of an associate administrator reporting directly to the NASA administrator. The agency strengthened the safety organizations at its shuttle field centers so that each director of safety and mission assurance reports to a center director rather than to the engineering organization. NASA also increased the number of people assigned to the safety organization. In addition, NASA established a safety reporting system to provide an avenue for NASA and contractor personnel to confidentially report to safety and program management officials any problems that could result in loss of life or mission capability, injury, or property damage. Participants in our discussion groups—both within NASA and in the contractor organizations—described a communication environment that is more open than the one that existed at the time of the accident. Respondents in our follow-up survey portrayed the culture as encouraging contractors and employees to discuss and, if necessary, elevate safety concerns. Discussion groups also identified multiple channels, both formal and informal, for communicating flight safety information. In some cases, these communication channels represent independent, parallel paths for assessing risk. Our own observations and analysis of NASA’s approach to dealing with a recent problem illustrated the openness with which agency officials address safety issues. 
In group discussions with key NASA and contractor shuttle managers and contractor working-level engineers, we asked them to assess conditions related to the flow of safety information to top management. All of the groups reported that the shuttle program’s organizational culture encourages people to discuss safety concerns and bring concerns to higher management if they believe the issues were not adequately addressed at lower levels. As one manager noted, because of the complexity of the shuttle program, open communication, group discussions, and the sharing of information are essential to flight and work place safety. NASA managers at the three field centers with primary responsibility for managing shuttle elements and at NASA headquarters reported having taken steps to create an organizational environment that encourages personnel at all levels to voice their views on safety to management. One manager noted that people are not afraid to surface their mistakes to management when they discover mistakes have occurred. Another manager said, “If . . . I got the idea that I had a manager in the system who wasn’t allowing their people to feel comfortable in bringing things, probably that’s the time I think I would change that person’s job because . . . our people need to feel that they can come without attribution and talk about what they need to talk about.” Managers in each group we interviewed cited various techniques they use to create an organizational environment that encourages personnel at all levels to voice their professional viewpoints on safety issues to management, even if dissenting. For example, managers invite people to express their concerns by trying to keep every line of communication open and telling people that bringing up a problem does not reflect poor performance; holding extensive dialogue over shuttle safety issues, beginning early in the problem identification stage, so that everyone fully understands the issues; encouraging people to come in or call their managers if they want to talk about a safety concern, no matter how small the issue; and not only encouraging, but expecting, open expression of professional differences at all levels. The contractor managers also described a working relationship with NASA that they believe encourages open communication and the elevation of safety concerns. They described the flow of information between NASA and shuttle contractors as continual, open, and comprehensive. From their perspective, daily contact between contractor and NASA working-level personnel contributes to the exchange of information. Contractor support to and participation in flight readiness reviews and other shuttle processing meetings, and their reporting of safety information directly into NASA’s centralized information systems are among the other mechanisms that achieve that exchange. One manager noted that the Challenger accident prompted a change in his contractor’s management approach. Before the accident, company meetings were closed to the NASA site representatives. Since the accident, NASA representatives attend all technical meetings. Managers from two other contractors said that they would not hesitate to go to the highest levels of NASA management to ensure that safety issues received appropriate attention. Contractor working-level engineers portrayed their organizations as supportive of engineers elevating shuttle safety issues and concerns to management. 
For example, at one contractor facility, program teams are structured so that minority opinions about the handling of safety problems can be elevated to a higher-level board. At another contractor facility, the work environment was described as one that encourages debate, discussion, and never keeping a safety concern quiet. At the third contractor plant, the formal reporting process ensures that NASA and contractor managers are continually apprised of issues, review how issues are resolved, and can request more work if they do not agree with the resolution of a safety issue. The managers and safety representatives who responded to our survey also gave very favorable ratings to NASA’s current communications culture. For example, 90 percent of those responding to the survey said that to a great or very great extent NASA’s organizational culture encourages civil service employees to discuss safety concerns with management. As shown in figure 2.1, more than 80 percent of the respondents to our survey rated the following current shuttle communications and information flow conditions very favorably. As part of our review, we attended numerous certification of flight readiness and prelaunch assessment reviews for shuttle mission STS-64, including the flight readiness review and launch. We observed open and candid discussions, debate of issues, and a structure that required the recording and follow-up of unresolved issues. At most reviews, presentations appeared thorough, and participants asked many probing questions to ensure they had an adequate understanding of the issues being briefed. If participants did not believe they adequately understood an issue or additional work was required to resolve an issue, it was listed as an open item to be resolved prior to launch. Managers, safety personnel, and working-level engineers described shuttle program and contractor procedures and structures that provide multiple avenues for continual communication with contractors, across centers, and with headquarters to discuss safety issues. These avenues include the certification of flight readiness process, daily telephone conferences, and weekly meetings. In response to our survey, almost all NASA program managers and safety representatives said they believe the opportunities to discuss and communicate shuttle issues and concerns meet, or even exceed, the needs of the program in terms of the number of forums held and the types and levels of expertise represented. The certification of flight readiness process requires the involvement of all centers and projects on issues that could affect safety or mission success. In preparation for a launch, NASA relies on a number of reviews to ensure that the shuttle is safe for flight. These reviews are designed to ensure that requirements have been complied with, prior problems and failures have been corrected, planned work has been completed, and operational support is in place for the mission. Managers also reported other, sometimes less formal, channels for communicating safety information. For example, the shuttle program manager holds an early morning telephone conference daily, enabling NASA managers at headquarters and the centers to discuss problems and draw upon the experience of others. The manager of launch integration also conducts a daily “noon board” telephone conference to discuss shuttle issues, status, and required changes related to vehicle processing at the Kennedy Space Center. 
Project representatives from the various shuttle centers participate if the issue involves their shuttle element. Also, NASA’s shuttle program manager chairs a weekly Program Requirements Control Board meeting that is the controlling authority for all changes to the shuttle program baseline. Safety and mission assurance engineers participate in all of these meetings. Further, NASA safety and project representatives at contractor plants help ensure a continual flow of information on contractor issues. In addition, the NASA Safety Reporting System (an anonymous reporting system) provides another opportunity for people to report safety concerns. In addition to taking part in all of the program and project reviews for the certification of flight readiness, NASA’s Office of Safety and Mission Assurance conducts prelaunch assessment reviews of all major shuttle elements. The office’s System Safety Review Panel also conducts several reviews, including a review of in-flight anomalies from previous missions. These safety office reviews are conducted independently of the project offices responsible for the various shuttle elements. Results of the safety office reviews are presented at the flight readiness review. The safety organization continues to monitor shuttle missions up to and during launch. Figure 2.2 illustrates the parallel assessments by safety and mission assurance and the shuttle program and project offices. We asked contractor working-level engineers what avenues are open to them to communicate their views in the event that they disagree with a safety decision made at higher levels of management, either within their organization or within NASA. A variety of communication routes were cited: a company ombudsman, the firm’s safety manager, NASA counterparts, higher levels of management within the contractor’s organization, and the NASA Safety Reporting System. While there was a high level of agreement that the current culture encourages and enables contractors and employees to discuss safety issues and concerns, there was not universal agreement about the kinds of risk information needed for final launch decisions. We asked NASA managers and safety representatives to designate the types of safety issues that should always be briefed in detail to corporate-level management at the final flight readiness review. Seven of the 15 types of issues we asked about were widely endorsed as needing the board’s review; however, opinions were divided in other areas. For example, the views of the board members tended to differ from those of the other managers and safety representatives regarding whether hazards and new waivers should always be briefed in detail. Opinions were also divided about the level of detail that should be provided when there are changes that affect procedures or processes involving the flight crew, operations, software, or shuttle hardware. We also observed differences in the amount of detail provided during two flight readiness reviews. At the first review, we observed that the review board’s chairman required less detail about issues and concerns than at the second review. The second review was chaired by a different official, who requested a greater level of detail about the issues being discussed. Thus, the change in personnel caused some initial confusion about the type and amount of information needed to make corporate-level launch decisions. 
To provide a better understanding of the cultural and communication path changes within NASA, we compared NASA’s handling of the motor joint issue at the time of Challenger with its handling of a recent issue concerning another joint in the solid rocket motor. On two successive flights in 1995, hot gas penetrated beyond the joint’s sealer compound and made very small singe marks on the joint’s primary O-ring. NASA was more cautious in its approach to handling the latest motor joint problem. For example, NASA immediately halted shuttle launches and publicly aired the problem. NASA held weekly press meetings to discuss the problem and progress in correcting it. Shuttle and contractor managers at all organizational levels were heavily involved in the issue, and the safety organization provided an independent assessment of the problem. NASA did not resume shuttle launches until it was confident that the problem was understood and corrected. Table 2.1 describes our observations. Some discussion group participants told us they are concerned about the impacts of continued cost reductions and planned program changes. Over the next 5 years, plans call for NASA to make significant additional reductions in shuttle costs while maintaining the capability to meet the demanding schedule for International Space Station assembly and support. Although final decisions have not been made, NASA has initiated a number of actions to further reduce shuttle operation costs, including turning shuttle operations over to a single prime contractor. Some participants in our discussion groups expressed concern about the effect of continued cost reductions and the transition to contractor management of the program. In July 1995, we reported on the schedule pressures created by the International Space Station assembly requirements. Based on our own analysis and internal NASA studies, we concluded that the shuttle’s ability to meet station launch requirements appeared questionable. To meet the station’s “assembly complete” milestone, shuttle officials had designed a very compressed launch schedule. During certain periods of the station assembly, clusters of shuttle flights are scheduled to be launched within very short time frames. For example, the schedule calls for five launches within a 6-month period in fiscal year 2000 and seven launches during a 9-month period in fiscal year 2002. Because the schedule is so compressed at times, there is very little margin for error and little flexibility to meet major contingencies, such as late delivery of station hardware or technical problems with the orbiter. We reported in June 1995 that NASA had reduced shuttle operations funding requirements by a cumulative $2.9 billion between fiscal years 1992 and 1995, comparing the fiscal year 1992 budget request with the fiscal year 1995 request. In our survey, we asked NASA managers and safety representatives what actions had been taken to accommodate the funding reductions and whether these actions, in their opinion, had enhanced, degraded, or had little or no effect on the accuracy, completeness, and timeliness of shuttle safety-related information. Generally, their assessment was that the actions either had little or no effect on quality or somewhat degraded quality. For example, of nine respondents who reported funding reductions accomplished by delaying safety improvements, six said the delay somewhat degraded the quality of safety-related information. 
However, some respondents reported that actions taken to cut costs actually enhanced the quality of information. Just over 75 percent of NASA managers and safety representatives we surveyed believed that NASA emphasized safety over shuttle schedule to a great or very great extent. Figure 2.3 illustrates NASA managers’ and safety representatives’ responses to our survey question on the extent to which program priorities place greater importance on safety than on meeting schedule. Just over 60 percent of NASA managers and safety representatives we surveyed believe that to a great or very great extent NASA emphasizes safety over reducing cost. Figure 2.4 illustrates responses to our survey question on the extent to which program priorities place greater importance on safety than on cost reduction. Contractor managers and working-level engineers also told us that past funding reductions had not affected the quality of safety-related information they develop. According to the contractor managers, reductions in the shuttle flight rate and various contractor productivity enhancements have enabled them to accommodate past personnel cuts without, they believe, sacrificing the quality of shuttle information they develop. Some working-level engineers in the group interviews cited a variety of concerns about the effects of funding reductions. For example, the engineers said (1) investigations of lower-priority issues take longer to complete because there is not enough time to devote to them, (2) keeping people with the required skill level is a concern, and (3) there is a lack of storage in automated databases to archive safety information. In addition, some engineers told us that the funding reductions have adversely impacted employee morale because people are being asked to accomplish more with fewer resources and some employees fear losing their jobs. Some engineers said, however, that although morale was lower, they did not believe it adversely affected flight safety. In November 1995, the Associate Administrator for Space Flight testified that NASA plans an additional $2.5 billion cumulative reduction from total shuttle funding requirements in fiscal years 1996 through 2000 against the fiscal year 1996 budget request. According to the Associate Administrator, the program will achieve the budget reductions through restructuring and other workforce and content reductions. Both NASA and contractor managers in our discussion groups expressed concerns about how they would cope with additional funding cuts. For example, the project managers for two contractors said that workforce reductions can impact their timeliness in responding to situations that arise. One contractor manager noted that while the company measures various indexes such as “first time quality” and overtime, it is difficult to specify the point at which additional program changes to accommodate funding cuts might reduce quality. Another contractor manager noted that at some point, funding reductions could translate into not having enough people, so that maintaining the required quality will mean continual schedule delays—a signal to the contractor that its program cannot be reduced further. Although firm estimates are not available, NASA expects to achieve significant cost savings by turning shuttle operations over to a prime contractor. The contractor would be responsible for shuttle processing and launch, but NASA would retain responsibility for making the final launch decision. 
The single prime contractor would combine many of the tasks now performed under 28 separate shuttle program contracts. Savings are expected to accrue because shuttle operations would be more efficient and require fewer civil service employees. Current plans are to award the contract by fiscal year 1997. During our discussion groups, some NASA managers expressed concern about the transition of shuttle operations to a single prime contractor. They noted that over the years NASA has assembled an expert shuttle operations team and that there are many unknowns about making a transition to a new way of doing business. For example, the safety and mission assurance organization maintains independent oversight of shuttle operations. NASA’s projections are that the quality assurance oversight role will be reduced under the single prime contractor concept of operations. Although managers expressed concern about transitioning to a single operations contractor, in response to our survey, 76 percent of the managers and safety representatives said that quality assurance inspections and reviews should be decreased. According to NASA, there will continue to be independent oversight, and the agency has plans to assure that the oversight/insight will be properly focused with the reduced level of resources expected. NASA will retain decision authority and direct oversight over work that is considered out-of-family (those events/activities that may contain a level of risk beyond the known and accepted level). In addition, NASA will retain the developmental effort for new hardware. This work will transition to the single prime contractor, but only after all the unknowns are understood by NASA. Further, NASA will return to an oversight mode whenever there is an indication that the understood level of risk has increased for any reason. The single prime contractor will be required to propose a process for performing risk assessment and to demonstrate that it is able to institute and properly manage the process. This includes the process for keeping NASA informed of issues that have the potential for increasing risk. Through our discussion groups, individual interviews, and observations, we identified several management principles related to communication and information flow that appear to guide shuttle communications. We also identified additional management principles that we believe are essential to promoting shuttle safety in the future. In our survey, we listed these principles and asked NASA managers and safety representatives to identify those guiding principles that they believe are essential to promoting shuttle program safety as NASA deals with budget constraints, associated downsizing, and restructuring in the near term, and with continuation of shuttle flights in the long term. A large percentage of managers and safety representatives we surveyed agreed that the following principles are essential to promoting shuttle safety. The organizational environment and structures for both contractor and NASA personnel encourage timely, open discussion and debate to ensure managers have the benefit of all relevant knowledge of shuttle program issues. Managers (civil service and contractor) stress safety over schedule and cost and foster these values among employees. The organizational environment encourages people (civil service and contractor) to elevate concerns to higher management if they believe the issues were not adequately addressed at lower levels. 
The working arrangement between NASA and contractors ensures agency managers obtain continual knowledge of problems and issues so that appropriate decisions can be made. Organizational mechanisms enable NASA corporate-level managers to carry out their decision-making responsibilities for certifying readiness for flight. NASA uses the most appropriate analytic and quantitative methods available to assess shuttle risks and conducts sufficient assessments and reviews to carry out the agency’s oversight of shuttle work processes. Management information systems, including databases, are accessible, accurate, complete, and timely for shuttle program oversight and decision-making. The NASA environment is a self-evaluative one that monitors its effectiveness in communication and information flow and seeks ways to improve it. Some NASA managers offered additional principles that they believe are essential to promoting shuttle safety as NASA deals with budget constraints, downsizing, and restructuring. Management of changes in the program receives adequate attention and time to ensure that (1) program priorities are adhered to, (2) government and contractor responsibilities for the reporting and resolution of safety-related issues are clearly defined, and (3) changes to the shuttle program are appropriately evaluated before implementation. Appropriate training is conducted to ensure that personnel can effectively and efficiently carry out their work when changes in program operations, processes, and staffing occur. Morale and the working environment of employees are considered key elements in assuring a safe and quality program. Prime contractor management methods ensure the quality of subcontractor work. NASA has created an organizational culture that encourages shuttle program and contractor employees at all levels to bring safety concerns to the attention of NASA’s top management. NASA has also established policies and procedures to ensure the free flow of needed safety-related information. However, in response to our survey, some shuttle program personnel expressed concern about whether NASA might be emphasizing cost reductions over flight safety as planned budget reductions and operational changes occur. Also, in response to our survey, several types of issues were endorsed as always needing the flight readiness review board’s attention. However, opinions were divided in other areas, suggesting that managers and safety representatives may not be clear on each other’s expectations about the issues that should always be briefed. If, as is likely, the planned shuttle operations contractor assumes more of the burden of providing information to the flight readiness review, it will be important to clearly specify the type and level of detail of information to be provided. NASA has adopted certain management principles that help guide the shuttle launch decision process. These include stressing safety over schedule and cost and developing an organizational culture that encourages both contractor and NASA personnel to elevate concerns to higher management if they believe the issues were not adequately addressed at lower levels. We recommend that the Administrator of NASA identify guiding principles of good risk management, such as those contained in this chapter, and ensure that the terms and conditions of the planned shuttle operations contract reflect these principles. 
We also recommend that the Administrator take steps to ensure that flight readiness review participants understand and agree on the minimum issues that should always be discussed at the review and the level of detail that should be provided. In commenting on a draft of this report, NASA agreed with our first recommendation and stated that the agency is taking steps to implement it. According to NASA, the shuttle flight operations contract request for proposal and statement of work have been carefully reviewed, and these documents reflect the principles of good risk management described in this report. NASA said that it will ensure that the contract terms and conditions are compatible with these principles. Regarding the second recommendation, NASA said that it is appropriate and that the agency has recently completed an activity to update and clarify the roles and responsibilities of each program element and organization relative to the flight readiness review. The new procedure is to be fully implemented in support of shuttle flight STS-78 in June 1996. We made additional changes to the report, where appropriate, based on NASA’s technical comments. The National Research Council recommended in 1988 that NASA apply quantitative risk assessments to the shuttle program. However, NASA still relies primarily on qualitative methods to assess and prioritize significant shuttle risks. This approach relies heavily on the judgment of shuttle engineers to identify significant risk items that could cause loss of a shuttle or crew. Although NASA awarded a contract to develop a quantitative model of shuttle risk, known as a probabilistic risk assessment, NASA has not fully assessed the potential benefits of using the tool in routine shuttle decision-making. The agency also has not developed an overall strategy for assuring use of this method where it is appropriate. In addition, databases are not always timely, complete, accessible, or reliable enough to be used in these types of analyses. The National Research Council investigation of NASA’s risk assessment approach following the Challenger accident found that quantitative assessment methods had not been used to directly support NASA decision-making related to the space shuttle. The Council recommended that probabilistic risk assessment approaches be applied to the shuttle at the earliest possible date. It also recommended that databases be expanded to support probabilistic risk assessments, trend analysis, and other quantitative analyses and that NASA develop a statistical sciences capability to perform necessary risk assessments. Quantitative methods, such as probabilistic risk assessments, have been used in the decision-making process by other federal agencies involved in high-risk ventures. For example, the Nuclear Regulatory Commission uses probabilistic assessments in its regulation and oversight of nuclear power plants. These techniques are used to assess the safety of events at operating reactors and as an integral part of the design certification review process for advanced reactor designs. Commission officials stated they have found probabilistic risk assessments to be an effective tool for making plant-by-plant examinations to determine areas needing more emphasis, such as how long it takes a utility to respond to problems. Commission officials told us that, in their experience, probabilistic risk assessments can help identify and focus their attention on risk areas that require the most resources. 
The Environmental Protection Agency uses quantitative risk assessments to determine the health risks posed by Superfund hazardous waste sites. The agency reviews contaminated sites for investigation and cleanup. One element of the investigation is a baseline risk assessment—an evaluation of the current or potential threat to human health. The evaluation establishes probabilities that are used to decide whether a site requires cleanup. For example, if the risk of humans developing cancer from site chemicals is greater than 1 in 10,000, Environmental Protection Agency policy requires that the site be cleaned up. NASA pointed out that it is important to make a clear distinction between quantitative risk assessments in general and the specific probabilistic risk assessment method when determining the value of applying these methods to space hardware issues. NASA said it recognized that probabilistic risk assessments had proven valuable at the Nuclear Regulatory Commission and the Environmental Protection Agency. However, it said this method did not have comparable utility at NASA. Reactor design and certification risk assessments are based on failure rates compiled from hundreds of plants and facilities, while the shuttle has significantly less hard data available to quantify risk. In addition, NASA said the public health risk posed by nuclear power plant accidents or toxic waste sites argues for a multimillion-dollar investment in risk assessment that can span years of analysis. In contrast, according to NASA, most shuttle risk issues must be resolved in a shorter time frame. In response to the Council’s interim report, NASA began taking tentative steps toward the use of probabilistic analysis by initiating contractor trial probabilistic risk assessments of some shuttle elements. In parallel with this, NASA began developing a procedure to prioritize the shuttle’s highest-risk elements. This proposed technique would lend itself to the incorporation of quantitative measures of risk and probabilities of occurrence as these measures were developed. NASA planned to assess the benefits and applicability of this method to the shuttle risk management process based on the results of the contractor studies. A former Associate Administrator for Safety and Mission Assurance indicated that he would personally evaluate the probabilistic risk assessment technique and develop a strategy for introducing it throughout NASA. However, the strategy has not yet been developed. Regarding its databases, NASA responded by developing a centralized database designed to improve the quality of information by providing an integrated view of the status of shuttle problems in near real time. The Council recommended that development of this system be given a high priority. NASA developed a database to provide information, but, as discussed in other sections of this chapter, that database has limitations. NASA officials told us that while some progress has been made, the use of probabilistic methods has not reached a mature state at NASA. NASA has made limited use of probabilistic risk assessments of the shuttle, including proof-of-concept studies, assessment of some specific shuttle systems, and required assessments of accident probabilities for launches involving radioactive material. 
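To illustrate the mechanics these agencies rely on, the sketch below works through the three basic questions of a probabilistic risk assessment for two hypothetical scenarios and applies an EPA-style screening threshold of 1 in 10,000. The scenario names and numbers are our own inventions, not agency data.

```python
# Our illustration of a probabilistic risk assessment's three questions:
# (1) what could go wrong, (2) how likely is it, (3) what are the consequences.
# Scenarios and numbers are hypothetical, not NASA or EPA data.
scenarios = [
    # (what could go wrong, estimated probability, consequence)
    ("groundwater contamination reaches wells", 3e-4, "excess cancer risk"),
    ("contained surface exposure",              2e-5, "excess cancer risk"),
]

CLEANUP_THRESHOLD = 1e-4  # EPA-style screening level: 1 in 10,000

for event, probability, consequence in scenarios:
    decision = ("cleanup required" if probability > CLEANUP_THRESHOLD
                else "below screening threshold")
    print(f"{event}: p = {probability:.0e}, "
          f"consequence = {consequence} -> {decision}")
```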
A 1994 survey of probabilistic methods used in structural design, which included some shuttle projects, found that there is no agreed-upon approach across centers for preferred methods, practices, or software and that the various quantitative tools have not been fully examined, evaluated, and accepted by NASA centers. In early 1994, the NASA Administrator and the Office of Space Flight concluded that a probabilistic assessment of shuttle risk was needed to guide safety improvement decision-making. According to a safety official, NASA contracted with Science Applications International Corporation in January 1994 to conduct a probabilistic risk assessment of the space shuttle. This was the first assessment to include a complete shuttle mission. The contractor was required to develop and apply a risk model of the shuttle during flight and to quantify in-flight safety risk. The analysis was to identify, quantify, and prioritize risk contributors for the shuttle. According to the model’s author, secondary objectives were to provide a vehicle for introducing and transferring probabilistic risk assessment technology to NASA, and to demonstrate the value of the technology. The model was completed in April 1995. According to the contractor who developed the probabilistic risk assessment model, the model could be a useful tool in NASA’s management of shuttle risks. For example, the model might be used to establish realistic cost objectives for redesigning the high-risk components, helping to assure that limited resources are focused toward solving those problems that will have the most impact on safety. The National Research Council also noted that a detailed quantitative risk assessment provides decisionmakers with a better basis for managing risk. An internal shuttle program survey of managers, safety experts, and senior engineers revealed mixed reactions to the model. Although generally positive, respondents cited some concerns. For example, some respondents commented that more use of actual failure data would have benefited the analysis and that some assumptions used were debatable. Some found fault with the excessive use of expert opinion and the lack of thoroughness in delineating certain assumptions. Following the survey, the Deputy Associate Administrator for Space Flight informed shuttle and safety managers that they should feel free to use the report and model as a “limited tool in the risk management tool box.” According to some NASA safety officials, the model has not been routinely used by NASA personnel as a risk assessment tool because officials are still evaluating the utility of the model and barriers exist to its use by NASA employees. For example, there is no instruction manual for using the model, and it requires use of contractor-owned software. According to safety officials, NASA does not have current copies of the required software, and the older, inadequate versions it has are limited in number. According to these officials, only one NASA employee has been able to use the model on a NASA computer using the older software. In addition, no firm decisions have been made regarding maintenance and update of the model to reflect shuttle changes, such as the super lightweight external tank. Safety officials stated they are continuing to assess the model to determine its utility within NASA. NASA project and safety officials compile a list of significant shuttle risk issues for each project to target resources and manage risk reduction efforts. 
Only risks that can be reduced by incorporating hardware or procedure modifications are included in the assessment. According to NASA’s April 1995 shuttle safety risk ranking methodology guidance, the source of risk information currently used in the rankings is qualitative, and the process ranks catastrophic events by judgmentally derived prioritization matrices. The guidelines state that many comparisons of catastrophic events could be made but are sometimes subjective and emotional and rely on different techniques. A complete probabilistic risk assessment would be the most desirable analysis, according to the guidelines, but probabilistic analyses are labor-intensive efforts that require many system experts, a complete understanding of the methodology, and proper management of the effort. NASA has made limited progress in adopting the National Research Council’s recommendations that the agency assess risk with quantitative methods, such as probabilistic risk assessments. NASA uses a variety of methods to assess shuttle risk issues, and efforts are underway to increase the use of quantitative methods. Qualitative methods are still widely used when risk issues are thought to be well understood. NASA has made limited use of the classical probabilistic risk assessment method of analysis. Cost, lack of specific expertise, and lack of data are the reasons cited for limited use. According to shuttle and safety managers, lack of a strategy for incorporating the methods into decision-making processes has impeded NASA’s progress in adopting the National Research Council’s recommendations on risk assessments. Also, insufficient expertise exists at NASA to conduct specific quantitative analyses, such as probabilistic risk assessments. NASA project and safety officials told us that progress in implementing quantitative risk assessment methods has been impeded because NASA does not have a working strategy for formalizing these methods for the shuttle program. Such a strategy would include clear and measurable goals, resource requirements, assessments of current utilization and skills within NASA, and training needs, including the need to learn by doing selected projects. Without this focus, projects and safety organizations are skeptical about the costs and benefits of using the probabilistic risk assessment model. Project and safety officials at several centers expressed concerns about the applicability of probabilistic risk assessments to the shuttle program. While officials stated they recognize probabilistic risk assessments could be used as an effective additional tool to assess risk, they see a need for more training on the methodology and the need to learn by doing selected projects. Several stated they do not have the resources needed for this type of analysis and are already stretched just to operate their programs. Several officials stated they believe there is a lack of trust in the probabilistic risk assessment method because people do not understand it. Many officials expressed concern about the complexity of the shuttle probabilistic risk assessment model, the lack of good data, and the dependence upon the contractor to make needed changes to the model. Several officials commented that NASA needs a “champion” at headquarters to provide a focused effort to emphasize use of these tools when appropriate. NASA headquarters safety and mission assurance officials stated they have not developed a master plan for formalizing quantitative techniques within NASA or made the progress they would like in this area. 
However, steps are being taken to address several of the concerns expressed by project and safety officials at the centers. For example, training courses in risk management and assessment are being planned that will be offered to safety and other NASA personnel. Reference manuals on data sources and risk assessment techniques are under contract. According to NASA safety officials, the first effort to develop these types of documents began in 1989 but was unsuccessful, and the documents were not published. However, NASA has established a coordination committee to develop a standard, comprehensive approach for introducing probabilistic structural design methods that can be used in the shuttle program. NASA is also trying to give this issue visibility as the agency plans to move to a single prime contractor and to assure that the statement of work contains provisions that the contractor use quantitative risk assessment techniques where appropriate. According to the National Research Council, decisionmakers within NASA must be supported by people skilled in the statistical sciences to aid in the transformation of complex data into useful information. The Council recommended that NASA develop a staff of experts in these areas to provide improved analytical support for risk management. NASA officials at several centers and at NASA Headquarters told us they lack sufficient personnel with these skills, and in one case, a center lost needed contractor skills, which caused the delay or termination of a planned analytical project. A 1994 NASA survey of probabilistic methods used in structural design work found that a wide variance of knowledge exists at the centers and that a majority of working-level engineers are not familiar with and do not use probabilistic methods. Another factor that has hindered development of quantitative methods of risk assessment is that NASA’s databases do not always provide timely, accessible, accurate, and complete information. A large percentage of managers and safety representatives we surveyed believe that NASA should provide management information systems, including databases, that are accessible, accurate, complete, and timely for shuttle program oversight and decision-making. However, more than half assessed NASA’s current management information systems as needing improvement. NASA has developed automated database systems to provide shuttle data used in decision-making. One system, called the Program Compliance Assurance and Status System, is a central database designed to integrate existing data, such as in-flight anomalies, from various sources in the program. Another system, the Problem Reporting and Corrective Action System, provides data to the central system and is designed to document and track problems in the program. According to NASA officials, the Program Compliance Assurance and Status System is neither timely nor fully utilized. The system is cumbersome to use because it is based on older technology, some trend and other data are not centralized in the system, and software needed to convert contractor data to NASA database format has not been developed. Program officials told us they maintain trends on some aspects of the shuttle program but have found the centralized system to be difficult to use and not compatible with other existing databases. The officials stated that the required conversion programs have never been developed to input some contractor data into the system. 
In some cases, safety officials must obtain data directly from contractors to conduct quantitative risk assessments. Because the system is hard to use in real time and the data are not always current, some officials stated they are using a different software program with faster computers to access and correlate data more rapidly. A January 1995 internal report on shuttle problem reporting system data integrity at two centers found missing criticality codes on thousands of entries. Blank entries could, therefore, be interpreted as either not applicable or inadvertently omitted. A NASA Headquarters official was not aware of any corrective action on this matter. Officials told us that the Problem Reporting and Corrective Action System records are often not reliable, lack data needed for quantitative risk assessments, and lack uniformity in categorizing problems. The system also contains entries that may not meet the definition of a “real problem.” NASA safety officials acknowledged that the system needs improvement but stated no firm decision has been made regarding the extent of improvements pending the transition to a single prime contractor. NASA has made limited progress in adopting the National Research Council’s recommendation that the agency assess risk with quantitative methods, such as probabilistic risk assessments. NASA officials, for the most part, rely on qualitative methods for assessing risk in the shuttle program when they believe risk issues are well understood. Although some progress has been made, NASA lacks an overall strategy with focused management emphasis to incorporate methods, such as probabilistic risk assessments, into the shuttle program when appropriate. Resource constraints and lack of specific expertise are cited as barriers to increased use of these methods. In addition, NASA’s databases need improvement and are not fully utilized by decisionmakers, nor are they adequate to support the use of quantitative risk assessment methodologies. We recommend that the Administrator of NASA establish a strategy, to include specific milestones, for deciding whether and how quantitative methods, such as probabilistic risk assessments, might be used as a supplemental tool to assess shuttle risk. We also recommend that the Administrator assess the shuttle program’s centralized database, as well as other databases, to ensure that the data required to conduct risk assessments and inform decisionmakers are accessible, timely, accurate, and complete. NASA agreed with the need to establish a strategy, with milestones, for incorporation of quantitative risk assessment methods into the shuttle’s risk management program. According to NASA, the agency will establish a team to develop the strategy. NASA also agreed that the shuttle program’s centralized databases need to be assessed. In this regard, NASA will form a team of engineers to thoroughly examine the Program Compliance Assurance and Status System. The team will be tasked to determine the adequacy of what presently exists and make recommendations for improvements as necessary. The assessment team will report to the shuttle program manager. In addition, the Problem Reporting and Corrective Action System is being examined at each center by a reengineering team. This team is searching out deficiencies and will recommend needed improvements that must be implemented by the shuttle flight operations contractor. We made additional changes to the report, where appropriate, based on NASA’s technical comments.
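The missing-criticality-code finding illustrates the kind of defect a routine data-integrity screen could surface: if blank fields are allowed, “not applicable” cannot be distinguished from “inadvertently omitted.” The sketch below is our own illustration of such a screen; the field names and records are hypothetical, not the actual system’s schema.

```python
# Our illustration of a data-integrity screen for problem records; the
# field names and records are hypothetical, not the actual system schema.
records = [
    {"id": "P-0001", "criticality": "1",   "summary": "valve anomaly"},
    {"id": "P-0002", "criticality": "",    "summary": "wiring chafe"},
    {"id": "P-0003", "criticality": "N/A", "summary": "paperwork error"},
    {"id": "P-0004", "criticality": None,  "summary": "sensor drift"},
]

# Requiring an explicit "N/A" marker means a blank can only mean "omitted."
flagged = [r["id"] for r in records if r["criticality"] in ("", None)]
print(f"{len(flagged)} of {len(records)} records missing criticality "
      f"codes: {flagged}")
```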
Pursuant to a congressional request, GAO reviewed the National Aeronautics and Space Administration's (NASA) management of risk associated with space shuttle flights, focusing on NASA attempts to: (1) increase the flow and communication of risk information; and (2) use quantitative methods for assessing risk. GAO found that: (1) NASA has successfully created numerous formal and informal communication channels and an open organizational culture that encourages people to discuss safety concerns and to elevate unaddressed concerns to higher management levels; (2) while most personnel agreed that the current culture encourages discussions of safety concerns, there was not universal agreement about the kinds of risk information needed for final launch decisions; (3) some personnel expressed concerns about the effects of pending cost reductions and program changes on shuttle safety; (4) NASA primarily relies on qualitative methods to assess and prioritize significant shuttle risk; (5) costs, lack of expertise, and lack of data have hindered NASA progress in increasing its use of quantitative methods to assess shuttle safety risks; and (6) NASA databases do not always provide timely, accessible, accurate, and complete information to facilitate quantitative assessment or decisionmaking.
FAA’s air traffic management mission is to promote the safe, orderly, and expeditious flow of air traffic in the national airspace. To accomplish this mission, FAA employs a vast network of ATC and traffic flow management computer hardware, software, and communications equipment to (1) prevent collisions between aircraft and obstructions and (2) facilitate the efficient movement of aircraft through the air traffic system. Automated information processing and display, communication, navigation, surveillance, and weather resources permit air traffic controllers to view key information, such as aircraft location, aircraft flight plans, and prevailing weather conditions, and to communicate with pilots. These resources reside at, or are associated with, several ATC facilities—flight service stations, air traffic control towers, terminal radar approach control (TRACON) facilities, and air route traffic control centers (en route centers). These facilities’ ATC functions are described below. About 90 flight service stations provide preflight and in-flight services, such as flight plan filing and weather report updates, primarily for general aviation aircraft. Airport towers control aircraft on the ground as well as aircraft within about 4 nautical miles of the airport before landing and after takeoff. Air traffic controllers rely on a combination of technology and visual surveillance to direct aircraft departures and approaches, maintain safe distances between aircraft, and communicate weather-related information, clearances, and other instructions to pilots and other personnel. Approximately 180 TRACONs sequence and separate aircraft as they approach and leave busy airports, beginning about 4 nautical miles and ending about 50 nautical miles from the airport, where en route centers’ control begins. Twenty en route centers control planes over the continental United States in transit and during approaches to some airports. Each en route center handles a different region of airspace, passing control from one to another as respective borders are reached until the aircraft reaches TRACON airspace. En route center-controlled airspace usually extends above 18,000 feet for commercial aircraft. Two en route centers—Oakland and New York—also control aircraft over the ocean. Controlling aircraft over oceans is radically different from controlling aircraft over land because radar surveillance only extends 175 to 225 miles offshore. Beyond the radars’ coverage, controllers must rely on periodic radio communications through a third party—Aeronautical Radio Incorporated (ARINC), a private organization funded by the airlines and FAA to operate radio stations—to determine aircraft locations. See figure 1 for a visual summary of the processes for controlling aircraft over the continental United States and oceans. Although en route centers’ specific hardware and software configurations may differ slightly, the centers rely on over 50 systems to perform mission-critical information processing and display, navigation, surveillance, communications, and weather functions. Examples include the systems that display aircraft situation data for air traffic controllers, the system that collects data from various weather sources and distributes them to weather terminals, radars for aircraft surveillance, radars for wind and precipitation detection, ground-to-ground and ground-to-air communication systems, and systems that back up primary systems. 
(See appendix II for a simplified block diagram of an en route center’s systems environment.) DCC is one of the 50-plus en route center systems. DCC runs on 1960s-vintage IBM 9020E mainframe computers, and its software is written in two languages, assembly and JOVIAL. It is used at 5 of the 20 en route centers. (See figure 2 for the locations of the 20 en route centers and identification of the five that are DCC-equipped.) DCC’s purpose is to accept data from the Host Computer System (HCS) and process it to form the alphanumeric, symbolic, and map data that appear for air traffic controllers on their Plan View Displays (PVD). (See figure 3 for a simplified block diagram of DCC and the en route center systems with which it interfaces.) In response to expected increases in the frequency and severity of DCC problems and the possibility of delays in the system intended to permanently replace DCC as well as other en route display-related systems, FAA awarded a roughly $30 million contract in September 1994 for development of “a single, deployment-ready” interim replacement (i.e., DCCR) unit. FAA officials characterized this development effort as an “insurance policy” to protect FAA against delays in the permanent replacement, then called the Initial Sector Suite System (ISSS) and now called the Display System Replacement (DSR). (See appendix III for more information on DSR.) In July 1995, following a flurry of DCC problems and outages and known delays with ISSS, FAA decided that there was an urgent and compelling need to replace DCC at all five DCC-equipped en route centers in the interim before DSR is ready. In making such capital investment decisions, FAA uses four criteria: sponsor (i.e., user) support; mission importance; information technology architectural conformance and maturity; and cost-effectiveness. Each criterion carries a standard weighting factor that is to be consistently applied to all proposed projects. (See figure 4 for these weighting factors.) According to DCCR documentation and FAA officials, sponsor support and mission need (i.e., aviation safety) drove the July 1995 decision to produce and deploy DCCR. In particular, FAA’s Air Traffic Services organization, the Air Traffic Controllers Association, and the Air Transport Association strongly endorsed DCCR. Also, FAA officials told us that extensive media attention to the DCC outages, considerable congressional interest, and public safety concern were major considerations. For example, one official stated that FAA was “taking too much heat in the papers for DCC outages and wanted DCCR to solve the problem.” In FAA’s view, the need to quickly replace DCC was urgent and compelling, and DCCR was the only practical alternative for sustaining safe, orderly, and efficient air traffic services in the near term. FAA considered two cost estimates in analyzing DCCR’s costs versus benefits. However, the results of this analysis were inconclusive, and according to FAA officials, were not relevant to the decision to produce and deploy DCCR because of the urgent need to replace DCC. One of the cost estimates was done by the DCCR project office and the other by the program analysis and operations research office. Using the two cost estimates, FAA analyzed three DCCR life expectancy scenarios. Under the “most likely” scenario, the project office’s cost estimate produced a DCCR net present value of negative $37 million and a benefit-to-cost ratio of 0.7 to 1.
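Taken together, these two figures pin down the discounted totals behind them. A minimal derivation, assuming the standard definitions of net present value and the benefit-to-cost ratio (an assumption; the report does not reproduce FAA's computation):

```latex
% B and C denote the present values of DCCR's benefit and cost streams.
% Standard definitions assumed; FAA's actual computation is not shown.
\[
  \mathrm{NPV} = B - C, \qquad \mathrm{BCR} = \frac{B}{C}
\]
% The project office's figures, NPV = -$37 million and BCR = 0.7 to 1,
% jointly imply:
\[
  B = 0.7\,C, \qquad 0.7\,C - C = -37
  \;\Longrightarrow\; C \approx \$123 \text{ million}, \quad B \approx \$86 \text{ million}
\]
```

The same algebra applies to any NPV and BCR pair, which is why the two measures always agree on the sign of the result: a ratio below 1 to 1 corresponds to a negative net present value, and a ratio above 1 to 1 to a positive one.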
In contrast, the program analysis and operations research office’s lower cost estimate under the same scenario placed these values at $29 million and 1.4 to 1, respectively. Neither estimate considered maintenance costs. Given the expense of DCC maintenance, including these costs would likely have made DCCR more cost-effective under both estimates. While FAA officials agreed with our assessment of the impact of including maintenance costs, they did not quantify this impact. The month following its July 1995 decision, FAA awarded a roughly $34 million contract to produce five DCCR systems and has publicly committed to having the first site operational in October 1997 and the fifth and last site in February 1998. (See figure 5 for the respective sites’ publicly announced delivery and operational readiness demonstration dates.) DCCR’s installation and operation will not change the air traffic controllers’ current system interface and thus will be transparent to them. DCCR consists of two components—the Display Channel Rehost Processor (DCRP) and the Display Controller and Switch (DC&S). DCRP will use a commercial, off-the-shelf IBM processor to execute about 120,000 lines of rehosted DCC code and 60,000 lines of new code. The primary contractor is developing the new code, and a subcontractor is rehosting the DCC code. DC&S uses custom-developed hardware and about 65,000 lines of new code implemented in firmware to perform keyboard, trackball, and display control functions. (See figure 6 for a simplified block diagram of DCCR and the systems with which it interfaces.) According to FAA, a major system outage is one that significantly delays air travel or produces significant media interest. Most of the recent major system outages at the five DCC-equipped centers have been DCC-related. Our analysis of FAA major outage data from September 1994 through May 1996 at the Chicago, Dallas-Ft. Worth, New York, Washington, and Cleveland en route centers showed that DCC accounted for 10 of the 21 outages, or about 48 percent. Moreover, these DCC outages were responsible for 195 of 225 hours, or about 87 percent, of unscheduled system downtime at these centers during this time. (See figures 7 and 8.) System availability is defined as the time that a system is operating satisfactorily, expressed as a percentage of the time the system is required to be operational. FAA has specified a DCC system availability requirement of 99.9 percent. DCC exceeded that requirement from fiscal year 1990 through 1993, but failed to meet it in fiscal years 1994 and 1995, with availability of 99.83 and 99.81 percent, or 0.07 and 0.09 percentage points below the requirement, respectively. (See figure 9.) According to FAA officials, DCC’s acceptable history of availability has been attained through the extraordinarily hard work, commitment, and ingenuity of its highly skilled, but small, workforce of technicians. For example, to obtain replacement circuit boards for the 9020E, which is out of production, FAA officials told us that technicians scavenged parts from a computer used by the FAA Air Traffic Training Academy and cannibalized parts from two scrapped computers at the FAA Supply Depot. Two factors determine a system’s availability—the frequency of unscheduled outages and the time to recover from each outage (i.e., mean time to restore or MTTR).
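The arithmetic connecting these two factors to availability can be sketched as follows. This is a minimal illustration, assuming round-the-clock required operation and treating unscheduled downtime as outages multiplied by MTTR; the outage counts and MTTR values are hypothetical, not FAA's actual figures.

```python
# Minimal sketch of the availability arithmetic: required hours minus
# unscheduled downtime (outages x MTTR), divided by required hours.
# Outage counts and MTTR values below are hypothetical.

def availability(required_hours: float, outages: int, mttr_hours: float) -> float:
    """Fraction of required time the system is up."""
    downtime = outages * mttr_hours
    return (required_hours - downtime) / required_hours

HOURS_PER_YEAR = 24 * 365  # round-the-clock operation assumed

# With 34 outages a year, a 15-minute MTTR keeps availability above the
# 99.9 percent requirement; a 45-minute MTTR does not.
print(f"{availability(HOURS_PER_YEAR, 34, 0.25):.4%}")  # 99.9030%
print(f"{availability(HOURS_PER_YEAR, 34, 0.75):.4%}")  # 99.7089%
```

Growth in either factor erodes availability, which is why projected MTTR growth matters even if the annual outage count holds steady.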
According to FAA data, the number of DCC outages annually has increased by about 55 percent since calendar year 1990, from 22 to 34, and FAA predicts that this number will hold relatively steady through calendar year 2000. (See figure 10.) In contrast, the DCC MTTR grew by over 434 percent in calendar years 1994 and 1995 compared with previous years, and FAA predicts that DCC MTTR will grow at an average annual rate of 13 percent through the year 2000. (See figure 11.) FAA attributes increasing MTTR to depleted inventories of out-of-production DCC spare parts and a shortage of experienced DCC repair technicians. Decreases in DCC availability will result in costly delays for airlines and passengers. Thus far, FAA has made good progress on its DCCR acquisition, but much remains to be accomplished. To FAA’s credit, the fourth and final software build has completed integration testing, and some formal system-level test and demonstration activities have occurred. However, the number of software defects being found is slightly higher than projections, and despite the fact that FAA’s defect fix rate has kept pace with the higher numbers and its DCCR defect trend lines are favorable when considering defect severity, unresolved defects delayed the start of concurrent system-level testing at the Technical Center and the first site by several weeks. Notwithstanding this delay, DCCR’s operational readiness date may be accelerated by several more months if FAA is successful in conducting system acceptance and operational tests concurrently. Also to FAA’s credit, it has prudently made formal risk management and quality assurance integral components of the acquisition. However, two risks associated with concurrent test plans are not being formally addressed—managing contention for limited test staff among three concurrent test activities, and controlling and synchronizing changes to three DCCR system test configurations. DCCR involves both converting and migrating existing code written for DCC’s IBM 9020E platform and writing new code. In total, DCCR consists of about 245,000 lines of code—120,000 lines of rehosted DCC code (of which about 20,000 are modified and 100,000 are unchanged) and 125,000 lines of new code. Of this newly developed software, about 60,000 lines of code relate to the DCRP component of DCCR and 65,000 lines relate to the DC&S component. To FAA’s credit, it has thus far completed the fourth and final DCRP software build as well as formal software integration and software testing activities. Also, the DC&S subcontractor has completed formal installation and integration testing of the DC&S firmware, and DC&S has been accepted by the DCCR prime contractor. In addition, a demonstration of DCCR was held on May 1, 1996, for the FAA Deputy Administrator, and formal system-level testing of the initial version of DCCR was completed on September 24, 1996, 2 months ahead of schedule. One measure of software quality is the severity and density (number per one thousand lines of code) of software errors or defects. Defects are managed by (1) documenting them via program trouble reports (PTR) when they are discovered and submitting them to a change control board, (2) determining whether they are valid, (3) assigning valid PTRs a priority on the basis of severity, and (4) resolving the valid PTRs and closing them. DCCR’s severity categories are emergency, test critical, high, medium, and low.
According to the DCCR prime contractor, an emergency PTR causes test progress to stop and requires an immediate resolution in the form of a fix or an adequate workaround; a test critical PTR severely impedes test progress, and resolution is required prior to the next scheduled accumulation and reporting of valid PTRs; a high PTR must be resolved before an integration and test activity is completed; a medium PTR is a significant system or application problem, but it does not require resolution for integration and test completion; and a low PTR is a minor or insignificant system or application problem that does not require resolution for integration and test completion. One way to gauge progress in the software maturation process is to compare the number of defects found with the number of defects projected. These projections are normally made on the basis of models that consider defect experience on like or similar software development efforts. In the case of DCCR, the actual number of cumulative PTRs discovered is slightly higher than projected. (See figure 12.) Specifically, as of July 1996, actual cumulative defects were about 17 percent over expectations. Considering the possibility of variability in model results as well as FAA’s track record during this same period in “working-off” defects at a pace consistent with defect discovery, we see no cause for alarm at this time. Another measure of software maturation is the trend in the number of open (i.e., unresolved) PTRs over time adjusted for the PTRs’ severity mix. Using a simple weighting scale of one through five, which corresponds to the DCCR PTR severity categories, we analyzed the change in open PTRs from March 1996 through August 1996 and found a downward trend. (See figure 13.) According to software engineering guidance, a downward slope over time is ideal. According to the publicly announced DCCR schedule, the first site is to be operationally ready in October 1997. However, on the basis of our analysis of DCCR plans, contractual terms, completed activities, and discussions with project officials, DCCR could be operationally ready as early as December 1996, 10 months ahead of schedule. Currently, DCCR’s development is about 4 months ahead of the published schedule, having completed DCRP software build four integration and testing as well as formal testing of the initial version of DCCR earlier than planned. An additional 6 months may be saved if FAA Technical Center acceptance testing, FAA Technical Center operational test and evaluation, and first site acceptance testing can be successfully accomplished concurrently, as FAA plans for DCCR, rather than sequentially, as is normally the case. Concurrent testing will be successful, however, only if the software has no significant problems. With respect to DCCR’s financial status, FAA estimates DCCR’s project cost to be about $64 million, of which $48 million is for contract costs and $16 million is for other project-related activities, such as support contractors, field support, and training. Of the $48 million for contract costs, spending plans show that as of July 1996, $31.8 million was to be spent; and of the $16 million for other activities, obligation plans show that $12.4 million was to be obligated by the end of fiscal year 1996. On the basis of the latest monthly contractor reports, cumulative contract costs through July 19, 1996, are $29.1 million, which is about $2.7 million below spending plans.
However, these cost reports have not been independently verified by FAA or its support contractors, although such verification is FAA’s normal practice on large contracts. Project officials stated that other, more costly contracts, such as DSR development and deployment, are consuming cost verification resources. On the basis of FAA internal financial management system reports, cumulative obligations for other project-related activities through July 31, 1996, are $11.5 million, which is about $600,000 under the obligation plan with only 2 months left in the plan period. However, these obligation figures are not complete because neither the planned nor actual obligations include all project-related activities, such as FAA personnel compensation, benefits, and travel. Acquisition of software-intensive systems, like DCCR, is inherently risky. Best practices used in government and private sector acquisition and development activities include the use of formal risk management to proactively and continually identify, assess, track, control, and report risks. Carnegie Mellon University’s Software Engineering Institute recommends a joint contractor/government risk management approach in its guide Team Risk Management: A New Model for Customer Supplier Relationships. For DCCR, FAA and the prime contractor have a formal, collaborative risk management process that includes a risk management plan and an operational process that is consistent with this plan. They maintain a single “risk watch list” that is updated periodically on the basis of the joint FAA/contractor risk management team’s biweekly evaluation of risk information sheets submitted by FAA or contractor staff. The team assigns a severity category to each risk (high, medium, or low), develops a mitigation strategy for each risk, and tracks and reports on the strategies’ implementation. Currently, the DCCR risk watch list contains four low risks: (1) contention for FAA Technical Center laboratory resources (facilities and systems) during concurrent test activities, (2) costly updates to the DC&S firmware to correct latent errors after it is delivered to the Technical Center and the five en route centers, (3) yet-to-be-tested ability of a fully integrated DCCR to meet system-level performance parameters, and (4) increased system maintenance time, and thus system downtime, due to the lack of a remote monitoring and maintenance connection to each site’s DCRP component. The watch list also contains one medium risk, which is lack of DCCR training course materials and actual training before DCCR’s operational readiness demonstration date. A quality assurance program exists to ensure that (1) products and processes fully satisfy established standards and procedures and (2) any deficiencies in the product, process, or their associated standards are swiftly brought to management’s attention. The quality assurance plan is the centerpiece of an effective quality assurance program. The plan defines the activities necessary to ensure that software development processes and products conform to applicable requirements and standards. To encourage and protect its objectivity and candor, the quality assurance group should be organizationally independent of project management (i.e., have an independent reporting line to senior managers). Both FAA and the DCCR prime contractor have implemented quality assurance programs. The FAA Quality Reliability Officer, who is independent of the DCCR project office, has been actively monitoring contractor performance.
Quality assurance activities performed thus far include preparing a quality assurance plan, auditing the hardware manufacturing process, monitoring project office software peer reviews, monitoring software inspections and walkthroughs, and monitoring the contractor’s configuration management activities. Throughout a system’s development cycle, various types of test activities occur that incrementally build on earlier tests and progressively reveal more and more about the system’s ability to meet specified functional, performance, and interface requirements. Early test activities focus on smaller system components, such as software strings and modules, and later tests address integrated software modules, eventually building toward different types of system-level test and evaluation activities. As such, each increment of tests is designed to sequentially test for and disclose different information about the system’s ability to perform as intended. Under FAA’s normal progression of system-level testing, Technical Center acceptance testing would occur first, followed by Technical Center operational test and evaluation and then by first site acceptance testing. According to FAA test officials familiar with DCCR, some overlap between the conclusion of one of these tests and the beginning of another of these tests in sequence is normal. However, the degree of overlap occurring on DCCR, which is complete concurrency of all three tests, is unusual. FAA plans to concurrently conduct Technical Center acceptance tests, Technical Center operational test and evaluation, and the first site acceptance test as a way of saving time and thus implementing DCCR sooner. This approach assumes that no significant problems will arise during the test activities. According to project officials, this should be the case for DCCR because, in their opinion, (1) the system is virtually free of material defects and thus is mature, (2) FAA has experience with the DCCR commercial hardware, which is similar to that being used on another operational en route system (Peripheral Adapter Module Replacement Item), and (3) DCCR provides the same functionality as DCC. Test concurrency, particularly the 100 percent overlap planned by FAA, carries with it additional risks that must be managed closely and carefully. For example, concurrency will increase contention for test resources, in particular Technical Center system and human resources. Also, concurrency introduces the possibility of problems being found and corrected independently during the different test activities, resulting in more than one baseline test configuration. Should this occur, the results of testing activities could be meaningless. DCCR project and contractor officials acknowledged both risk items. However, FAA is formally managing only contention for Technical Center system resources during testing as part of its risk management program. According to FAA officials, both (1) contention for Technical Center human resources during testing and (2) test baseline change control are being managed informally and outside the framework of the formal risk management program. By not formally managing these risks, FAA is increasing the chances that they will be overlooked and adversely affect DCCR.
For example, by not formally managing the latter, FAA has not ensured that the contractor’s DCCR configuration management plan expressly defines the process for controlling changes across multiple baselines during testing, an inherently more difficult configuration management scenario than is normally encountered during single baseline system testing. Although contractor representatives described for us the process they plan to use for controlling changes over multiple baselines, the configuration management plan does not reflect this. By not having a documented configuration management process that addresses the change control complications introduced by concurrent testing, FAA is unnecessarily increasing the risk of introducing more than one test baseline configuration and thereby rendering concurrent test results meaningless. Finally, concurrent testing will save time only if no significant system problems are found. Correcting significant problems requires stopping all tests, correcting the baseline, and then restarting testing. If all tests were not stopped and restarted using the same, corrected baseline, inconsistent configurations would be tested, producing potentially meaningless results and wasted effort. DCC outages caused by old, out-of-production equipment have disrupted air traffic, producing costly airline delays as air traffic control centers must reduce traffic volumes to compensate for lost system capability. The outages are likely to become increasingly disruptive as the availability of DCC spare parts and repair technicians shrinks. FAA has thus far made good progress in its efforts to replace DCC with DCCR. Although key acquisition milestones, events, and risks remain, FAA is currently on track to deliver promised capabilities ahead of schedule and within budget. How successful FAA will ultimately be, however, depends on how effectively it performs key remaining tasks, such as system-level testing, and how effectively it manages known acquisition risks. While FAA has formal strategies and efforts underway to address some of these risks, two risks associated with upcoming concurrent system-level testing—contention for human test resources and test baseline configuration change control—are not being formally managed. As a result, FAA has no assurance that either risk will be carefully and effectively mitigated. To maximize the likelihood of delivering promised DCCR capabilities on time and within contract budgets, we recommend that you direct the FAA Administrator to ensure that (1) contention for human test resources during DCCR concurrent test activities and (2) change control over system test configuration baselines during concurrent test activities are managed as formal program risks. At a minimum, this formal risk management should include definition, implementation, and tracking of risk mitigation strategies. On September 17, 1996, we discussed a draft of this report with Department of Transportation and FAA officials, including FAA’s DCCR Deputy Project Manager and FAA’s Program Director for Airway Facilities Requirements. These officials agreed with the report’s conclusions and recommendations, and commented that both risk areas have been added to the DCCR risk watch list. Our review of the latest risk watch list confirmed that the risks are now being formally managed. This report contains recommendations to you. The head of a federal agency is required by 31 U.S.C. 720 to submit a written statement on actions taken on these recommendations.
You should send your statement to the Senate Committee on Governmental Affairs and the House Committee on Government Reform and Oversight within 60 days after the date of this report. You must also send the written statement to the House and Senate Committees on Appropriations with the agency’s first request for appropriations made over 60 days after the date of this report. We are sending copies of this letter to relevant congressional committees and subcommittees, the Director of the Office of Management and Budget, the Administrator of the Federal Aviation Administration, and other interested parties. We will send copies to others upon request. If you have questions or wish to discuss the issues in this report, please contact me at (202) 512-6412. Major contributors to this report are listed in appendix IV. Because DCCR is a critical, yet short-lived, system and because of FAA’s poor track record in acquiring ATC systems, we reviewed the DCCR acquisition. Our objectives were to determine (1) the portion of the recent major outages experienced at the five DCC-equipped en route centers that were attributable to DCC, (2) whether DCC was meeting its system availability requirement, (3) FAA’s projections of future DCC outages and availability, and (4) whether FAA was effectively managing the DCCR acquisition to ensure delivery of specified capabilities on schedule and within estimated cost. To determine what portion of recent major outages at the five DCC-equipped en route centers were attributable to DCC, we used information from a May 21, 1996, FAA report entitled Summary of Major Outages at Centers to calculate by cause the number of major outages and the amount of down time associated with these outages. We also interviewed the FAA Airway Facilities Service official who collected the data used in the report to clarify their meaning and define the term “major outage.” We did not verify the information contained in this report concerning the number and cause of the outages or the amount of downtime resulting from the outages. To determine whether DCC was meeting its system availability requirement, we collaborated with FAA to calculate DCC’s required availability using data from the system specification. We then compared required availability to DCC’s actual availability for fiscal years 1990 through 1995, which we obtained from FAA’s National Airspace Performance Analysis System. We did not verify the reliability of DCC’s actual availability data generated by the performance analysis system. To assess future DCC outages and availability, we obtained FAA projections of the number of DCC outages and the associated MTTR for these outages for calendar years 1996 through 2000, reviewed FAA’s Supportability Review of Display Channel Complex (DCC) and Computer Display Channel (CDC) (Initial Report), dated May 1995, and Supportability Review Update of Display Channel Complex (DCC) Hardware, dated March 1996, and interviewed Air Traffic Services officials responsible for these reports. We also interviewed National Transportation Safety Board officials about the findings in their Special Investigation Report, Air Traffic Control Equipment Outages, dated January 1996. 
To determine whether FAA is effectively managing the DCCR acquisition, we analyzed project and contractor documentation concerning (1) key acquisition and development process areas, such as test and evaluation, risk management, configuration management, and quality assurance, and (2) indicators of product quality, such as trends in reported defects. We also interviewed DCCR project officials and contractor representatives, and analyzed project office and contractor reports addressing progress against cost and schedule plans and budgets. We did not evaluate the reliability of the systems that produced these reports. On the basis of our analysis, we assessed the DCCR risk watch list to ensure that all significant risks were being formally managed. In support of all four objectives, we visited one of the five en route centers that is DCC-equipped to observe the system in operation and discuss with controller and maintenance technician representatives DCC functions, mission importance, and performance. We requested comments on a draft of this product from the Secretary of Transportation. On September 17, 1996, we obtained oral comments from Transportation and FAA officials. These comments have been incorporated in the report as appropriate. We performed our work at FAA Headquarters in Washington, D.C., the FAA Technical Center in Atlantic City, New Jersey, and the Washington en route center in Leesburg, Virginia. Our work was performed from March 1996 through September 1996, in accordance with generally accepted government auditing standards.
[Appendix II table: Systems Within the En Route Center and Their Functions]
[Appendix II table, continued: Systems Within the En Route Center and Their Functions]
[Appendix II table: Systems and Facilities Outside but Interfacing With an En Route Center, including the Automated Flight Service Station Workstation and Air Route Surveillance Radars 1 through 4]
[Appendix II: Explanatory Notes to Simplified Block Diagram of an En Route Center’s Systems Environment]
FAA’s Display System Replacement (DSR) is precisely what its name suggests—a system to replace air traffic controllers’ existing display-related systems in each of the en route centers, including PVDs, channel complexes (i.e., DCC, DCCR, and CDC), multiplexors, display generators, and various other peripheral devices. Accordingly, DSR consists of controller workstations connected via a local area network to three interfacing systems (HCS, EDARC, and the Weather and Radar Processor). While providing controllers a modern ATC system interface (i.e., an aircraft situation monitor), DSR is not intended to introduce new situation data, images, displays, or functions. Thus, FAA anticipates that DSR will minimally impact how ATC operations are performed. However, DSR is expected to provide significant improvements in display system reliability (via fault-tolerant software and redundant hardware and networks), maintainability (via high-level application languages and integrated monitoring and control functions), and expandability (via an open system architecture). FAA currently plans to deploy DSR to all 20 en route centers in the continental United States, as well as ATC facilities in Anchorage and potentially in Honolulu. According to FAA’s Air Traffic Systems Development Status Report dated June 1996, DSR’s project cost estimate is about $1.06 billion, and as of May 31, 1996, $379 million had been obligated.
The operational readiness date for the first site (Seattle) is October 1998; for the last site (Anchorage), it is June 2000.
Major Contributors to This Report: Harold J. Brumm, Economist
GAO reviewed the Federal Aviation Administration's (FAA) Display Channel Complex Rehost (DCCR) project, intended as an interim replacement of the Display Channel Complex (DCC), focusing on: (1) recent outages caused by DCC; (2) whether DCC was meeting its system availability requirement; (3) FAA's projections of future DCC outages and availability; and (4) whether FAA was effectively managing the DCCR acquisition to ensure delivery of specified capabilities on schedule and within estimated cost. GAO found that: (1) DCC, built and deployed over 30 years ago, is critical to FAA's ability to display aircraft situational data in five of FAA's 20 air route traffic control centers; (2) DCC is also responsible for most of the major outages at the five centers from September 1994 through May 1996, accounting for about 48 percent of the total number of major outages and nearly 87 percent of unscheduled system downtime associated with these outages; (3) according to FAA, DCC was able to exceed its availability requirement from fiscal year (FY) 1990 to 1993, on average at the five centers, because of heroic maintenance efforts using "chewing gum and chicken wire"; (4) however, it fell slightly short of the requirement in FY 1994 and 1995, and FAA expects availability to decrease further because of shortages of spare parts and experienced DCC technicians; (5) decreases in DCC availability will result in costly delays for airlines and passengers; (6) FAA has made good progress in acquiring DCCR, but much remains to be accomplished; (7) thus far, the fourth and final DCCR software build is complete, and the number of reported software defects, while cumulatively slightly higher than projections, is showing a favorable trend when adjusted for defect severity; (8) also, FAA is ahead of schedule in completing informal system-level tests, formal testing is generally on schedule, and the first site is ready to begin the system acceptance process; (9) DCCR's development has benefited from formal risk management and quality assurance programs, and FAA has plans in place to accelerate completion of formal system-level tests; (10) contractor financial reports show that DCCR is under spending estimates; (11) in light of its progress to date, FAA has an opportunity to deliver promised DCCR capabilities on time and within contract budgets; (12) the likelihood of doing so can be increased, however, by acting to mitigate two known risks associated with remaining development activities; (13) specifically, FAA's test plans call for conducting three system-level tests concurrently rather than sequentially, as is normally done; (14) by doing so, FAA expects to implement DCCR early; (15) however, FAA is not formally managing two risks associated with DCCR concurrent testing, which are: (a) staffing three test activities at the same time and thus potentially spreading test personnel too thin; and (b) not defining how it will control and synchronize changes to three system test configurations so as to prevent configuration differences among the three during testing; and (16) by formally managing these risks, FAA will greatly reduce the chances of their impeding future DCCR progress.
With many hospitals, outpatient clinics, domiciliaries, and nursing homes, VA is one of the largest direct-delivery health care systems in the country. In fiscal year 1997, VA received a medical care appropriation of about $17 billion to provide inpatient, outpatient, nursing home, and domiciliary services to 2.6 million of the nation’s 26 million veterans. VA services include care to veterans with special needs such as spinal cord dysfunction, blindness, post-traumatic stress disorder, substance abuse, and serious mental illness. In 1995, VA shifted management authority from its headquarters to new regional management structures—Veterans Integrated Service Networks (VISNs). VA created 22 VISNs, each led by a director and a small staff of medical, budget, and administrative officials. (See fig. 1 for a map of the VISNs.) The VISNs have been configured around historic referral patterns to VA’s tertiary care medical centers. These networks have substantial operational autonomy and now perform the basic decision-making and budgetary duties of the VA health care system. The network office in each VISN oversees the operations of the medical centers in its area and allocates funds to each of them. VISNs vary in several ways, including geographic size, ranging from about 10,000 square miles in VISN 3 (Bronx) to 885,000 square miles in VISN 20 (Portland); the number of hospitals in each, ranging from 5 in VISN 5 (Baltimore) and VISN 10 (Cincinnati) to 11 in VISN 4 (Pittsburgh); and the extent of services provided, reflecting, for example, historically longer inpatient and nursing home stays in the Northeast. When VA reorganized its health care system into 22 VISNs, it gave network and medical center directors the authority to realign services to increase efficiency and improve access. One aspect of VA’s reorganization was establishing two incentives to encourage network and medical center directors to reach these objectives. First, the Veterans Health Administration (VHA) established organizationwide goals for improving efficiency and access and created performance measures to hold network directors accountable for achieving them. Second, it implemented the Veterans Equitable Resource Allocation (VERA) system, a new workload-based allocation system that encourages networks to identify and implement efficiencies and serve more veterans. The performance measures emphasize organizational priorities, such as increasing outpatient surgeries and reducing inpatient care, and they enable VA to gauge each network’s performance. VA has incorporated these measures into each network director’s performance contract and required each VISN to have a strategic plan explaining how it intends to improve efficiency and access. VERA, introduced in fiscal year 1997, allocates budget resources to the networks and provides them incentives for achieving cost efficiencies and serving more veterans. VERA is intended to improve the equity of resource allocations to networks. It provides more comparable levels of resources to each network for each high-priority veteran served than the system it replaced, which allocated resources primarily on the basis of facilities’ historical budgets. Networks that increase their patient workload compared with other networks gain resources under VERA; those whose patient workloads decrease compared with other networks lose resources. More efficient networks (that is, those whose patient care costs are below the national cost) have more funds available for local initiatives. Less efficient networks (whose patient care costs are above the national cost), however, must increase efficiency to have such funds available.
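In outline, VERA's workload incentive works like a capitation formula: a fixed national budget is divided in proportion to each network's patient workload, so a network that serves relatively more veterans gains resources at other networks' expense. The sketch below is a simplified illustration only; the workload figures are hypothetical, and VERA's actual formula distinguishes among patient categories and includes other adjustments.

```python
# Simplified sketch of a workload-based (capitation-style) allocation.
# Workload figures are hypothetical; VERA's actual formula is more
# elaborate (it weights patient categories, among other adjustments).

def allocate(total_budget: float, workloads: dict[str, float]) -> dict[str, float]:
    """Divide a fixed budget among networks in proportion to workload."""
    total_workload = sum(workloads.values())
    return {visn: total_budget * w / total_workload
            for visn, w in workloads.items()}

shares = allocate(17e9, {"VISN 1": 110_000, "VISN 4": 150_000, "VISN 16": 140_000})
for visn, dollars in shares.items():
    print(f"{visn}: ${dollars / 1e9:.2f} billion")
```

Because the denominator sums workload across all networks, one network's growth mechanically shrinks every other network's share, which is the gain-or-lose dynamic described above.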
By directly funding the networks, rather than the medical centers as in the past, VERA clearly conveys that each facility is a part of a larger regional network that must facilitate veterans’ equitable access to services. VERA recognizes that networks are responsible for fostering change, eliminating duplicative services, and encouraging cooperation among medical facilities. Network officials have the authority to tailor their VERA allocations to facilities and programs within parameters set by national policy and guidelines and to integrate services among facilities for achieving equitable access to care and other purposes. In the mid-1990s, VA, recognizing that its health care system was inefficient and in need of reform, followed the lead of private-sector health care providers and began reorganizing its system to improve efficiency and access. Like other federal health programs, such as Medicare and Medicaid, that are adopting managed care practices to control program expenditures, VA recognized that it could improve its health care system by adopting selected managed care practices. Consequently, in 1995, VA introduced substantial structural and operational changes in its health care system to improve the quality and efficiency of and access to care by reducing its historical reliance on inpatient care. VA shifted its focus from a bed-based, inpatient system emphasizing specialty care to one emphasizing primary care provided on an outpatient basis. In addition, the Congress enacted legislation in October 1996 eliminating several restrictions on veterans’ eligibility for VA outpatient care, which allowed VA to serve more patients on this basis. These actions accelerated VA’s shift in delivery of health care services from expensive hospital-based inpatient care to less costly outpatient care. VA has begun to increase its use of outpatient surgery and nonhospital care settings, reduce and reassign staff, and integrate services. As a result, VA has achieved efficiencies by reducing personnel costs. From fiscal years 1993 to 1997, VA increased the number of outpatient visits nationwide by about 27 percent. VA estimates that in fiscal year 1997, it will provide nearly 32 million outpatient visits, an increase of 6.2 percent from fiscal year 1996. From fiscal years 1993 to 1997, the number of hospital admissions for inpatient care decreased about 23 percent. (See fig. 2.) VA documentation shows that the seven networks we reviewed increased the number of outpatient visits from fiscal year 1995 to fiscal year 1996 by about 590,000 visits—an increase of 5.8 percent. They decreased inpatient episodes in fiscal year 1996 by over 22,000 from fiscal year 1995—a decrease of 6.2 percent. According to data obtained from the medical centers we visited, the number of outpatient visits increased between fiscal years 1995 and 1996. For example, the Jackson, Mississippi, medical center increased outpatient visits by about 4,000 (about 2 percent); the Pittsburgh medical center increased these visits by about 20,000 (about 7 percent). At the Brockton/West Roxbury, Massachusetts, medical center, the number of outpatient visits increased by about 5 percent from fiscal year 1995 to fiscal year 1996. Medical center officials told us that they increased outpatient visits by shifting resources from inpatient to outpatient care, increasing marketing and conducting outreach efforts, extending clinic hours to evenings and weekends, and reassigning staff. 
Outreach efforts included health fairs conducted at various community locations, flu vaccinations, and cancer screenings. In VISN 4 (Pittsburgh), medical center officials said that, when appropriate, they move patients to outpatient locations. They also use educational programs to inform people of alternatives to expensive inpatient care. As part of its emphasis on outpatient care, VA has promoted preventive measures to keep veterans healthier and out of the hospital, thereby improving efficiency, access, and quality of care. Preventive measures consist of periodic health assessments that provide screening, counseling, risk assessment, and patient education. To encourage preventive care, VA assesses network and medical center directors on their facilities’ progress in implementing nationally recognized health prevention standards for eight diseases with major social consequences. All of the medical centers we visited provided preventive care services and education programs. An example of a preventive measure is VA’s guideline for examining the feet of diabetic patients during an outpatient visit to detect circulatory problems. In addition, the centers conduct classes in smoking and alcohol abuse cessation, stress management and hypertension reduction, and a wide variety of other disease prevention measures. Prevention efforts vary by medical center. The Pittsburgh medical center is piloting a prevention clinic in conjunction with one of its primary care teams. For these clinic visits, patients arrive 1 hour early for appointments with their primary care provider. During this time, a nurse or nurse practitioner discusses prevention issues with the patient and writes orders for prevention activities that will then be reviewed and signed by the patient’s primary care provider during the scheduled appointment. The Brockton/West Roxbury medical center offers smoking cessation clinics, which are held in the evenings to improve veterans’ access to them. Beginning in fiscal year 1997, nurses at the Clarksburg, West Virginia, medical center started making follow-up telephone calls to recently treated patients to answer questions and ensure that patients are following post-treatment instructions, taking their medications, and following dietary instructions. As a result, the medical center expects fewer return visits by these patients. Consistent with the changes in other health care sectors, VA has used advances in diagnostic, therapeutic, surgical, and rehabilitative services to increase its use of outpatient surgery. VA’s goal is for its medical centers to perform at least 65 percent of selected surgical procedures on an outpatient basis. Outpatient surgical units require lower staffing levels because patients are typically discharged in less than 12 hours and do not need around-the-clock nursing care. In addition, because patients spend less time in the hospital, costs for housekeeping, nutrition, linens, medical, and administrative services are lower. Most VA medical centers now have outpatient surgery capability, and the percentage of such surgeries has increased nationwide from 34 to 66 percent between fiscal year 1993 and mid-fiscal year 1997. During this same time period, each of the seven networks we reviewed increased the percentage of outpatient surgeries. (See fig. 3.) Each of the medical centers we reviewed that performed surgery increased the number of outpatient surgeries performed.
Officials at four of the six medical centers we reviewed that had inpatient surgery reported that increasing outpatient surgeries has lowered hospital admissions, reducing costs. Clarksburg medical center officials reported an increase in the percentage of outpatient surgeries between fiscal year 1995 and mid-February of fiscal year 1997 from 62 to 83 percent. Furthermore, the number of both inpatient and outpatient procedures increased from about 2,130 to more than 2,276 between fiscal years 1995 and 1996. The medical centers we visited use a variety of practices to support outpatient surgery. At the Pittsburgh medical center, for example, patients requiring care following surgery, but not needing hospitalization, receive that care in an observation unit. The Clarksburg, Jackson, and Lebanon medical centers also use observation units. In addition, the medical centers in Jackson, Lebanon, and Pittsburgh reduce costs by providing local accommodations or “Hoptel” beds for veterans who live far from the medical center on the night before scheduled outpatient surgery rather than admit them to the hospital. Following are other practices medical centers reported using to support outpatient surgery: Improved scheduling helps support outpatient surgery. One example of this is keeping time slots available in specialty clinics to ensure that patients with multiple conditions can be scheduled for timely evaluations before surgery—patients such as those with heart problems who are seen in a cardiology clinic before having noncardiac surgery. Another example involves scheduling patients with similar diagnoses for simultaneous treatment in a clinic, allowing VA to better manage workload and staff assignments and also reducing the time veterans spend waiting to get an appointment. In addition, some facilities are contacting patients before surgery to reduce the no-show rate. Medical centers are educating patients to improve compliance with preoperative guidelines, precluding the need to reschedule surgery due to patients’ failure to follow such guidelines. Preoperative clinics are being used to perform lab tests, X rays, medical histories, and physical assessments of patients before surgery, precluding the need for overnight hospital stays. Medical centers use nationally developed guidelines to improve patient health outcomes. These guidelines allow VA to standardize treatment by using appropriate and cost-effective medical practices. VA’s efforts to decrease bed days of care (BDOC) as well as the number of operating beds reflect its goal of becoming a more efficient, outpatient care-based system. In fact, VA establishes BDOC performance goals for each network that are comparable with or lower than VA’s projections of the local Medicare region’s data for short-stay hospitals. VA has reduced BDOC, decreasing the amount of inpatient care provided. By the end of June 1997, each of VA’s 22 VISNs had reduced its BDOC to a number below its BDOC at the end of fiscal year 1996; nationally, VA’s BDOC per 1,000 unique users dropped from 2,959 in August 1995 to 1,651 in August 1997—a 44-percent decrease. In fiscal year 1997, BDOC for all of the VISNs we contacted in our study was lower than VA’s projections of Medicare data for the regions with which they were compared. BDOC decreased at each of the medical centers we reviewed. From August 1995 through August 1997, BDOC decreases ranged from a low of 577 (22 percent) at the Jackson medical center to a high of 2,237 (62 percent) at the Pittsburgh medical center. (See table 1.)
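The 44-percent national figure follows directly from the two BDOC levels cited above:

```latex
\[
  \frac{2{,}959 - 1{,}651}{2{,}959} \;=\; \frac{1{,}308}{2{,}959} \;\approx\; 0.442,
  \quad \text{or about a 44-percent decrease.}
\]
```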
Consistent with its goals of becoming an outpatient care-based system and increasing efficiency, VA has also decreased operating beds, which are hospital beds staffed for delivering a specific type of care. VA’s average number of medical, surgical, and psychiatric operating beds decreased nationwide from about 51,000 in fiscal year 1995 to 46,000 in fiscal year 1996—a decrease of 9.8 percent. VA data on the seven networks we contacted show that the average number of operating beds decreased between fiscal years 1995 and 1996, ranging from a 95-bed decrease (6.5 percent) in VISN 20 (Portland) to a 546-bed decrease (14.3 percent) in VISN 16 (Jackson). (See table 2.) Similarly, the medical centers we visited reduced their collective operating beds by 375 or 12.8 percent between fiscal years 1995 and 1996. The Pittsburgh medical center, a tertiary care facility, had the largest decrease in beds—114 beds or 11.9 percent; the Fayetteville, Arkansas, medical center, a primary care facility, had the largest percentage decrease of the medical centers we reviewed—27.7 percent (38 beds). The Northampton, Massachusetts, medical center, however, which has a larger proportion of its workload in inpatient psychiatry, had the smallest decrease—21 beds or 6.4 percent. Furthermore, data provided by the seven medical centers we reviewed showed an additional reduction of 542 operating beds through mid-fiscal year 1997. VA has targeted staff reduction as a major part of its effort to improve efficiency because medical staffing costs exceed $10 billion annually—about 60 percent of VA’s medical care budget. By closing beds and integrating medical center services, VA decreased full-time employee equivalents (FTEEs) by 8.1 percent between the beginning of fiscal years 1996 and 1998—a reduction of 16,114 FTEEs. (See app. II for details of FTEE reductions in the seven networks contacted.) VISN 3 (Bronx) has aggressively addressed staffing reductions. For example, from October 1995 through March 1997, the Brooklyn, New York, medical center closed 65 beds and reduced physician staff by 26 FTEEs, registered nurses by almost 90 FTEEs, nursing assistants and licensed practical nurses by over 40 FTEEs, and administrative and other workers by about 252 FTEEs. According to network officials, during this time period, networkwide staffing was reduced by about 2,124 FTEEs. In VISN 4 (Pittsburgh), the Lebanon medical center reduced staff by approximately 117 FTEEs since fiscal year 1995 with its shift to outpatient care. The Brockton/West Roxbury medical center in VISN 1 (Boston) reduced FTEEs by 200 in fiscal year 1996 and 137 in fiscal year 1997. Service integrations are part of VA’s nationwide strategy to restructure its health care delivery system to improve efficiency as well as access to care and quality of care. Integrations involve the combining of administrative units of multiple facilities as well as the elimination of unnecessarily duplicative services within and among facilities. Integrations produce efficiencies through staff reductions or economies of scale that enable facilities to serve more patients. Integrations can significantly benefit veterans mainly because VA can reinvest the money it saves to enhance veterans’ access to care and improve service and quality. VISNs and medical centers we visited have completed several integrations and have others in progress.
In fiscal year 1997, for example, the two VA hospitals in Pittsburgh—the University Drive hospital (a tertiary care referral center) and the Highland Drive hospital (a psychiatric facility)— integrated to form the Pittsburgh Health Care System under a single medical director. This integration also eliminated duplicate service units, resulting in the closing of one acute and two intermediate care units at Highland Drive. As part of this integration, the medical center identified excess staff positions and reduced the number of FTEEs by 232 during fiscal year 1997. In another case, VISN 1 (Boston) is proposing a large-scale integration of two tertiary care centers located within 7 miles of each other in the Boston metropolitan area. The resulting integration, if approved, could change the mission of the Brockton/West Roxbury medical center to one focusing on outpatient care, while the other center, the Boston medical center, could retain its tertiary care status. Not all networks are planning facility integrations, however. VISN 16 (Jackson) officials told us that they did not plan any facility integrations because of the distances between hospitals in this geographically large network. In addition, VA has integrated medical and support services within hospitals. For example, VISN 1 (Boston) has integrated the laboratory and laundry services of eight medical centers. The Brockton/West Roxbury medical center now processes all mail-out laboratory tests for the network. Furthermore, the Northampton medical center integrated its medical service and ambulatory care into primary care and integrated engineering and environmental management services into one facilities management service unit. In VISN 4 (Pittsburgh), the Lebanon medical center merged five support and resources management units into two new departments in fiscal year 1997. In fiscal years 1996 and 1997, the Clarksburg medical center integrated several services, including surgical service with supply processing and distribution, which distributes surgical supplies and sterilizes equipment. In VISN 16 (Jackson), the Jackson medical center integrated environmental and engineering services into a new facility management service unit and created a diagnostic service by combining radiology, pathology/laboratory, and nuclear medicine. Efficiencies from increased outpatient care, staff reductions and reassignments, and integrations at the medical centers we reviewed have resulted in savings. In some cases, efficiencies did not save money because hospitals reinvested funds to enhance existing services or to offer new services. Savings from shifting to outpatient care varied at the medical centers we reviewed. For example, Lebanon medical center officials estimated that the shift to outpatient care saved their facility $346 for each day of inpatient care avoided in fiscal year 1997, while officials at the Jackson medical center estimated that they saved $665 for every day of inpatient care avoided. At the Pittsburgh medical center, officials estimated that savings from an increase in outpatient surgeries for fiscal year 1997 totaled more than $7.5 million through May 31, 1997. For example, these officials estimated that using observation beds saved about $930,000 from October 1, 1996, through May 31, 1997. 
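The per-day figures above translate into cost avoidance by simple multiplication. A minimal sketch; the day counts are hypothetical, since the report does not state how many inpatient days each facility avoided:

```python
# Cost avoidance from care shifted out of inpatient settings, using the
# per-day savings estimates cited above. Day counts are hypothetical.

def cost_avoided(inpatient_days_avoided: int, savings_per_day: int) -> int:
    return inpatient_days_avoided * savings_per_day

for facility, per_day in [("Lebanon", 346), ("Jackson", 665)]:
    print(f"{facility}: ${cost_avoided(1_000, per_day):,} per 1,000 days avoided")
```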
The Brockton/West Roxbury medical center avoided $630,454 in inpatient costs in fiscal year 1997 by increasing the number of outpatient surgeries, according to officials’ estimates. Facilities used these savings to fund increases in other services, notably primary care. Nationally, the networks’ efforts to reduce staff have reduced VA’s personnel expenditures. On the basis of VA staffing data, we estimate that the reduction of 16,114 FTEEs (8.1 percent) in staff—as measured from the beginning of fiscal year 1996 to the beginning of fiscal year 1998—will save VA annual costs of approximately $897 million (see the arithmetic check below). The three networks and seven facilities we visited reduced FTEEs during this period. At the facilities we visited, staff reductions ranged from 396 FTEEs (14 percent) at the Pittsburgh medical center to 13 FTEEs (less than 1 percent) at the Jackson medical center. Integrations within and among medical centers have helped generate savings and increase operational efficiency. VA estimates that integrating facilities had generated over $83 million in savings by July 1997. Medical centers have used these savings to provide new CBOCs and to make new services available or to improve accessibility of existing services. In Pittsburgh, the integration of the University Drive and the Highland Drive hospitals reduced FTEEs by 232 during fiscal year 1997. Hospital officials estimated savings from reduced staffing levels and other actions associated with the integration to be approximately $4.2 million in fiscal year 1997. VISN 1 (Boston) officials estimate that a proposed integration of tertiary care facilities will save $40 million a year for 5 years. Beginning in fiscal year 1997, this network also expects to save $640,000 annually from the integration of laundry services at three of its medical centers and over $1.8 million by having the Brockton/West Roxbury medical center perform laboratory services for all VA hospitals in the network. The Northampton medical center integrated medical and ambulatory care services into primary care and combined engineering with environmental management services, saving $138,293, according to officials there. Lebanon medical center officials project annual savings of more than $489,000 from integrating administrative services at their facility. Jackson medical center officials estimate that FTEE reductions attributable to integrations will save about $400,000 per year. In some cases, integrations produced no net savings because hospitals reinvested the potential savings to enhance existing services or to offer new services. For example, officials at the Jackson medical center said that although they realized no net savings from consolidating ward administration into nursing services, the resulting efficiencies enabled them to expand nursing coverage for the operating room and outpatient areas. Veterans’ access to health services is improving as VA hospitals reinvest the savings from efficiency initiatives and restructure their service delivery. VA hospitals have increased the number of primary care teams, added or improved space to accommodate additional primary care patients, shortened appointment waiting times, increased the number of locations providing community-based care, and redefined the role of VA inpatient nursing home care. As a result, the networks we contacted have been increasing the number of high-priority veterans they serve. VA has improved veterans’ access to health care through the use of primary care.
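Arithmetic check of the staffing-savings estimate referred to above: the sketch below is our illustration, not VA’s estimating methodology, and it assumes that savings scale linearly with the FTEE reduction. It uses only figures reported in this section: the 16,114-FTEE (8.1 percent) reduction, the approximately $897 million savings estimate, and the more than $10 billion annual medical staffing cost.

$$
\frac{\$897\ \text{million}}{16{,}114\ \text{FTEEs}} \approx \$55{,}700\ \text{per FTEE per year};
\qquad
\frac{16{,}114\ \text{FTEEs}}{0.081} \approx 199{,}000\ \text{FTEEs};
\qquad
\frac{\$10\ \text{billion}}{199{,}000\ \text{FTEEs}} \approx \$50{,}300\ \text{per FTEE per year}.
$$

The savings per FTEE implied by the $897 million estimate (about $55,700) is within roughly 10 percent of the average staffing cost per FTEE implied by the $10 billion figure (about $50,300), so the reported figures are at least internally consistent.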
Medical centers assign patients to primary care teams, which are responsible for managing patient care. The composition of a primary care team varies depending upon a medical center’s mission and patient population, but these teams generally include physicians, one or more health care professionals (for example, nurse practitioners, physician assistants, registered and licensed practical nurses, and medical residents), and clerks for administrative support. Some teams may include a psychiatrist, social worker, dietician, or physical therapist. For example, the Northampton medical center, which has more psychiatric than acute care beds, has established a primary care team to treat psychiatric patients. Members of this team include a psychiatrist, psychiatric social worker, psychologist, and clinical pharmacist as well as a clinical nurse specialist or physician assistant, dietician, and administrative staff. As the first point of contact, primary care teams provide accessible, routine care for veterans, establish an ongoing relationship with them, and coordinate treatment for patients requiring specialized care. They generally provide a comprehensive range of medical services, except for emergency or specialty care. As managers of patient care, teams help ensure that appropriate services are provided and duplicate services are avoided. For example, by calling veterans on the telephone, primary care teams can answer veterans’ questions about their health and ask whether they are following their post-discharge instructions. This practice may eliminate the need for veterans to visit medical centers. In addition, primary care team staff encourage veterans to schedule appointments rather than just walk in to medical centers for treatment, as many veterans have done in the past. Appointments enable VA to improve scheduling of its workload and resources, reducing the time patients spend waiting for an appointment as well as the time spent waiting to be seen upon arrival. For example, officials at the Causeway Street outpatient clinic in Boston and the Jackson medical center told us that scheduling nonurgent patients for appointments reduced the number of walk-ins and allowed for more efficient staff assignment. This also helps reduce the number of patients receiving care inappropriately at specialty clinics, improving access for those who need such care. Each of the medical centers we visited had established primary care teams, and most of them had increased the number of these teams between fiscal years 1995 and 1997. For example, the Brockton/West Roxbury and the Lebanon medical centers had no primary care teams in fiscal year 1995; by fiscal year 1997, they had seven and four, respectively. The medical centers we reviewed showed sizable growth in the numbers of veterans assigned to primary care teams. (See table 3.) In fiscal year 1997, VA had over 1,000 primary care teams in operation. Medical centers we visited have taken many actions to accommodate increased numbers of primary care patients. For example, they have expanded and converted hospital space to create additional primary care clinics, added more examination and treatment rooms and support space, and used off-site clinics to deliver primary care. Previously, physicians in the medical centers we visited had the use of only their offices or one examination room to see patients.
Multiple examination and treatment rooms for each physician or team enable primary care doctors to see more patients and to use their time more efficiently: a physician can treat one patient while other patients prepare for, or are attended by, other team members, reducing patients’ waiting times. For example, at the Lebanon medical center, we observed renovations under way to increase primary care space from 978 to 4,786 square feet in fiscal year 1997. Furthermore, by converting additional hospital space, Lebanon will add 2,400 square feet in fall 1998. In fiscal year 1998, the Fayetteville medical center is expanding its primary care space from 3,400 to 11,233 square feet, including 16 examination rooms, 2 treatment rooms, and support space. By renovating existing space for use by primary care teams, the Jackson medical center increased space from 2,021 to 13,835 square feet from fiscal years 1995 to 1996. Renovation under way at the time of our visit will more than double the number of examination rooms for each primary care physician. This medical center is also more than doubling the number of physicians assigned to primary care. Finally, all medical centers we visited also provide full- or part-time primary care clinics in off-site locations in neighboring communities, improving access to care for veterans in those areas. All the medical centers we reviewed reported that increased space devoted to primary care allowed them to see more patients: The Fayetteville medical center anticipates that the additional space will allow it to treat more than 55 new primary care patients each week. This increase in new patients will be possible because the additional space will allow each physician to use two examination rooms instead of the one-half of a room each had before the renovation. Additional space devoted to primary care at the Clarksburg medical center will enable each primary care team to increase the number of its assigned patients from 2,116 per team in 1995 to almost 4,500 in 1997. Additional space allowed primary care enrollment at the Lebanon medical center to increase from 1,984 veterans in fiscal year 1996 to 4,308 in fiscal year 1997. The Jackson medical center reported that newly converted hospital space for primary care completed in December 1997 will allow physicians to see 20 percent more patients than they now see. Each primary care provider will have use of two to three rooms; each provider had only one room before this expansion. VA cited decreased waiting times for appointments as a part of its objective to increase veterans’ access to services in its Prescription for Change—its blueprint for reforming health care. In fiscal year 1996, VA headquarters established a 30-day standard for veterans to obtain appointments at specialty and primary care clinics. Documents we reviewed showed that all 22 VISNs succeeded in achieving a median waiting time of less than 30 days. Some of the medical centers we visited have shortened appointment waiting times for specialty care as access to primary care has improved. At the Lebanon medical center, as the number of VA primary care patients increased by 2,324 in fiscal year 1997, waiting times for appointments at some specialty clinics decreased. For example, the appointment waiting time at this center’s urology clinic declined from 100 days to 40 days.
Fayetteville medical center officials report that before their medical center introduced primary care, the average appointment waiting time for specialty care was more than 90 days; it is now less than 30 days. At the Pittsburgh medical center, appointment waiting times for new patients decreased between fiscal years 1995 and 1997 in over half of that center’s specialty clinics. Some medical centers have also shortened waiting times for primary care appointments. From fiscal years 1996 to 1997, the Jackson and Pittsburgh medical centers shortened appointment waiting times for primary care from 32 to 13 days and from 12 to 5 days, respectively. Data provided by the Lebanon medical center showed that the number of veterans receiving an appointment within 7 days more than doubled in this time period. At the Brockton/West Roxbury and Fayetteville medical centers, however, appointment waiting times held constant at approximately 7 days even as the number of veterans enrolled in primary care increased. In addition, medical centers have shortened appointment times by establishing more flexible scheduling of outpatient services. For example, the Brockton/West Roxbury medical center now schedules its smoking cessation clinics in the evenings and other medical clinics on weekends to improve access. Officials there cite improved scheduling of clinics as one factor in improving access and increasing the number of patients assigned to primary care. VA is also improving veterans’ access to health care by increasing the number of CBOCs that it funds or operates. CBOCs are geographically separate from their “parent” medical center and provide outpatient primary care. Their locations facilitate access to health services for veterans who live some distance from a VA facility—about one-half of all veterans live 25 miles or more from a VA hospital—especially those living in medically underserved areas. CBOCs exemplify VA’s effort to convert from a hospital-based system to one focusing on outpatient services. When appropriate, providers at CBOCs refer patients to hospitals for specialty care. Some of VA’s goals for CBOCs are to shorten hospital lengths of stay by doing preadmission work-up or providing postdischarge follow-up care closer to the patient’s home; reduce veterans’ need to travel long distances to receive care; redirect patients currently served at medical center clinics, shortening waiting times or relieving congestion at these sites; shorten waiting times for follow-up care, for example, postsurgical care or care after a hospitalization; and improve access to care for historically underserved veteran populations. The Congress must review and approve medical centers’ proposals to open CBOCs after preliminary review by VISNs and VA headquarters. As of November 1997, 153 CBOCs were approved or operating nationwide. VA estimates that these clinics, when fully operational, will serve more than 280,000 veterans each year. Fifty-eight of the recently approved CBOCs were in the seven networks we reviewed. As of November 1997, these networks indicated their intent to establish at least 150 additional CBOCs through fiscal year 2002. Some medical centers that we contacted changed their nursing home services to improve access and reduce costs. In the past, some medical centers in the Northeast provided extensive nursing home benefits, which could involve stays lasting many years.
Responding to VERA’s incentives, officials at the medical centers in Pittsburgh, Lebanon, and Newington/West Haven (the Connecticut Health Care System) told us that they have made nursing home services available to more veterans at less cost to VA by establishing alternatives to long-term, inpatient nursing home care. The Pittsburgh and Lebanon medical centers now use their inpatient nursing home services to evaluate, medically stabilize, and then, if appropriate, prepare patients for placement in the least restrictive community environment, including their own homes. According to Lebanon officials, for example, this “transitional care” approach has reduced the average length of stay in the nursing home unit, enabling the center to increase the number of patients served annually from 264 in fiscal year 1995 to 448 in fiscal year 1997 without increasing the number of staff in the unit. At the Pittsburgh medical center, the number of nursing home patients served increased from 399 in fiscal year 1996 to 571 in fiscal year 1997, according to facility officials. Beginning in 1996, the Connecticut Health Care System replaced its nursing home program with a sub-acute care program and additional patient support services. The objective of sub-acute care is the same as that of the nursing home programs in the Pittsburgh and Lebanon medical centers. Following evaluation and medical stabilization in the sub-acute care unit, patients are discharged to their home or a community facility. To enable veterans to return home, the Connecticut Health Care System established a day hospital program to provide medical services, such as physical therapy and intravenous medications, to patients who then return home at night; upgraded support services in patients’ homes, such as providing visiting nurses; and improved transportation services. These changes reduced the Connecticut Health Care System’s nursing home beds from 150 in fiscal year 1995 to 40 by the end of fiscal year 1997. Despite the decrease in beds, the number of patients served in fiscal year 1997 was more than double the number served in fiscal year 1991. Network efforts to improve access to VA medical services have led to VA’s serving an increased number of high-priority patients (Category A). Category A patients are those veterans who qualify to receive medical care on the basis of a service-connected disability, low income, or special health care needs. In each of the networks we contacted, the number of unique (unduplicated) Category A veterans served rose between fiscal years 1996 and 1997. (See table 4.) VA headquarters’ monitoring of changes to the health care system is important because network and medical center directors are responding to incentives to change VA’s health care delivery. These changes, which are intended to improve efficiency and access, could lead to outcomes that compromise the care some veterans receive. For example, officials in several of the VISNs we contacted have reinvested savings from changes in inpatient care and specialty services—such as nursing home care—to improve veterans’ access to primary care. Previously, however, we reported that VA headquarters lacked timely and detailed indicators of certain changes in its health care delivery—particularly changes affecting veterans receiving special care services such as nursing home care or treatment for spinal cord injuries. Without such indicators, it is difficult for VA to ensure that service delivery changes do not compromise the appropriateness of the health care veterans receive.
VA, to its credit, has developed some performance indicators for VISN directors, such as patient satisfaction, efficiency indicators (for example, bed-days of care, or BDOC), and the number of veterans served. VA officials told us that VA holds VISN directors accountable for meeting goals related to these indicators. VA also created indicators measuring the number of veterans treated for certain disabling conditions and the funds spent for their care. Although the indicators will provide headquarters officials with some important process information about patient care, as we noted in our previous report, these data—and VA’s other data sources—generally provide little assessment of the outcomes of program changes on veterans. As noted, monitoring the impact of such changes is critical because networks are responding to VA’s incentives to reduce the cost of care. Special care services, which include the most expensive services VA delivers (for example, nursing home care or care for veterans with spinal cord injuries), are especially important to monitor because the population receiving these services is particularly vulnerable. Lack of adequate performance information will hinder VA headquarters’ ability to take corrective action if networks’ program changes are inconsistent with VA’s organizational goals. VA officials told us they have begun to develop some outcome measures. VA is making unprecedented changes to its health care system. Introducing practices inspired by managed care, VA is shifting the emphasis of its medical care delivery system from extensive inpatient services to outpatient care. Responding to management and budgetary incentives, VISN and medical center directors are implementing changes intended to improve the efficiency of their operations while improving veterans’ access to their services. The medical centers we contacted are operating more efficiently in several key areas: they are performing more outpatient treatment and surgery, shortening veterans’ lengths of stay in the hospital, and integrating hospital services to streamline operations. As VA shifts from providing mainly inpatient care to providing mainly outpatient care, it needs fewer hospital beds and staff; staff reductions should lead to significant cost reductions. In addition, to improve access, the facilities we contacted are increasing the number of patients assigned to primary care and decreasing the waiting times for appointments. Other data we reviewed show similar efficiency and access improvements throughout VA’s health care system. The transformation of the VA health care system, however, is a work in progress. Networks and medical centers are rapidly introducing new approaches to delivering care and planning the introduction of other initiatives. Adequate monitoring of the outcomes of these changes is essential to assure VA’s stakeholders that veterans are receiving health care that is timely and appropriate. Officials from the Veterans Health Administration reviewed a draft of this report. They generally agreed with its contents and provided technical comments, which we incorporated as appropriate. As arranged with your staff, we are sending copies of this report to the Acting Secretary of Veterans Affairs, interested congressional committees, and other interested parties. We will make copies of this report available to others upon request. If you have any questions about this report, please call me at (202) 512-7101 or Bruce D. Layton, Assistant Director, at (202) 512-6837. Other major contributors to this report are Frederick K. Caison, Linda C. Diggs,
Darrell J. Rasmussen, Jean N. Harker, Brian W. Eddington, and Liz Williams. We focused our work on VA’s efforts to improve the efficiency of its health care system and improve veterans’ access to health services. To assess VA’s progress in increasing the efficiency of its health care system, we examined VA records documenting the effects of efficiency initiatives, including increased outpatient visits, decreased bed-days of care and operating beds, reduction and reassignment of staff, and integration of services. We focused on these measures because VA lacks outcome measures that show the impact of these changes on veterans’ health status. To assess VA’s progress in improving veterans’ access to services, we examined the steps VA is taking to accomplish this, including emphasizing primary care and increasing the number of locations that provide community-based care. To obtain data on efficiency and access issues, we interviewed network and medical center directors, medical center staff, VA headquarters officials, and representatives from veterans service organizations, such as the American Legion, Disabled American Veterans, Paralyzed Veterans of America, and Veterans of Foreign Wars. We visited three Veterans Integrated Service Network (VISN) offices—in Boston, Jackson, and Pittsburgh—to obtain the views of network directors, chief medical officers, and chief financial officers and supporting documentation on network-led initiatives to manage VISNs’ resources and change service delivery. We selected these VISNs for site visits because of the differing impact of the Veterans Equitable Resource Allocation (VERA) system on their fiscal year 1997 budgets and also because of the differences in the geographic dispersion of these networks’ facilities. In addition, we conducted telephone interviews and collected efficiency and access information from two other networks with budget decreases in fiscal year 1997—VISN 2 (Albany) and VISN 3 (Bronx) in New York—and two networks with budget increases—VISN 18 (Phoenix) in Arizona and VISN 20 (Portland) in Oregon. We also visited seven medical centers—in Brockton/West Roxbury and Northampton, Massachusetts; Jackson, Mississippi; Fayetteville, Arkansas; Lebanon and Pittsburgh, Pennsylvania; and Clarksburg, West Virginia. We toured these facilities to identify physical changes made to accommodate increased use of primary care. We interviewed medical facility directors, administrative officials, chiefs of the various services, physicians, nurses, and union officials for information on VA’s reorganization and VERA implementation and collected facility-specific documents. In addition, we met with the director of the Connecticut Health Care System in VISN 1 (Boston) to discuss that medical center’s initiatives to improve access and efficiency. We met with officials of the Causeway Street outpatient clinic, which provides 180,000 primary and specialty care visits each year to veterans in downtown Boston, and toured the facility. We also interviewed officials in the Veterans Health Administration’s Office of the Deputy Under Secretary for Health; Office of the Chief Network Officer; Office of Policy, Planning and Performance; Office of the Chief Financial Officer; and strategic health care groups. We obtained and reviewed VA headquarters documents on policies, monitoring procedures, and performance data to address issues about the monitoring of changes implemented by networks and medical centers.
We drew on previous work to make observations about VA’s monitoring of the health care that networks are providing. Because many of VA’s reform initiatives had been introduced recently or were still in the planning phase during our review, and because facilities reported data inconsistently, we relied on VA documentation and officials’ estimates of savings. We did not verify the accuracy of these estimates. We performed our review in accordance with generally accepted government auditing standards between November 1996 and January 1998. We selected seven networks on the basis of projected changes in resource allocations if the Veterans Equitable Resource Allocation (VERA) system had been fully implemented in fiscal year 1997. We selected four networks—VISN 1 (Boston), VISN 2 (Albany), VISN 3 (Bronx), and VISN 4 (Pittsburgh)—that would have lost resources had VERA been fully implemented. We selected three networks—VISN 16 (Jackson), VISN 18 (Phoenix), and VISN 20 (Portland)—that would have gained resources had VERA been fully implemented. The cities named on the map of each VISN show the locations of VA medical centers in that VISN. The Pittsburgh Health Care System includes two hospitals in Pittsburgh. We compiled data for the profiles from several sources, including VA annual reports, network strategic plans, and documents provided by headquarters and network officials. Data are from fiscal year 1996 unless otherwise noted. VA’s figures for full-time employee equivalents (FTEE) are based on regular hours worked by VA employees during the first pay period of each fiscal year. The annual counts for Category A veterans (those with service-connected disabilities, low incomes, or special health care needs) reflect the number of unique Category A veterans seen at least once during a fiscal year and the two previous fiscal years. Other veterans generally have incomes and net worth above a certain threshold and must pay part of the cost of the care they receive. Nonveterans include veterans’ dependents, beneficiaries of the Civilian Health and Medical Program of the Uniformed Services, and VA employees. Data on inpatient and outpatient treatments count each visit of a patient separately; therefore, these data show the number of times patients received care at a VISN medical center. Patients may have received care at more than one medical center. VA Health Care: Resource Allocation Has Improved, but Better Oversight Is Needed (GAO/HEHS-97-178, Sept. 17, 1997). VA Health Care: Opportunities to Enhance Montgomery and Tuskegee Service Integration (GAO/T-HEHS-97-191, July 28, 1997). VA Health Care: Lessons Learned From Medical Facility Integrations (GAO/T-HEHS-97-184, July 24, 1997). VA Health Care: Assessment of VA’s Fiscal Year 1998 Budget Proposal (GAO/T-HEHS-97-121, May 1, 1997). Department of Veterans Affairs: Programmatic and Management Challenges Facing the Department (GAO/T-HEHS-97-97, Mar. 18, 1997). VA Health Care: Improving Veterans’ Access Poses Financial and Mission-Related Challenges (GAO/HEHS-97-7, Oct. 25, 1996). VHA’s Management Improvement Initiative (GAO/HEHS-96-191R, Aug. 30, 1996). Veterans’ Health Care: Challenges for the Future (GAO/T-HEHS-96-172, June 27, 1996). VA Health Care: Efforts to Improve Veterans’ Access to Primary Care Services (GAO/T-HEHS-96-134, Apr. 24, 1996). Veterans’ Health Care: Facilities’ Resource Allocations Could Be More Equitable (GAO/HEHS-96-48, Feb. 7, 1996). VA Health Care: Exploring Options to Improve Veterans Access to VA Facilities (GAO/HEHS-96-52, Feb. 6, 1996).
VA Health Care: Challenges and Options for the Future (GAO/T-HEHS-95-147, May 9, 1995).
GAO reviewed the Department of Veterans Affairs’ (VA) efforts to improve and monitor veterans’ access to health care. GAO noted that: (1) VA has taken important steps to improve the efficiency of its health care system and veterans’ access to it; (2) VA medical centers have increased efficiency by expanding the use of outpatient care; (3) for example, VA has increased the percentage of surgical procedures performed on an outpatient basis from 34 percent in fiscal year 1993 to 66 percent by mid-fiscal year 1997; (4) this has allowed it to reduce bed-days of care, operating beds, and staff; (5) at the Pittsburgh, Pennsylvania, medical center, the increase in outpatient surgeries saved more than $7.5 million from October 1996 through May 31, 1997; (6) preventive care, including health assessments and patient education, has also increased, which VA officials told GAO can lead to efficiencies because patients can be kept healthier, avoiding expensive hospital stays; (7) furthermore, VA is increasing efficiency by integrating services both within and among medical centers; (8) VA is improving access to health care in several ways; (9) for example, VA has begun to emphasize primary care, in which generalist physicians see patients initially and coordinate any specialty care that patients may need; (10) by increasing the number of primary care teams, VA has improved access to routine care and expedited referrals to specialty care; (11) VA is also improving access to health care by providing outpatient care at additional community-based outpatient clinics, expanding evening and weekend hours for clinics, and exploring other innovations; (12) these efforts have shortened the time veterans spend waiting for an appointment as well as the time spent waiting to be seen upon arrival for an appointment; (13) all of the medical centers GAO visited have established primary care teams and increased the number of veterans assigned to primary care; (14) as networks and medical centers continue to respond to incentives to improve the efficiency of their operations, headquarters’ monitoring of the impact of such responses is necessary to help ensure that they do not compromise the appropriateness of the health care veterans receive; and (15) GAO found that although VA has implemented health care monitoring mechanisms to assess some of the changes networks and medical centers are introducing, these mechanisms provide little assessment of the outcomes of such changes for veterans.
In light of the significant changes in the international security environment resulting from the dissolution of the Soviet Union and declining resources available for defense needs, the Department of Defense (DOD) has been reexamining U.S. defense strategy, force levels, and budgetary requirements for the post-Cold War era. In 1990, the President presented a defense plan reflecting a shift in U.S. strategy from preparing for a global war in Europe against the Soviet Union to preparing for major regional conflicts against uncertain adversaries. This plan proposed a significantly reduced force structure, or base force, but retained sufficient forces to counter a possible reemergence of the Soviet threat. Following the change in administrations in 1993, the new Secretary of Defense reassessed U.S. defense requirements in an effort referred to as DOD’s bottom-up review. This review, completed in October 1993, examined the nation’s defense strategy, force structure, modernization, infrastructure, foundations, and resources needed for the post-Cold War era. As a result of the bottom-up review, DOD continued to focus U.S. strategy on regional threats; however, it de-emphasized the possibility of a reemerging Soviet threat and reduced U.S. forces to levels smaller than the base force. According to DOD officials, the Secretary of Defense called for the bottom-up review to be completed in time to be considered in developing DOD’s fiscal year 1995 budget and Future Years Defense Program. Therefore, the review was completed in about 7 months. In the Report on the Bottom-Up Review, DOD stated that much more work had to be done. According to DOD’s bottom-up review, the United States must pursue an overall defense strategy characterized by continued political, economic, and military engagement internationally. This strategy of engagement advocates (1) preventing the emergence of threats to U.S. interests by promoting democracy, economic growth, free markets, human dignity, and the peaceful resolution of conflict and (2) pursuing international partnerships for freedom, prosperity, and peace. The bottom-up review outlined the new dangers facing U.S. interests in the post-Cold War era and a specific strategy for dealing with each one. These dangers included (1) the proliferation of nuclear weapons and other weapons of mass destruction; (2) regional dangers, posed primarily by the threat of large-scale aggression by major regional powers with opposing interests; (3) dangers to democracy and reform in the former Soviet Union, Eastern Europe, and elsewhere; and (4) economic dangers to national security. In the Report on the Bottom-Up Review, the Secretary of Defense cited regional aggression as chief among the new dangers. To deal with regional aggression and other regional dangers, DOD’s strategy is to (1) defeat aggressors in major regional conflicts; (2) maintain a presence overseas—the need for U.S. forces to conduct normal peacetime operations in critical regions—to deter conflicts and provide regional stability; and (3) conduct smaller-scale intervention operations, such as peacekeeping, humanitarian assistance, and disaster relief. To deal with the threat of regional aggression, DOD judged that it is prudent for the United States to maintain sufficient military power to be able to fight and win two major regional conflicts that occur nearly simultaneously. The bottom-up review determined the specific forces, capabilities, and improvements in capabilities for executing the two-conflict strategy. 
In reaching its conclusions, DOD examined various strategy and force options for major regional conflicts, as shown in table 1.1. For assessment purposes, DOD focused on two specific scenarios involving regional aggression. In evaluating the strategy and force options, DOD also considered requirements for conducting (1) peace enforcement or intervention operations in smaller-scale conflicts or crises, (2) overseas presence, and (3) deterrence of attacks with weapons of mass destruction. DOD, for various reasons, chose the strategy of fighting and winning two nearly simultaneous major regional conflicts and the related forces with enhancements—the third option shown in table 1.1. For example, DOD believed that this option would possibly deter a second regional aggressor from attacking its neighbors while the United States was responding to another regional conflict. In addition, fielding forces sufficient to win two wars nearly simultaneously would provide a hedge against the possibility that a future adversary might one day confront the United States with a larger-than-expected threat. Finally, DOD believed that this strategy option and the related forces and enhancements were affordable within expected budget constraints. According to its Report on the Bottom-Up Review, DOD also estimated the cost of the bottom-up review program and matched it against the President’s objective for reducing the defense budget. DOD estimated that the program would achieve $91 billion in total savings and that additional savings would be identified during DOD’s normal program and budget review. DOD estimated that the projected force would be available by fiscal year 1999. DOD used the results of the bottom-up review to develop its fiscal year 1995 budget and Future Years Defense Program. In concluding that the forces selected for the third option could implement its strategy, DOD made several key assumptions about the forces’ deployability and capabilities, including that forces involved in other operations, such as peacekeeping, would be redeployed to a regional conflict; certain specialized units or unique assets would be shifted from one conflict to the other; sufficient strategic lift assets and support forces would be available; Army National Guard enhanced combat brigades could be deployed within 90 days of being called to active duty to supplement active combat units; and a series of enhancements, such as improvements to strategic mobility and U.S. firepower, were critical to implementing the two-conflict strategy and would be available by about 2000. The specific enhancements included improving (1) strategic mobility, through more prepositioning and enhancements to airlift and sealift; (2) the strike capabilities of aircraft carriers; (3) the lethality of Army firepower; and (4) the ability of long-range bombers to deliver conventional precision-guided munitions. Completing these enhancements, according to DOD, would both reduce overall ground force requirements and increase the responsiveness and effectiveness of its power projection forces. In most cases, the projected enhancements involved ongoing programs to upgrade existing capabilities. For example, the bottom-up review cited the need for additional airlift and sealift assets to improve strategic mobility. DOD had previously identified this need in its 1991 mobility requirements study and had already programmed funds to procure some of the specific assets.
According to the Secretary of Defense, the bottom-up review was a comprehensive reassessment that set the framework for defense planning for the next 5 years and beyond. In September 1993 and May 1994, the Secretary issued his defense planning guidance for fiscal years 1995 to 1999 and fiscal years 1996 to 2001, respectively. This guidance formally directed the military services and defense agencies to implement the bottom-up review’s conclusions. The May 1994 guidance included an illustrative planning scenario reflecting DOD’s concept of how the United States would respond to two major regional conflicts that occur nearly simultaneously. Among other things, the scenario detailed the amount of time between the outbreak of hostilities in both conflicts, the number and types of forces deployed to each conflict, the timing of deployments, and the projected time for completing various combat phases. DOD directed military planners to use this scenario, along with other guidelines in the September and May guidance, in developing program and budget requirements for DOD’s selected strategy and forces. In considering DOD’s portion of the President’s budget for fiscal year 1995, Members of the Congress raised questions about the bottom-up review, including the accuracy of its assumptions and the affordability of its projected force. As a result, in the Fiscal Year 1995 National Defense Authorization Act, the Congress required the Secretary of Defense to review the assumptions and conclusions of the President’s budget, the bottom-up review, and the Future Years Defense Program. The Secretary is required to submit a report on the results of this review to the President and the Congress in May 1995. Among other things, this report must describe the force structure required to execute DOD’s two-conflict strategy in light of other ongoing or potential operations and may also address possible adjustments to the strategy. We examined DOD’s bottom-up review assumptions about key aspects of the two-conflict strategy to determine whether they reasonably supported DOD’s conclusion that the projected force, with enhancements, can execute the strategy. In conducting our assessment, we did not examine DOD’s rationale for selecting the two-conflict strategy, the capabilities of potential regional aggressors, or the extent to which allied support could reduce the need for U.S. forces. To determine DOD’s assumptions and conclusions about executing the two-conflict strategy, we interviewed knowledgeable officials involved in the bottom-up review at the offices of the Assistant Secretary of Defense for Strategy, Resources, and Requirements; the Assistant Secretary of Defense for Reserve Affairs; the Joint Chiefs of Staff; and the Army, Air Force, Navy, and Marine Corps headquarters. We also reviewed relevant documentation, including the final report on the bottom-up review and the Secretary of Defense’s planning guidance, and received briefings on regional dangers from DOD officials. DOD denied us access to specific information on the inputs and results of its analysis of force options. However, we obtained considerable information on DOD’s analysis through interviewing knowledgeable officials and reviewing available documentation.
To analyze whether DOD’s assumptions reasonably supported DOD’s conclusion that the projected force, with enhancements, can execute the two-conflict strategy, we interviewed officials at the headquarters of the four military services, the U.S. Army Forces Command, the U.S. Transportation Command, the Air Combat Command, the Army National Guard Bureau, and the U.S. Army Reserve Command. We also reviewed relevant documentation on (1) the use of U.S. forces engaged in peacekeeping operations, (2) Army support capability, (3) training of Army National Guard enhanced combat brigades, and (4) DOD plans for improving strategic mobility and the lethality of U.S. firepower. We interviewed officials at two war-fighting commands to obtain their views on DOD’s assumptions in the bottom-up review, the feasibility of conducting the two-conflict strategy, and the defense planning guidance implementing the bottom-up review’s findings. We conducted our review from October 1993 to October 1994 in accordance with generally accepted government auditing standards. Under the bottom-up review’s two-conflict strategy, military planners, for the first time, are required to plan to deploy forces to respond to two nearly simultaneous regional conflicts. However, in doing the review, DOD did not fully analyze its assumptions regarding key aspects of the strategy, such as the ability of forces to redeploy from other operations to regional conflicts or between conflicts, availability of strategic lift and support forces, and deployability of Army National Guard combat brigades. Furthermore, we question some of DOD’s assumptions. For example, certain support forces needed in the early stages of a regional conflict could not immediately redeploy from peace operations because they would be needed to assist in redeploying other forces. The Army currently lacks sufficient numbers of certain support forces for a single conflict, and National Guard combat brigades are experiencing difficulty meeting peacetime training requirements that are critical to ensuring timely deployment in wartime. Finally, some enhancements may not be available as planned. The bottom-up review’s strategy of maintaining the capability to fight and win nearly simultaneous conflicts changed the basis for U.S. military planning. Specifically, the base force was required to be capable of conducting a decisive offense in response to one conflict and still be capable of mounting a credible defense against an aggressor in another region before the first crisis ended. As a result, war-fighting commanders prepared operational plans for regional conflicts with the assumption that no other conflict was ongoing or would occur after their conflict began. They therefore assumed that combat and support forces, strategic mobility assets, and other capabilities required to execute their plan would be available. The bottom-up review’s strategy envisions that U.S. forces could be engaged in offensive operations in two conflicts nearly simultaneously. This strategy requires DOD to meet the requirements of two war-fighting commanders at the same time. DOD officials stated that extensive analysis, beyond that conducted during the bottom-up review, is required to consider the implications of responding to two nearly simultaneous conflicts. Since the bottom-up review, DOD has begun additional analyses. 
According to the bottom-up review, if a major regional conflict occurs, DOD will deploy a substantial portion of its forces stationed in the United States and draw on forces assigned to overseas presence missions. If DOD believes it is prudent to do so, it will keep forces engaged in smaller-scale operations, such as peacekeeping, while responding to a single conflict. If a second conflict breaks out, DOD would need to deploy another block of forces, requiring a further reallocation of overseas presence forces, any forces still engaged in smaller-scale operations, and most of the remaining U.S.-based forces. In determining force requirements for the two-conflict strategy, DOD assumed that forces already engaged in other operations could redeploy to a regional conflict. However, DOD did not analyze the feasibility of or requirements for such a redeployment during the bottom-up review. For example, DOD did not consider (1) requirements for readiness upgrades for forces before redeployment, (2) requirements for diverting airlift and sealift assets to pick up personnel and equipment from the operation, and (3) the impact on the war-fighting commander involved in a regional conflict if combat and support forces engaged in other operations were not immediately available. DOD did not begin to analyze its assumption on redeploying forces from operations other than war until after completing the bottom-up review. In June 1994, the Army initiated a study of the impact of peace operations on Army requirements, including the implications of redeploying combat and support forces from such operations to regional conflicts. The Army does not expect to complete this analysis until early to mid-1995. Our work on the impact of peace operations on U.S. forces suggests that it would be difficult for certain support and combat forces to disengage and quickly redeploy to a major regional conflict. For example, certain Army support forces and specialized Air Force combat aircraft, such as the F-4G and F-15E, deployed to peace operations are the same forces needed in the early stages of a regional conflict. However, some support forces, such as transportation units that move personnel and cargo through ports, could not immediately redeploy because they would be needed to assist in redeploying other forces. Furthermore, while Air Force aircraft and aircrews could easily fly from the peace operation to a regional conflict, the maintenance and logistics support needed to keep the aircraft flying—supplies, equipment, and personnel—would have to wait for available airlift. Obtaining sufficient airlift to redeploy forces from a peace operation would be challenging because already limited airlift assets committed to deploying forces to the regional conflict would have to be diverted to pick up these forces. Finally, forces may need to upgrade their training, equipment, and supplies before redeploying. For example, according to Air Force officials, peace operations tend to degrade the overall combat readiness of Air Force flight crews. Similarly, naval aviators also find that they lose proficiency in some combat skills through prolonged participation in peace operations. We are reporting separately on the impact of peace operations on U.S. forces. According to the bottom-up review, certain specialized units or unique assets would be dual-tasked—shifted from the first regional conflict to the second conflict. 
In prior years, the Air Force had enough fighter and bomber aircraft to meet the war-fighting requirements of two regional conflicts. Now, however, DOD believes that it may not have certain assets, such as B-2 bombers, F-117 stealth fighters, and EF-111 aircraft, in sufficient quantities to support two conflicts, and it therefore may need to shift aircraft from one conflict to another. Although DOD assumed that dual-tasking would occur, it did not analyze how assets would be shifted from one conflict to another. For example, in determining force requirements, DOD did not determine what specific types and numbers of assets would need to be dual-tasked and when they could be redeployed, or whether sufficient logistical support, such as airlift, refueling aircraft, air crews, or spare parts kits, would be available for the redeployment. DOD officials explained that because a model for two nearly simultaneous conflicts does not exist, the modeling to determine force requirements during the bottom-up review did not simulate the shifting of assets from one conflict to another. Rather, DOD identified the specific number of assets required for each conflict and assumed that dual-tasking would compensate for any shortfalls. After the bottom-up review, the Air Force and its Air Mobility Command began analyzing the implications and requirements for dual-tasking based on assumptions contained in the Secretary of Defense’s May 1994 defense planning guidance. Among other things, these analyses—completed in August and May 1994, respectively—identified the specific assets that would be dual-tasked, the timing of redeployment, and the refueling aircraft needed to support the redeployment from one conflict to another. The Air Force is continuing to analyze the requirements for dual-tasking, including the availability of aircrews and spare parts kits. Furthermore, in November 1994, at the direction of the Chairman of the Joint Chiefs of Staff, DOD began a war game analysis of several variables of the two-conflict strategy, including requirements for dual-tasking. This analysis is expected to be completed sometime in 1995. As discussed in chapter 3, war-fighting commands are analyzing the two-conflict strategy using a different scenario and deployment concept from those outlined in the defense planning guidance. They, too, are examining dual-tasking, including how many and what type of assets would need to shift and at what point in the conflict such a shift could reasonably occur. Until this analysis is completed, currently projected for sometime in 1995, and its results are reconciled with ongoing Air Force and DOD studies, the specific requirements for dual-tasking will not be known. According to the bottom-up review, the illustrative planning scenarios that DOD used in determining force and strategy options for regional conflicts assumed that a well-armed regional power would initiate aggression thousands of miles from the United States. On short notice, U.S. forces from other areas would be rapidly deployed to the area and enter the battle as quickly as possible. Because DOD assumed that most of these forces would not be in the region when hostilities begin, it emphasized that sufficient strategic mobility—airlift, sealift, and prepositioning of equipment at forward locations—would be needed to successfully execute the two-conflict strategy. The bottom-up review called for specific enhancements to DOD’s existing strategic mobility capability—most of which DOD had identified in its 1991 mobility requirements study.
This congressionally required study determined future requirements for airlift, sealift, and prepositioning and recommended a program to improve DOD’s mobility capability. In conducting the study, DOD analyzed various scenarios involving single regional conflicts and a scenario involving two concurrent conflicts. Based on the requirements for the single-conflict scenario deemed to be most demanding, the study recommended increasing sealift capacity for prepositioned equipment and for rapid deployment of heavy Army divisions and other U.S. forces by (1) acquiring—through new construction and conversion—additional capacity equal to 20 large, medium-speed, roll-on/roll-off ships; (2) leasing two container ships; (3) expanding the Ready Reserve Force from 96 to 142 ships (an increase of 46 ships); and (4) increasing the overall readiness of the Ready Reserve Force; increasing U.S. capability to respond within the first few weeks of a regional conflict by prepositioning Army combat, support, and port-opening equipment aboard nine of the newly constructed or converted large, medium-speed ships (by fiscal year 1997); improving airlift by continuing the C-17 aircraft program, acquiring up to 120 aircraft; and improving the capability of the U.S. transportation system to move combat and support units from their peacetime locations to ports of embarkation by, among other things, purchasing 233 additional heavy-lift rail cars and developing an ammunition loading facility on the U.S. west coast. According to the mobility study, the recommended program reflected a moderate-risk and affordable mobility force for a single regional conflict that would enable DOD to move 4-2/3 Army divisions in 6 weeks. The study also concluded that its recommended program was not sufficient to handle a second concurrent major regional conflict. During the bottom-up review, DOD relied heavily on the results of the mobility study when considering mobility requirements for the two-conflict strategy. The bottom-up review endorsed the mobility study’s recommendations and called for increasing the amount of equipment prepositioned on land in the Persian Gulf area. At the time of the bottom-up review, DOD had a battalion-sized set of equipment ashore in the Persian Gulf and planned to increase this prepositioning to two brigade sets. DOD believed that the prepositioning was necessary because the bottom-up review envisioned that forces would need to deploy more quickly than provided for in the 1991 study. After completing the bottom-up review, DOD initiated a detailed analysis of mobility requirements for the two-conflict strategy to validate its recommendations in the 1991 mobility study and the bottom-up review. According to DOD, this study was required because of significant changes resulting from the bottom-up review and delays in DOD’s mobility program. For example, the bottom-up review and related defense planning guidance presented a new military strategy, changed the overall force structure, and called for enhancements in war-fighting capability. Furthermore, as discussed later, DOD experienced delays in acquiring C-17 aircraft. By February 1995, DOD expects to complete its study, identifying any changes in mobility requirements and necessary adjustments to its mobility program. DOD will then identify the appropriate mix of specific airlift aircraft—C-17s and alternatives to the C-17. DOD plans to complete this mix analysis by November 1995.
Until the two studies are complete, DOD will not know the overall mobility requirements and related costs for its two-conflict strategy. During the bottom-up review, DOD assumed that sufficient support units would be available to support combat operations in two nearly simultaneous major regional conflicts. However, the Army currently does not have all the units needed to support its overall combat force. Furthermore, analysis of current U.S. plans for responding to regional conflicts indicates that the Army lacks sufficient units for a particular conflict and would have even more difficulty supporting two conflicts. The bottom-up review did not analyze the specific types and quantities of Army support units needed to execute the two-conflict strategy. In modeling force and strategy options, DOD used notional numbers to simulate the support forces that would typically deploy to support an Army division. It assumed that the Army would deploy with all of the specific support units needed to support its combat forces. According to DOD officials, they did not thoroughly analyze support requirements because of the short time frame for completing the bottom-up review. In September 1994, the Army began analyzing support requirements for its two-conflict combat force of 10 active divisions and 15 Army National Guard enhanced brigades—existing Guard combat brigades with improved readiness. This analysis was part of the Army’s biennial process for determining support needs. The process, referred to as the Total Army Analysis, identifies the numbers and types of units needed to support a given combat force in a designated scenario and the personnel and equipment needed to fill these units. The Army then weighs other priorities, such as combat requirements and the risks involved if support requirements are not fully met, and decides how many support units to fill, given available funding. The Army planned to complete by mid-1995 its Total Army Analysis of support requirements for the two-conflict strategy based on the bottom-up review force. Although the bottom-up review assumed that the Army would have sufficient support forces, the Army cannot fully support its current active force of 12 divisions, and Army officials anticipate that shortfalls will also exist for the two-conflict combat force. In an earlier Total Army Analysis of support requirements for the 12-division force, the Army was unable to fill 838 support units, including engineer, medical, quartermaster, and transportation units. Although these 838 units, as a whole, represent a small portion of the Army’s total support units, they reflect key capabilities that the Army has determined are required to support combat operations. While the number of active divisions in the two-conflict force is smaller than in the current force, the total number of personnel allotted to the Army under the bottom-up review is also smaller, leaving fewer people to fill support units. Army officials involved in the ongoing Total Army Analysis therefore believe the analysis will reveal that the Army cannot fully fill all support units needed for the two-conflict strategy and force. In the past, the Army has had difficulty generating sufficient support units for deployed combat forces, and it currently does not have certain types of units called for in plans for a single regional conflict.
In 1992, we reported that in trying to support a combat force of about eight divisions during the Persian Gulf War, the Army deployed virtually all of some types of support units and completely exhausted others. For example, the Army deployed virtually all prisoner-handling, postal, and medium truck units and all graves registration, pipeline and terminal operation, heavy truck, and water supply units. Because of favorable conditions, such as a long lead time for deployment, extensive host nation support from Saudi Arabia, a ground offensive of short duration, and the lack of a second conflict requiring a U.S. response, the Army was able to mitigate most of the adverse impact of its support shortfalls during the Gulf War. The bottom-up review strategy and force present a greater challenge because the Army may need to generate support forces for at least 10 active divisions deployed nearly simultaneously, with little warning time, to two major conflicts. Analysis of current U.S. plans for two particular regional conflicts indicates that the Army would face the same types of difficulties it encountered during the Gulf War. Our examination of the requirements for 17 types of support units contained in the plans showed that the Army (1) lacks a total of 238 units to meet the requirements of a single conflict and (2) has tasked 654 units to support combat operations in both conflicts. Table 2.1 shows the number of units, by type, that the Army lacks for a single conflict and that are assigned to both plans. As shown in table 2.1, the largest shortfalls in units required for a single conflict occurred in five types—medical (84 units), engineer (33 units), quartermaster (20 units), military police (40 units), and transportation (29 units)—totaling 206 units. For two plans—each covering a different conflict—the shortfall would increase to 338 units. Table 2.2 shows a breakdown of this shortfall. We are reporting separately on the Army’s ability to provide support forces for the two-conflict strategy, including options for alleviating possible shortfalls. The bottom-up review called for 15 Army National Guard enhanced brigades to execute the two-conflict strategy and about 22 other National Guard brigades—now organized as 8 divisions—for other purposes, including providing the basis for rotational forces in extended crises and fulfilling domestic missions. We believe that these divisions include support units, personnel, and equipment that the Army may be able to draw upon to augment its support capability. The Army’s portion of the forces for the two-conflict strategy consists of 10 active divisions and 15 Army National Guard enhanced brigades. The bottom-up review stated that the enhanced brigades were needed to execute the two-conflict strategy and assigned them the broad mission of reinforcing active divisions in regional conflicts. For example, DOD envisioned that these brigades would deploy to one or both conflicts if operations did not go as planned or would replace overseas presence forces redeployed to a regional conflict. The bottom-up review further stated that, in the future, Guard combat brigades would be organized and filled so that they could be mobilized, trained, and deployed more quickly. It committed the Army to focus on readiness initiatives directed toward the enhanced brigades and established a specific goal to have these brigades ready to begin deployment within 90 days of being called to active duty.
In April 1994, the Army Chief of Staff approved the 15 Guard brigades selected—8 heavy brigades and 7 light brigades—as the enhanced brigades. Although DOD assumed that the enhanced brigades would deploy quickly to reinforce active divisions in a regional conflict, it did not analyze the specific wartime requirements for these brigades. DOD officials stated that in analyzing force options for responding to regional conflicts, they used notional active Army brigades and did not test the impact on the war fight of deploying reserve enhanced brigades. Furthermore, DOD did not determine basic factors such as the (1) specific wartime missions of the enhanced brigades and the timing for deploying the brigades, (2) ability of National Guard combat brigades to deploy quickly and fulfill combat missions given readiness problems experienced during the Gulf War, and (3) specific capability enhancements needed to improve the brigades’ readiness. Because fundamental questions remained about the brigades, the Army formed a task force in November 1993 to do an in-depth study of alternatives for organizing, tasking, training, and equipping the brigades. In April 1994, the Army Chief of Staff confirmed, based on the task force’s findings, the bottom-up review’s assertion that the enhanced brigades would reinforce active forces. However, the brigades’ specific missions, such as whether the brigades would conduct combat maneuvers, provide security, or perform other tasks, are still undefined. As discussed in chapter 3, war-fighting commands are just beginning to analyze how and when the enhanced brigades might be used in a regional conflict. The Army Chief of Staff also determined that the brigades would
- maintain personnel and equipment at the highest readiness level during peacetime and be ready to deploy at this level no later than 90 days after being called up;
- train with specific divisions or corps in peacetime, but maintain the flexibility to operate with any division or corps in wartime;
- focus their training on mission-essential tasks involving movement (maneuvering) to contact with the enemy, attacks on enemy positions, and defense against enemy attacks;
- be of standard Army design for heavy and light brigades and armored cavalry regiments; and
- be equipped and modernized in a manner compatible with active divisions.
The U.S. Army Forces Command was tasked to develop and test a training strategy to ensure that the enhanced brigades meet the 90-day deployment goal. This strategy will include any necessary adjustments to the Army’s current training program for Guard combat brigades. Army headquarters elements were tasked to identify the requirements and costs associated with equipping the brigades. As of January 1995, the Army expected to complete the equipment study in February 1995 and the training strategy in mid-1995. Once the training strategy is completed, the Army envisions that by 1999 the strategy will have been tested on only 3 of the 15 brigades. Based on the test results, the Army will decide whether to apply the training strategy to the remaining brigades. The bottom-up review’s goal to have enhanced brigades ready to deploy within 90 days of being called to active duty is based on Army estimates that the brigades will need 90 days of post-mobilization training to achieve proficiency in more complex skills at higher echelons, such as companies and battalions. However, these estimates assumed that the brigades will have achieved proficiency in basic skills at the individual soldier, crew, and platoon level during peacetime training.
During the Persian Gulf War, three Guard combat brigades were activated, but the Army did not deploy any of these brigades. Instead, they remained in training status until the war was over. As we reported in November 1992 and testified in March 1994, the brigades experienced problems in achieving proficiency in basic skills at the time of mobilization. For example, many Guard soldiers were not completely trained to do their jobs, many tank and Bradley Fighting Vehicle crews were not proficient in gunnery, and many commissioned and noncommissioned officers had not completed required leadership courses. As a result, Guard brigades were trained to achieve proficiency in many basic skills, rather than more complex skills, after mobilization. Because the Army believed the brigades were not ready to deploy, it substituted active brigades. Contributing to the brigades’ training problems was the fact that reserve forces generally train only about 39 days each year, and a considerable portion of this time can be taken up by administrative matters or by traveling long distances to reach training ranges. Because of the Gulf War experience, the Army significantly changed its strategy for training Guard combat brigades, including implementing an initiative called Bold Shift. This project, initiated in September 1991, was designed to focus brigade training during peacetime at the basic—individual, crew, and platoon—level. Prior to this initiative, peacetime training encompassed both basic and complex skills. Our ongoing work on the Bold Shift program suggests that Guard combat brigades are continuing to experience problems in achieving proficiency in basic skills. For example, as we stated in our March 1994 testimony, 1992 training data for seven existing Guard combat brigades showed that none had reached pre-mobilization training and readiness goals. Our analysis of 1993 training data confirmed that this trend is continuing. We are reporting separately on the specific training problems and progress of Guard combat brigades under Bold Shift. As discussed in chapter 1, the bottom-up review described several specific enhancements to U.S. capabilities as key to the projected force’s ability to fight and win two nearly simultaneous conflicts, including improving strategic mobility and the lethality of U.S. firepower. According to DOD, these improvements would compensate for the loss in capability resulting from reductions in forces required in the bottom-up review. Although DOD estimated that most enhancements would be done by about 2000, some may not come on line as planned or at all. To improve strategic mobility, DOD’s plans included procuring C-17 airlift aircraft, increasing the number of sealift ships available, improving the responsiveness of the Ready Reserve Force, and prepositioning additional equipment on land. At the time of the bottom-up review, DOD assumed that by 1999, 80 of 120 C-17s and 21 additional Ready Reserve Force roll-on/roll-off ships would be available as planned. By the same time, DOD expected to preposition equipment on ships and increase the amount of equipment prepositioned on land in the Persian Gulf area from a battalion-sized set to two brigade sets, located in two different locations. War-fighting command officials stated that prepositioning this equipment is critical to executing the two-conflict strategy. As of January 1995, DOD, as we reported in November 1994, had made progress in improving the responsiveness of the Ready Reserve Force.
It had also prepositioned a brigade set of equipment on ships and nearly completed prepositioning a brigade set of equipment on land in the Persian Gulf area. DOD has encountered some problems or funding uncertainties in acquiring additional airlift and sealift and prepositioning the second brigade set on land. Specifically, DOD’s assumption that 80 C-17 aircraft would be available by fiscal year 1999 was overly optimistic. Since its inception, the C-17 program has been plagued with cost, schedule, and performance problems. We testified in April 1994 that total costs continued to grow, delivery schedules had slipped, and aircraft had been delivered with unfinished work or known deficiencies. In December 1993, the Secretary of Defense decided to limit the program to 40 aircraft unless the contractor significantly improved management and productivity. Furthermore, as discussed previously, the Secretary also decided to study alternatives for a mixed airlift force of C-17s and nondevelopmental—commercial or military—aircraft. DOD expects to complete the study in November 1995 and at that time will decide whether to procure additional C-17s. As of October 1994, the contractor had delivered 15 C-17 aircraft and planned to deliver the remaining 25 aircraft by September 1998. As of October 1994, the Department of Transportation had acquired 14 of the 21 Ready Reserve Force ships planned to be available for DOD’s mobility program by fiscal year 1999. It planned to acquire the remaining seven ships with funds remaining from fiscal year 1994 and requested for fiscal year 1995. However, during fiscal year 1995 deliberations, the Congress rescinded $158 million in fiscal year 1994 funds programmed for the seven ships, but provided $43 million in fiscal year 1995 funds. DOD believes that this funding will be sufficient to procure two ships and plans to program funds for the remaining five ships in its budgets for 1996 to 1998. DOD’s plans to preposition the second of two brigade sets of equipment ashore in the Persian Gulf are also uncertain. As of January 1995, the U.S. Central Command had identified a location for the second set of equipment and reached necessary agreements with the host country. However, according to DOD officials, the Army had obtained funding only for the site survey and the project’s design. The Army plans to request funding in its fiscal year 1996 budget submission for the remainder of the project over a 3-year period covering fiscal years 1996-98. The bottom-up review called for various improvements to the lethality of U.S. firepower, including development of precision-guided munitions and the addition of air-to-ground attack capability to the Navy’s F-14 aircraft (referred to as the Block I upgrade). At the time of our review, these improvements were part of DOD’s ongoing programs and therefore reflected capabilities that were already planned. DOD assumed that sufficient quantities of precision munitions for the two-conflict strategy would be available by about the year 2000 and the Block I upgrade would be completed by the year 2003. The bottom-up review emphasized that precision-guided munitions already in the U.S. inventory, as well as new types of munitions still under development, are needed to ensure that U.S. forces can operate successfully in future major regional conflicts and other operations. It noted that they hold the promise of dramatically improving the ability of U.S. 
forces to destroy enemy armored vehicles and halt invading ground forces, as well as destroy fixed targets at longer ranges, thus reducing exposure to enemy air defenses. Specific antiarmor precision munitions cited included the Tri-Service Standoff Attack Missile. The Tri-Service Standoff Attack Missile will not come on line as planned. Because of significant developmental difficulties and growth in the expected unit cost, DOD canceled the Tri-Service Standoff Attack Missile program. We have reported extensively on cost, schedule, and performance problems with the Tri-Service Standoff Attack Missile. Furthermore, we concluded that the Navy did not adequately justify the need for the Block I upgrade. During deliberations on DOD’s fiscal year 1995 appropriation, the Congress canceled funding for the F-14 Block I upgrade because of questions about its affordability. The strategy of fighting and winning two nearly simultaneous conflicts will require a significant change in military planning for the deployment and use of U.S. forces. However, in the bottom-up review, DOD determined the strategy, forces, capability enhancements, and estimated costs for accomplishing the strategy without sufficiently analyzing key assumptions to ensure their validity. Until DOD fully analyzes basic factors, such as whether forces engaged in other operations that are needed in the early stages of a regional conflict can quickly redeploy, whether sufficient mobility and support forces exist, whether reserve brigades can deploy when needed, and whether improvements in capabilities will be available, it will not have a firm basis for determining the forces, supporting capabilities, and funding needed for the two-conflict strategy or whether the strategy should be changed. DOD disagreed with our overall conclusion that DOD did not adequately analyze the assumptions used in the bottom-up review. DOD said that its leadership recognized practical limitations on the scope of analysis that could be done in the time available and fully considered these limitations in making decisions about key aspects of the long-term defense program. DOD further stated that, in raising questions about the bottom-up review’s assumptions, we did not recognize the difference between broad conceptual force planning and detailed operational planning. DOD said that it did not develop actual war plans, but rather identified broad, but comprehensive, requirements that U.S. forces should be able to meet to carry out crucial elements of DOD’s defense strategy. DOD also stated that to ensure adequate force planning, it recognized the need to continually refine and update its assessments. DOD noted that, to date, follow-on analyses have upheld the basic tenets and findings of the bottom-up review. We were unable to confirm DOD’s statement regarding the results of the follow-on analyses because DOD will not make these results available until the studies are completed. We recognize that DOD was faced with time limitations in doing the bottom-up review and was therefore restricted in the extent of analyses that could be done. We also agree that the bottom-up review was a broad force planning and programming effort rather than a war-planning effort. In fact, in chapters 1 and 3, we clearly distinguish between the bottom-up review and detailed future operational planning.
However, in the bottom-up review, DOD made a specific judgment that the United States would maintain the capability to fight and win two nearly simultaneous major regional conflicts and decided on the specific size and composition of the force capable of meeting this strategy. In making these decisions, DOD made critical assumptions about factors that are key to the successful execution of the two-conflict strategy without performing sufficient analyses to test the validity of its assumptions. In fact, DOD and the war-fighting commands are now exploring basic questions about DOD’s assumptions, such as whether forces involved in smaller-scale operations can actually be available when needed to deploy to a regional conflict, whether the same combat forces would be needed at the same time in both regional conflicts, and whether the Army has sufficient support for nearly simultaneous combat operations in two conflicts. DOD also disagreed with our specific findings that (1) it did not assess requirements for shifting assets between regional conflicts, (2) it did not fully assess mobility requirements, and (3) the Army would be challenged in supporting two major regional conflicts. First, DOD stated that it has ample experience in rapidly deploying forces, particularly combat and support aircraft, from one theater to another. DOD said that in its bottom-up review analysis, it made judgments about its future ability to shift assets based on that experience. We agree that DOD has ample experience in redeploying forces from one theater to another. We note, however, that DOD’s experience has not included redeployments from one major regional conflict to another, as envisioned in the bottom-up review and defense planning guidance scenario. Furthermore, as discussed in chapter 3, the war-fighting commands’ study has raised questions about shifting assets between conflicts. For these reasons, we continue to believe that the bottom-up review did not adequately assess requirements for shifting assets between conflicts. Second, DOD stated that in assessing mobility requirements during the bottom-up review, it relied heavily on its 1991 mobility requirements study. DOD believes that it understands the vast majority of its basic lift requirements and capabilities for responding to two nearly simultaneous conflicts. We agree that the 1991 study provided a useful baseline; however, the bottom-up review resulted in significant changes in mobility assumptions. DOD did not begin to analyze these changes until after the bottom-up review. Furthermore, the 1991 study concluded that its recommended mobility program was not sufficient for two concurrent conflicts. Until DOD’s reassessment of mobility requirements is complete, we continue to believe that DOD will not know the extent of strategic airlift, sealift, and prepositioning needed to support two major regional conflicts. Finally, DOD stated that the Army demonstrated, as recently as Operation Desert Storm in 1991, that it can fully support large-scale combat operations in a single major regional conflict. DOD also believes that it is premature to draw conclusions regarding Army support shortfalls until the Army completes its ongoing analysis of support requirements for the two-conflict strategy. We recognize that the Army was able to support combat operations during Operation Desert Storm; however, as discussed in chapter 2, the Army did encounter difficulties.
Also, the operation was conducted under several favorable circumstances; for example, there was no second conflict at the same time. Furthermore, we did not conclude that the Army could not support two major regional conflicts. Rather, we showed that DOD did not analyze the validity of its assumption that sufficient support forces would be available and that various factors suggest that the Army would be challenged in meeting this requirement. We agree that the Army’s ongoing analysis will identify specific requirements and shortfalls. Additional annotated evaluations of DOD’s comments are presented in appendix I. War-fighting command officials believe that DOD’s concept for responding to two nearly simultaneous major regional conflicts—detailed in defense planning guidance that is being used to develop program and budget requirements—may not be the best approach. Their estimates of key characteristics of a situation involving two nearly simultaneous conflicts and the deployment of forces differ significantly from DOD’s estimates, including the amount of warning time for both conflicts and the time between the onset of each conflict, the mix of combat forces needed to respond to each conflict, and the timing of force deployments. As a result, the commands are examining options they believe may maximize the use of U.S. capabilities. Command officials emphasized that they are not suggesting the United States cannot accomplish the two-conflict strategy. Their study is analyzing many of the variables that DOD made assumptions about during the bottom-up review, such as shifting assets between conflicts and the sufficiency of strategic lift. In May 1994, the Secretary of Defense issued his defense planning guidance for the 5-year planning period 1996 to 2001. This guidance provided several illustrative planning scenarios depicting the challenges U.S. forces might face during the planning period and generic force packages representing the types of military capability needed to address these challenges. The specific scenarios covered single regional conflicts, two nearly simultaneous conflicts, and various smaller-scale operations. They included a detailed summary of the situation, enemy objectives and forces, U.S. objectives and forces, projected warning times of enemy attack, a schedule for the deployment of U.S. forces to the conflict area, and assumptions governing the circumstances depicted in the scenario. According to the defense guidance, the illustrative scenarios, among other things, (1) provide a “technical yardstick” to help focus, develop, and evaluate defense forces and programs in further detail and (2) enable service components to formulate detailed programs that provide levels of readiness, sustainability, support, and mobility appropriate to the bottom-up review’s two-conflict strategy. For example, DOD is using the defense planning guidance scenario for two nearly simultaneous conflicts as a basis for its study of mobility requirements, and the Air Force and its Air Mobility Command used the scenario in examining requirements for dual-tasking assets and refueling aircraft (see chap. 2). The Joint Chiefs of Staff will use the defense planning guidance and scenario in apportioning specific forces, strategic lift, prepositioning, and other assets to war-fighting commanders for accomplishing assigned missions, including responding to regional conflicts.
In general, the defense planning guidance scenario for nearly simultaneous conflicts depicted a situation in which a second conflict breaks out while the United States is engaged in and preoccupied with a major regional conflict a considerable distance away. The scenario envisioned that U.S. combat and supporting capabilities, including strategic mobility, would first be focused on responding to the first conflict until indications of a second conflict were recognized. It made several key assumptions, including the anticipated warning time, the number of days separating the two conflicts, the forces sufficient to respond to each conflict, the additional forces available to the war-fighting commanders if adverse conditions developed, and the timing of various combat phases. Specific details about the scenario and assumptions are classified. Two war-fighting commands with responsibility for responding to major regional conflicts question whether the defense planning guidance scenario being used to develop program and budgetary requirements for the two-conflict strategy reflects the best approach. Specifically, they believe that the guidance may not best reflect how two nearly simultaneous conflicts would evolve and how the United States should respond. Their overall concern is that the scenario focuses on responding to the first conflict and then the second conflict and does not sufficiently recognize the value of taking significant action to deter the second conflict when the first conflict occurs. The specific details of the commands’ concerns are classified. The commands are also concerned about specific aspects of the scenario and its assumptions, including the following:
- The warning time for both conflicts and the separation time between the two conflicts are likely to be shorter than DOD envisions.
- DOD’s concept for deploying forces may not provide the mix of combat and supporting capability that the two commands believe is necessary to successfully respond to two nearly simultaneous conflicts.
- The scenario does not recognize that both commands have operational requirements for some of the same air, ground, and naval forces and prepositioned equipment that, if deployed to the first conflict, may not be available when needed for the second conflict.
- The apportionment of strategic airlift and sealift assets is inadequate and should be based on a different concept for deploying forces.
- Both commands will likely require many of the same support forces; however, the scenario only addresses combat forces.
- A higher level of mobilization of reserve forces than called for in the scenario will likely be required.
Because of these concerns, the two commands, in February 1994, initiated a joint study to assess the feasibility of responding to two nearly simultaneous major regional conflicts with the bottom-up review force. They are using a scenario and deployment concept that differs from the defense planning guidance scenario. Command officials emphasized that, by initiating the study, they are not suggesting that the United States cannot accomplish the two-conflict strategy. Rather, they are examining options that they believe (1) lessen the possibility that U.S. forces will be required to engage in two major regional conflicts at the same time and (2) put U.S. forces in a better position to be successful in both conflicts if deterrence fails.
This study will examine various aspects of the two-conflict strategy, including the number and type of assets required to shift between conflicts and at what point such a shift could reasonably occur. As of January 1995, the commands had reached preliminary conclusions and did not expect to complete the study until sometime later in 1995. However, according to command officials, the study thus far has validated many of their concerns about the defense planning guidance scenario and raised questions about DOD’s bottom-up review assumptions, including the availability of strategic airlift and support forces, shifting assets between conflicts, and how and when enhanced brigades would be needed. Based on their preliminary study results, the commands hope to influence DOD and Joint Staff thinking in apportioning forces and preparing future defense planning guidance for developing program and budgetary requirements. Command officials emphasized that their study will not address detailed operational planning for executing the two-conflict strategy or determine specific operational requirements. This process will occur after the Joint Staff formally apportions forces and missions in the Joint Strategic Capabilities Plan—expected to be issued in early 1995. Command officials expect the plan to task them to develop plans and deployment schedules for a single regional conflict scenario in their respective areas, assuming no other conflicts are occurring, and for two nearly simultaneous conflicts, assuming that their command is involved in the second of the two conflicts. In the past, commands have been tasked only to prepare a concept summary on how they would respond if they were in the second conflict. Based on the tasking, the commands will develop operational plans followed by detailed deployment schedules for their respective regional conflicts. As part of this process, the commands will determine their specific requirements for executing the plans and schedules, such as combat forces, mobility, sustainability, and munitions. The commands estimate that it would take about 18 months, from the time the Joint Strategic Capabilities Plan is issued, to complete the plans and deployment schedules. DOD officials agreed that the commands’ concept of executing the two-conflict strategy differs from the defense planning guidance and that the commands’ study could generate a different baseline for determining defense requirements, budgets, and plans. They stated that reconciling the differences when the study becomes available may be necessary, but until then, the defense planning guidance remains the basis of DOD planning for the two-conflict strategy. In developing the defense planning guidance scenario that military planners will use to develop program and budget requirements for the two-conflict strategy, DOD used a specific concept for deploying forces and supporting capabilities. Key war-fighting commands believe that the scenario may not reflect the most effective deployment and use of U.S. capabilities and are analyzing alternatives. Their analysis is addressing many of DOD’s key bottom-up review assumptions regarding key aspects of the two-conflict strategy and could provide useful insights for determining the validity of these assumptions. 
We recommend that in the congressionally mandated examination of the bottom-up review, the Secretary of Defense (1) thoroughly examine the assumptions related to the redeployment of forces from other operations to major regional conflicts, the availability of strategic mobility assets and Army support forces, the deployability of Army National Guard enhanced brigades, and the planned enhancements to strategic mobility and U.S. firepower and (2) consider the options being examined by the war-fighting commands. DOD agreed with our recommendations and noted that it is conducting detailed studies to address many of the issues raised. DOD stated that it will reflect the results of these studies in its response to the congressionally mandated report on the bottom-up review. As discussed in chapter 2, DOD stated that in raising questions about the bottom-up review’s assumptions, we did not recognize the difference between broad conceptual force planning and operational planning for using specific forces to undertake specific operations. We note that DOD’s comments imply that the war-fighting commands’ study is similar to detailed operational planning. As discussed in chapter 3, the commands are examining options for executing the strategy on a macro scale rather than developing specific detailed plans and requirements.
GAO reviewed the key assumptions the Department of Defense (DOD) made during its bottom-up review to determine whether they reasonably support the execution of a two-conflict strategy. GAO found that: (1) the strategy of fighting and winning two nearly simultaneous conflicts will require a significant change in military planning; (2) DOD has not fully analyzed key bottom-up review assumptions about the ability of forces to redeploy from other operations to regional conflicts or between conflicts, the availability of strategic lift and support forces, or the deployability of Army National Guard combat brigades; (3) war-fighting command officials believe that the DOD plan for responding to two simultaneous major regional conflicts is questionable; (4) these officials’ estimates of the amount of warning time and the time between the onset of each conflict, the mix of combat forces needed to respond to each conflict, and the timing of force deployments differ significantly from DOD estimates; (5) the military commands believe that the DOD scenario may not reflect the most effective deployment of U.S. forces and they are examining options they believe may maximize the use of U.S. capabilities; and (6) until DOD fully analyzes its bottom-up review assumptions and considers the war-fighting commands’ options, it will not be able to determine force size and mix, the supporting capabilities and funding needed for the two-conflict strategy, or whether the strategy should be changed.
In the United States, DOE’s EIA collects and reports data provided voluntarily by the states on the amount of natural gas flared and vented, but the data are incomplete, inconsistent, and not as useful as they could be from an environmental perspective. Information on gas flared and vented outside the United States is also limited, since international reporting generally is voluntary and no single organization is responsible for collecting and reporting these data. EIA could improve flaring and venting data by enhancing its guidance to states and collecting data directly from oil and gas producers; EIA, MMS, and BLM could also improve data by collecting and reporting data on venting separately from data on flaring. By taking these actions, the federal government could serve as a model for global data collection and reporting. EIA could also investigate improvements in global data collection and reporting by using satellite images and data to better estimate the volume of natural gas flared in other countries and by continuing to support international efforts to improve data. In the United States, data collected and reported on the flaring and venting of natural gas associated with oil and gas production are incomplete, inconsistent, and not as useful as they could be from an environmental perspective. Regarding the completeness of the data, although MMS and BLM require companies that lease federal lands and offshore areas for oil and gas production to report flaring and venting statistics, EIA does not use its authority to require information on flaring and venting from all other U.S. oil- and gas-producing companies. According to EIA officials, the data on flaring and venting that EIA collects as part of overall oil and gas production data represent a relatively small portion of EIA’s energy data reporting program. Since the agency has limited resources, these data are a relatively low priority. As a result, rather than requiring that the estimated 20,000 domestic oil- and gas-producing companies provide flaring and venting data—which would consume considerable resources—EIA collects this information from the oil- and gas-producing states on a voluntary basis. Many states do not provide this information, however, and EIA has no authority to require them to do so. Consequently, EIA’s flaring and venting information is incomplete. In addition to being incomplete, the data the states provide to EIA are also inconsistent. Since flaring and venting data are not a high priority for EIA, the agency has provided only limited guidelines to states to promote consistency in the information that they voluntarily submit. As a result, only 8 of the 32 oil- and gas-producing states provide data that EIA considers consistent, leaving EIA to estimate the amount of flaring and venting in the other 24 states. When we asked state officials about EIA guidelines for reporting, officials from 15 states said they were unsure what information EIA wanted and how EIA wanted it presented. Most state officials who answered our questions about the guidelines thought the guidelines needed improvement, and some officials said they would like to participate in developing improved guidelines to ensure that the states would be able to meet EIA’s requests. The data that EIA, MMS, and BLM collect are further limited because they do not distinguish between gas that is flared and gas that is vented. As a result, from an environmental perspective, the information is not as useful as it could be.
EIA, MMS, and BLM do not collect separate flaring and venting data because their focus is on the amount of gas produced for the market and not on the gas that is lost through flaring and venting. EPA, on the other hand, considers flared and vented gas in the context of the damage it could inflict on the environment: vented gas emissions (methane), and to a lesser extent flared gas emissions (carbon dioxide), contribute to total greenhouse gases. According to EPA officials, differentiating data on flaring and venting could improve EPA estimates of each gas’s contribution to total greenhouse gases in the atmosphere. Because EPA does not collect its own flaring and venting data, however, it must rely on the combined data that these other agencies collect. EIA believes that the data the states voluntarily provide on production—from which the data on flaring and venting are taken—could be improved as well. In particular, because some states do not report information at all, EIA is considering using its authority to collect information on production directly from natural gas well operators. Toward that end, EIA has published in the Federal Register, for comment, a proposed sample survey of monthly natural gas production. If the survey were implemented, well operators would be required to provide EIA with production data, and the form would include a category for flaring and venting data. Among other things, EIA has sought comments from well operators as to whether they can provide reliable measures of gas flared and vented. However, even if the proposed sample form is implemented and, as proposed, collects data on gas flared and vented, the focus will be on improving production data and not on flaring and venting data. Outside the United States, information on gas flared and vented is even more limited. Generally, international reporting is also voluntary, and no single organization is responsible for collecting and reporting flaring and venting data. Although several organizations collect data voluntarily provided by countries with which they have a working relationship, the numbers countries report are sometimes questionable. For example, the United Nations requests data as part of its work on climate change, but few countries report meaningful data. In addition to U.S. data, EIA also reports worldwide information largely based on estimates developed by Cedigaz, an oil and gas industry association that gathers what is generally recognized as the best flaring and venting information available. A Cedigaz official told us that Cedigaz relies on submissions from countries and companies around the world to make its estimates and that it accepts the information the countries report unless Cedigaz has knowledge of a country’s operations that could be used to improve the accuracy of the amounts reported. For example, on the basis of submissions by Russia and China—two important petroleum-producing countries—Cedigaz has reported that these countries do not flare or vent. World Bank officials told us, however, that it is generally known that Russia regularly flares and vents gas. Satellite images created by NOAA have confirmed that Russia does, in fact, engage in flaring. In addition, as in the United States, the global data are limited from an environmental perspective because they do not distinguish between the amounts flared and the amounts vented.
More accurate worldwide data would provide a clearer understanding of both the extent to which flaring and venting emissions contribute to total greenhouse gases and the countries that do the most flaring and venting. This would provide a basis for targeting actions designed to prevent the waste of a potentially valuable resource while at the same time reducing harmful emissions into the atmosphere. Federal agencies have a number of opportunities available to them to improve the information on flaring and venting. EIA could clarify its guidelines to states for collecting and reporting flaring and venting data. Currently, EIA estimates that about 75 percent of the reports it receives from oil- and gas-producing states contain inconsistent data. State officials believe that they could better meet EIA’s data needs—that is, provide EIA with more consistent data—if they had more comprehensive guidance from EIA on the data it wants and how to report them. Similarly, as the reporting of greenhouse gas emissions has become more widespread globally, the oil and gas industry has begun to recognize the need for guidance on how emissions, such as carbon dioxide and methane, should be collected and reported. In December 2003, three petroleum industry associations jointly issued a report, “Petroleum Industry Guidelines for Reporting Greenhouse Gas Emissions,” to promote consistency in collecting and reporting petroleum industry greenhouse gas emissions. EIA could consider using these guidelines while working with industry and state officials to improve its guidelines to states for reporting emissions from the flaring and venting of natural gas. Another opportunity for improving the data is for EIA to consider using its general energy information collecting authority to collect data on flaring and venting directly from the oil and gas producers, rather than relying on voluntary submissions by states. (Producers with federal leases are already required to collect and report this information to MMS and BLM.) While EIA considers these data a relatively low priority, and while collecting data from all 20,000 domestic producers could involve extensive resources, there may be efficient and cost-effective methods of collecting sample data. In addition, from an environmental perspective, the federal government could broaden the usefulness of flaring and venting information by distinguishing between the amounts of gas flared and the amounts vented. Since natural gas that is vented has a more significant effect on the atmosphere than natural gas that is flared, reporting these emissions separately could enable EPA to better estimate methane’s and carbon dioxide’s contributions to greenhouse gases. Finally, on a worldwide basis, the federal government could improve flaring and venting data in several ways. First, the U.S. government could continue to improve its own data, thereby providing an example for other countries. Second, EIA, working with NOAA, could investigate the feasibility of supplementing the data already available by using U.S. satellite images. For example, figure 2 (see p. 14) shows worldwide flaring identified in 2002 using satellite technology. According to a NOAA physical scientist, analyzing these satellite data could validate the amount of flaring reported by countries. For example, some countries, like Russia, report no flaring, while satellite images show that substantial flaring is actually occurring.
Third, the federal government could continue to support efforts such as the World Bank Global Gas Flaring Reduction Partnership (GGFR) that, in part, seeks to develop guidance for data collection and reporting. According to a 2004 GGFR report, such guidance could improve natural gas flaring and venting data. According to the limited data available, the amount of natural gas emitted through flaring and venting is small compared with overall natural gas production, but these emissions represent a significant amount of lost energy. Flaring and venting are concentrated in several parts of the world and, in the United States, in four states and the Gulf of Mexico. Although worldwide estimates of flaring and venting constitute a small portion of total greenhouse gas emissions, many countries have undertaken efforts to reduce flaring and venting. While flaring and venting represent only 3 percent of total natural gas production, the natural gas flared and vented—about 100 billion cubic meters a year—is enough to meet the annual natural gas consumption of both France and Germany. In general, the amount of flaring and venting emissions is related to the amount of oil produced: the higher the production, the more gas flared and vented. Since 1990, the quantity of oil produced has increased, but because of various global reduction initiatives, the quantity of natural gas flared and vented has remained constant. Consequently, natural gas emissions as a percentage of oil production have decreased. Flaring and venting of natural gas are concentrated in certain parts of the world, with Africa, the Middle East, and the former Soviet Union contributing about two-thirds of the global emissions from flaring and venting (see fig. 3). Working with available data, the World Bank has estimated that three countries—Nigeria (16 percent), Russia (11 percent), and Iran (10 percent)—are responsible for over one-third of global flaring and venting (see table 1). According to World Bank estimates, in 2000 eight nations accounted for 60 percent of the natural gas flared and vented: Algeria, Angola, Indonesia, Iran, Mexico, Nigeria, Russia, and Venezuela. In addition, some countries—for example, Angola and Cameroon—flare and vent most of the natural gas they produce. In contrast, EIA estimates that the United States flares or vents about 0.4 percent of its production annually. Within the United States, most of the reported flaring and venting has occurred in the active oil and gas production states and in the Gulf of Mexico (see table 2). Four states—Alaska, Louisiana, Texas, and Wyoming—plus the federal leases in the Gulf of Mexico account for almost 80 percent of all reported U.S. flaring and venting. None of these states flares and vents more than 0.8 percent of its total natural gas production. Worldwide, venting and flaring are estimated to contribute, respectively, about 4 percent of the total methane and about 1 percent of the total carbon dioxide emissions caused by human activity. Despite these small contributions, several countries have undertaken efforts to reduce flaring and venting emissions that have the potential to reduce greenhouse gases while saving an energy resource. Specifically, many countries have imposed requirements on oil and gas producers to eliminate emissions of gas within the next few years.
For example, Norway no longer allows the burning of petroleum in excess of the quantity needed for normal operational safety without the approval of the Ministry of Petroleum and Energy, and in 2003 Canada reported having achieved, through monitoring and regulation, a 70 percent reduction in flaring and venting emissions. In addition, corporations in several countries, in order to market their associated natural gas, either have constructed or are planning LNG plants to liquefy the gas for export or have developed on-site and local uses for the gas. For example, corporations operating in Nigeria currently have six LNG projects in development and have also begun using gas that otherwise would have been flared or vented to operate platform equipment as well as to produce cement, fertilizer, and gas that is usable as fuel for automobiles. Finally, some countries are exploring the potential of reinjecting carbon dioxide into wells instead of emitting it into the atmosphere. According to an oil company official, carbon dioxide reinjection in Algeria has prevented over one million tons of emissions—the equivalent of taking 200,000 cars off the road. The federal government has opportunities in several areas to help reduce flaring and venting both within the United States and globally. First, in the United States, on federal lands and offshore areas leased to producers, the federal government could consider regulatory changes to reduce the most harmful emissions resulting from venting and improve oversight of oil and gas production. Second, the federal government could promote programs that identify, and help industry implement, best practices for reducing natural gas emissions. On a global basis, exploring ways to address market barriers affecting associated natural gas, and continuing to work with other countries, could help reduce flaring and venting. The government has an opportunity to help reduce flaring and venting in the United States by considering regulatory changes and improving oversight of oil and gas production on federal lands and offshore areas leased to producers. If MMS and BLM required producers to flare gas (rather than allowing them to vent gas) when emitting the gas for operational purposes, the emissions impact on the atmosphere could be reduced. Since the impact of methane (venting) on the earth’s atmosphere is about 23 times greater than that of carbon dioxide (flaring), a small change in the ratio of flaring to venting could cause a disproportionate change in the impact of emissions. For example, if 90 percent of the associated gas volume was flared and 10 percent was vented, the amount vented would have more than twice the effect on the atmosphere as the amount flared (the arithmetic behind this example is spelled out at the end of this passage). In addition, although MMS and BLM require federal lessees to self-report estimated flaring and venting volumes, the agencies could require the use of flare and vent meters at production facilities, which could improve oversight by detecting how much gas is actually flared and vented. Currently, such meters are used by some oil- and gas-producing companies. The meters are placed in or adjacent to the stream of flared or vented gas to measure the volume emitted. While the identification of unauthorized flaring or venting is not commonplace on federal lands and offshore leases in the United States, unauthorized flaring and venting do occur.
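To spell out the arithmetic behind the 90/10 example above, the following back-of-the-envelope check uses the report's own simplifying assumption that, per unit of gas emitted, venting (methane) has about 23 times the atmospheric impact of flaring (carbon dioxide); assigning the flared share an impact weight of 1 is an added simplification for illustration:

\[
\frac{\text{impact of vented share}}{\text{impact of flared share}}
  = \frac{0.10 \times 23}{0.90 \times 1}
  = \frac{2.3}{0.9}
  \approx 2.6.
\]

Under these assumptions, the 10 percent of the gas that is vented carries roughly 2.6 times the warming impact of the 90 percent that is flared, which is the "more than twice the effect" relationship noted above.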
For example, a major oil and gas producer recently paid a $49 million settlement in response to charges of unauthorized flaring and venting in the Gulf of Mexico that went undetected for several years. The lawsuit alleged that the company both improperly flared and vented gas under various offshore federal leases and failed to properly report the flared and vented gas. Improved oversight could help prevent similar problems in the future. The International Energy Agency has recommended such meters for accurately monitoring emissions from production facilities, and some producers that we interviewed said they already use these meters on many of their offshore production facilities. According to these producers, flare and vent meters are fairly inexpensive if installed when the facility is initially built, but retrofitting facilities that are already producing may not be cost effective. To improve oversight, MMS and BLM, working with producers, could consider the cost and benefit of requiring the installation of these meters, particularly at new, major production facilities. The federal government could also help reduce flaring and venting by continuing to support programs that identify, and help industry implement, best practices for reducing natural gas emissions. EPA has sponsored the Natural Gas STAR program, which identifies and promotes the implementation of cost-effective technologies and practices that reduce methane emissions. About 65 percent of the U.S. natural gas industry is represented in this program. From 1993 through January 2004, according to EPA officials, the STAR program identified over 100 best practices and technologies that reduce methane emissions from the oil and natural gas industry. In addition, STAR participants have reported total emissions reductions of more than 275 billion cubic feet, worth over $825 million—which EPA estimates is enough natural gas to heat more than 4 million homes for 1 year or is comparable to removing the emissions of 24 million cars from the nation’s highways for 1 year. The two largest sources of overall oil and gas emissions identified by the STAR program are pneumatic devices and compressors. According to EPA officials, the STAR program has identified 15 practices and technologies that would reduce methane emissions from these sources. The government could also have an effect on flaring and venting worldwide by (1) addressing market barriers affecting gas produced outside the United States that would otherwise be flared or vented and (2) continuing to work with other countries. First, the government could identify regulatory barriers to economically feasible infrastructure development, such as building pipelines or LNG facilities for transporting gas that is usually flared or vented. For example, currently, four LNG import facilities exist in the United States—three on the East Coast and one on the Gulf of Mexico coast—but more than 15 federal permit applications are awaiting approval from either FERC or the Coast Guard, a process that can take more than a year. In addition, the government could investigate the public’s perceptions of the risk associated with new infrastructure. For example, some communities have resisted LNG facilities because they are worried about the safety and security procedures in place to protect them from an accidental explosion or a terrorist attack. Finally, the federal government could continue to work with other countries and corporations to reduce flaring and venting.
For example, USAID provided much of the funding for training regulators in Kazakhstan, where improved regulation has virtually eliminated the routine flaring of natural gas. In addition, the United States could continue to support the work of the World Bank’s Global Gas Flaring Reduction Partnership (GGFR), which recently issued standards on how to achieve reductions in the flaring and venting of gas worldwide. By itself, the reduction of natural gas flaring and venting will not solve the problem of meeting increasing natural gas demands or eliminate all greenhouse gas emissions; however, it would be a helpful step in that direction. Although the emissions from flaring and venting are small in comparison with those from other sources, such as fossil fuel combustion, reducing flaring and venting from oil and gas production would help eliminate harmful emissions and possibly preserve an energy resource that is currently being lost. Although the role of the federal government in reducing flaring and venting of natural gas during oil and gas production may be limited, especially at the international level, opportunities exist for the government to make worthwhile contributions in this area. Given the immense challenges the government faces in the energy and environmental arenas and the limited resources available, however, any actions to reduce flaring and venting will have to be based on careful consideration of the potential costs and benefits of such actions. Moreover, since flaring and venting are not viewed as major problems in this country, it may be difficult to justify devoting much attention to them. Nonetheless, the government could consider several potentially cost-effective actions that would improve data reporting, help reduce the most harmful effects of flaring and venting, and improve oversight of federal leases. Improving flaring and venting data reporting would be an important first step because it would add information about the scope of the problem and allow actions to be targeted to those areas where flaring and venting are most significant. However, our review identified several limitations of the data currently reported that hinder such targeting. First, reliance on oil- and gas-producing states to voluntarily provide flaring and venting information has led to incomplete data, since some states do not submit reports. Second, because of the limited guidance on what data to report, EIA considers most of the information it receives from the states to be inconsistent. Third, the data collected do not distinguish between flared gas and vented gas—an important distinction, since the methane emitted during venting is potentially much more harmful to the atmosphere than the carbon dioxide emitted during flaring. Consequently, these combined data are less useful than they could be in shedding light on the extent of methane emitted into the atmosphere during venting. Finally, on a global basis, some countries that flare and vent considerable amounts of gas report that they do not, and others underreport their flaring and venting. For these and other reasons, the worldwide data are considered even less accurate than U.S. data. In addition, for federal lands and offshore areas leased for oil or gas production, opportunities exist to help reduce the most harmful effects of flaring and venting and improve oversight on these leases. 
Specifically, although flaring and venting are authorized only under certain circumstances, when these circumstances occur, oil and gas producers may choose to either flare or vent. Since the emissions released during venting are much more harmful to the atmosphere than those associated with flaring, this choice is an important one. Moreover, although flaring and venting are generally not authorized, no oversight mechanism currently exists for routinely monitoring the amount of flaring and venting that actually takes place. As a result, MMS and BLM cannot always be assured that companies are appropriately restricting their flaring and venting. Overall, however, as is evident in the example of the United States, a robust natural gas market, along with a supporting infrastructure, would have the most significant impact on the reduction of flaring and venting. Therefore, changes to natural gas markets, and to the transportation infrastructure for moving gas to these markets, will likely be needed to offer producers an economic incentive to sell the associated gas rather than flare or vent it.

We are making four recommendations to the Secretary of Energy to explore opportunities for improving data on flaring and venting and to weigh the cost and benefit of making such improvements. Such opportunities could include considering the use of the department’s authority to collect flaring and venting information directly from the producing companies; working with industry and state officials, and using guidance already in existence, to enhance guidelines for reporting consistent flaring and venting data; considering, in consultation with EPA, MMS, and BLM, how best to collect separate statistics on flaring and venting; and considering working with the Secretary of Commerce to provide EIA access to NOAA’s analysis of satellite data to improve the accuracy of worldwide data on flaring. We are also making two recommendations to the Secretary of the Interior. Specifically, for federal oil and gas leases, we are recommending that the Secretary direct MMS and BLM to consider the cost and benefit of requiring that companies (1) flare, rather than vent, the natural gas whenever possible when flaring or venting is necessary and (2) use flaring and venting meters to improve oversight.

We requested comments on a draft of this report from the Secretaries of Energy and the Interior, as well as the Administrator of the Environmental Protection Agency. The Department of Energy and the Department of the Interior provided written responses, including technical and clarifying comments, which we incorporated into our report as appropriate. The Administrator of the Energy Information Administration (EIA), who responded on behalf of the Department of Energy, concurred with our findings and recommendations while acknowledging that the implementation of the recommendations would need to be balanced with other priorities. EIA stated that it has revised the instructions on all natural gas forms and has committed to continue efforts to improve the quality, timeliness, and coverage of the production data that states collect. Further, EIA agreed to work with agencies with satellite technology to determine the feasibility of improving worldwide data. EIA’s comments are reprinted in appendix II. The Assistant Secretary of Land and Minerals Management, who responded on behalf of MMS and BLM within the Department of the Interior, generally agreed with our findings and recommendations.
MMS and BLM agreed that requiring operators to flare instead of vent whenever possible would reduce greenhouse gases. Further, MMS acknowledged that this would enhance the agency’s ability to enforce existing regulations because inspectors can easily see a flare, whereas they cannot visually detect venting and thus must rely on the accuracy of operators’ records. The two agencies also agreed to evaluate the cost-effectiveness of installing and maintaining flare tips. In addition, MMS agreed that requiring flaring and venting meters would improve its oversight. MMS acknowledged that recent incidents have shown that reliance on the accuracy of the operators’ calculations and recordkeeping may not sufficiently or accurately capture actual flaring and venting volumes. MMS stated that it is currently revising its Federal Outer Continental Shelf regulations to require installation of meters. Finally, MMS agreed that flaring and venting data should be reported separately and has taken initial steps to update its database and reporting requirements to accommodate this change. MMS stated, however, that such a change would take time to complete but would add only minimally to the reporting burden of operators. The Department of the Interior’s comments are reprinted in appendix III. EPA provided oral comments and agreed with our findings and recommendations. In addition, EPA noted that our recommendations, if implemented, would greatly advance flaring and venting data availability and quality. EPA also provided technical comments, which we included in the report, as appropriate.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. At that time, we will send copies of this report to the Secretary of Energy, the Secretary of the Interior, the EPA Administrator, and other interested parties. We also will make copies available to others upon request. In addition, the report will be available at no charge at GAO’s Web site at http://www.gao.gov. Questions about this report should be directed to me at (202) 512-3841. Key contributors to this report are listed in appendix IV.

Regarding natural gas flaring and venting from oil and gas production in the United States and the rest of the world, you asked us to (1) describe the data collected and reported on natural gas flaring and venting and what the federal government could do to improve them; (2) report, on the basis of available information, on the extent of flaring and venting and its contribution to greenhouse gases; and (3) identify opportunities for the federal government to reduce such flaring and venting. To do this work, we obtained currently available data on natural gas production and estimates of flaring and venting in the United States from EIA, MMS, and BLM. We determined that, with the limitations we have noted, all the data we reviewed were sufficiently reliable for inclusion in this report. We also interviewed officials from EIA, EPA, MMS, BLM, the World Bank, the United Nations, various private corporations and organizations, and state governments regarding data collection, the quality of the data collected, and reporting practices. In addition, we contacted natural gas-producing states to determine their assessment of the reliability of the data they collect and report.
To determine what the federal government could do to improve data collection and reporting, we interviewed officials from EIA, MMS, BLM, EPA, and NOAA, as well as state officials. In addition, we obtained international data from Cedigaz, a French oil and gas industry association that gathers worldwide information on natural gas; the World Bank; the International Energy Agency (IEA), an intergovernmental energy policy body; and the United Nations. We discussed with officials of these organizations the reliability of these data as well as the methods used to collect and estimate them. Finally, to determine what the federal government could do to help reduce flaring and venting of natural gas, we reviewed the literature and interviewed officials from private industry, MMS, BLM, state governments, EPA’s Natural Gas STAR program, USAID, and various world organizations, including the World Bank, the United Nations, and IEA. In May 2004, we also attended the World Bank Global Gas Flaring Reduction Partnership’s Second International Gas Flaring Reduction Conference in Algeria, along with delegates from numerous other countries concerned about gas flaring and venting. This conference featured speakers from countries and corporations around the world who addressed data collection and reporting issues, current regulations and oversight issues, and best practices within their respective countries. We conducted our work from October 2003 through June 2004 in accordance with generally accepted government auditing standards.

In addition to the individuals named above, James W. Turkett, James Rose, Carol Bray, and Nancy Crothers made key contributions to this report.
Since 1995, the average price of natural gas in the United States has almost tripled as demand has grown faster than supply. Despite this increase, natural gas is regularly lost as it is burned (flared) or released into the atmosphere (vented) during the production of oil and gas. GAO was asked to (1) describe flaring and venting data and what the federal government could do to improve them; (2) report, on the basis of available information, on the extent of flaring and venting and their contributions to greenhouse gases; and (3) identify opportunities for the federal government to reduce flaring and venting.

U.S. and global data on natural gas flaring and venting are limited. First, the Department of Energy’s Energy Information Administration (EIA) collects and reports data voluntarily provided by oil- and gas-producing states. Because EIA has no authority to require states to report, some do not, leading to incomplete data. Second, EIA has provided limited guidance to states to promote consistent reporting. As a result, only about one-fourth of the reporting states provide data that EIA considers consistent. Third, the data EIA collects do not distinguish between flared gas and vented gas—an important distinction, since they have dramatically different environmental impacts. Data on flaring and venting outside the United States are also limited, since many countries report unreliable data or none at all. To improve data on flaring and venting, EIA could use its authority to collect data directly from oil and gas producers; to obtain more consistent data, EIA could improve its guidelines for reporting. From an environmental perspective, EIA, the Minerals Management Service, and the Bureau of Land Management could require flared and vented volumes to be reported separately. Globally, the federal government could set an example by continuing to improve U.S. data, continuing to support global efforts, and using U.S. satellite data to detect unreported flaring.

On the basis of the limited data available, the amount of gas emitted through flaring and venting worldwide is small compared with global natural gas production and represents a small portion of greenhouse gas emissions. Nevertheless, flaring and venting have adverse environmental impacts and result in the loss of a significant amount of energy. Annually, over 100 billion cubic meters of gas are flared or vented worldwide—enough to meet the natural gas needs of France and Germany for a year. While flaring and venting do occur in the United States, they account for less than 1 percent of global production. Opportunities exist in several areas to help reduce flaring and venting, both in the United States and globally. For example, exploring ways to address market barriers affecting associated gas could help identify approaches to reduce global flaring and venting.
About 9 months prior to Katrina’s landfall, the National Response Plan (NRP) was issued to frame the federal response to domestic emergencies ranging from smaller, regional disasters to incidents of national significance. The plan generally calls for a reactive federal response following specific state requests for assistance. However, the NRP also contains a catastrophic incident annex that calls for a proactive federal response when catastrophes overwhelm local and state responders. The NRP generally assigns the Department of Defense (DOD) a supporting role in disaster response, but even in this role, DOD has specific planning responsibilities. For example, the NRP requires federal agencies to incorporate the accelerated response requirements of the NRP’s catastrophic incident annex into their own emergency response plans. Within DOD, the Strategy for Homeland Defense and Civil Support, which was issued in June 2005, envisions a greater reliance on National Guard and Reserve forces for homeland missions.

The military response to domestic disasters typically varies depending on the severity of an event. During smaller disasters, an affected state’s National Guard may provide a sufficient response, but larger disasters and catastrophes that overwhelm the state may require assistance from out-of-state National Guard or federal troops. The response to Katrina relied heavily on the National Guard, which is consistent with DOD’s Strategy for Homeland Defense and Civil Support. This represents a departure from past catastrophes, when active duty forces played a larger role in response efforts. During disaster response missions, National Guard troops typically operate under the control of the state governors. However, the National Guard Bureau has responsibility for formulating, developing, and coordinating policies, programs, and plans affecting Army and Air National Guard personnel, and it serves as the channel of communication between the U.S. Army, the U.S. Air Force, and the National Guard in U.S. states and territories. Although the Chief of the National Guard Bureau does not have operational control of National Guard forces in the states and territories, he has overall responsibility for National Guard Military Support to Civil Authorities programs. The U.S. Northern Command also has a mission to provide support to civil authorities. Because of this mission, U.S. Northern Command was responsible for commanding the federal military response to Hurricane Katrina.

During its massive response to Hurricane Katrina, the military faced many challenges, which provide lessons for improving the future military response to catastrophic natural disasters. Issues arose with damage assessments, communications, search and rescue efforts, logistics, and the integration of military forces. In the wake of Hurricane Katrina, the military mounted a massive response that saved many lives and greatly assisted recovery efforts. Military officials began tracking Hurricane Katrina when it was an unnamed tropical depression and proactively took steps that led to a Katrina response of more than 50,000 National Guard and more than 20,000 federal military personnel, more than twice the size of the military response to 1992’s catastrophic Hurricane Andrew. By the time Katrina made landfall in Louisiana and Mississippi on August 29, 2005, the military was positioned to respond with both National Guard and federal forces.
Prior to Katrina’s landfall, active commands had published warning and planning orders, and DOD had already deployed Defense Coordinating Officers to all the potentially affected states. DOD also deployed a joint task force; medical personnel; helicopters; ships from Texas, Virginia, and Maryland; and construction battalion engineers. Many of these capabilities were providing assistance or deploying to the area within hours of Katrina’s landfall. DOD also supported response and recovery operations with communications equipment and many other critically needed capabilities. Growing concerns about the magnitude of the disaster prompted DOD to deploy large active duty ground units beginning on September 3, 2005, 5 days after Katrina’s landfall.

Prior to landfall, anticipating the disruption and damage that Hurricane Katrina could cause, the governors of Louisiana and Mississippi activated their National Guard units. In addition, National Guard officials in Louisiana and Mississippi began to contact National Guard officials in other states to request assistance. While National Guard forces from Louisiana and Mississippi provided the bulk of the military support in the first days after landfall, most of the Guard response to Hurricane Katrina came later from outside the affected states. The National Guard Bureau acted as a conduit to communicate requirements for assistance in Louisiana and Mississippi to the adjutants general in the rest of the country. The adjutants general of other states, with the authorization of their state governors, then sent their National Guard troops to Louisiana and Mississippi under emergency assistance agreements between the states. Requirements for out-of-state National Guard or federal assistance were increased because thousands of National Guard personnel from Mississippi and Louisiana were already mobilized for other missions and thus unavailable when Hurricane Katrina struck their states. The National Guard troops that had been mobilized from within the affected states were able to quickly deploy to where they were needed because they had trained and planned for disaster mobilizations within their states. The deployment of out-of-state forces, though quick when compared to past catastrophes, took longer because mobilization plans were developed and units were identified for deployment in the midst of the crisis. At the peak of the military’s response, however, nearly 40,000 National Guard members from other states were supporting operations in Louisiana and Mississippi—an unprecedented domestic mobilization.

While the military response to Katrina was massive, it faced many challenges, which provide lessons for the future, including the need for the following:

Timely damage assessments. As with Hurricane Andrew, an underlying problem in the response was the failure to quickly assess damage and gain situational awareness. The NRP notes that local and state officials are responsible for damage assessments during a disaster, but it also notes that state and local officials could be overwhelmed in a catastrophe. Despite this incongruity, the NRP did not specify the proactive means necessary for the federal government to gain situational awareness when state and local officials are overwhelmed. Moreover, DOD’s planning did not call for the use of the military’s extensive reconnaissance assets to meet the NRP catastrophic incident annex’s requirement for a proactive response to catastrophic incidents.
Because state and local officials were overwhelmed and the military’s extensive reconnaissance capabilities were not effectively leveraged as part of a proactive federal effort to conduct timely, comprehensive damage assessments, the military began organizing and deploying its response without fully understanding the extent of the damage or the required assistance. According to military officials, available reconnaissance assets could have provided additional situational awareness during Hurricane Katrina, and in September 2005, considerable surveillance assets were made available to assess damage from Hurricane Rita, primarily because of the lessons learned from Hurricane Katrina.

Improved communications. Hurricane Katrina caused significant damage to the communication infrastructure in Louisiana and Mississippi, which further contributed to a lack of situational awareness for military and civilian officials. Even when local officials were able to conduct damage assessments, the lack of communication assets caused delays in transmitting the assessments. Under the NRP, the Department of Homeland Security has responsibility for coordinating the communications portion of disaster response operations. However, neither the NRP, the Department of Homeland Security, nor DOD fully identified the extensive military communication capabilities that could be leveraged as part of a proactive federal response to a catastrophe. DOD’s plan addressed internal military communications requirements but not the communication requirements of communities affected by the disaster. Because state and local officials were overwhelmed and the Department of Homeland Security and DOD waited for requests for their assistance rather than deploying a proactive response, some of the military’s available communication assets were never requested or deployed. In addition, some deployed National Guard assets were underutilized because the sending states placed restrictions on their use. Communications problems, like damage assessment problems, were also highlighted following Hurricane Andrew.

Coordinated search and rescue efforts. While tens of thousands of people were rescued after Katrina, the lack of clarity in search and rescue plans led to operations that, according to aviation officials, were not as efficient as they could have been. The NRP addressed only part of the search and rescue mission, and the National Search and Rescue Plan had not been updated to reflect the NRP. As a result, the search and rescue operations of the National Guard and federal military responders were not fully coordinated, and military operations were not integrated with the search and rescue operations of the Coast Guard and other rescuers. At least two different locations were assigning search and rescue tasks to military helicopter pilots operating over New Orleans, and no one had the total picture of the missions that had been resourced and the missions that still needed to be performed.

Clear logistics responsibilities. DOD had difficulty gaining visibility over supplies and commodities when FEMA asked DOD to assume a significant portion of its logistics responsibilities. Under the NRP, FEMA is responsible for coordinating logistics during disaster response efforts, but during Hurricane Katrina, FEMA quickly became overwhelmed. Four days after Katrina’s landfall, FEMA asked DOD to take responsibility for the procurement, transportation, and distribution of ice, water, food, fuel, and medical supplies.
However, because FEMA lacked the capability to maintain visibility—from order through final delivery—of the supplies and commodities it had ordered, DOD did not know the precise locations of the FEMA-ordered supplies and commodities when it assumed FEMA’s logistics responsibilities. As a result of its lack of visibility over the meals that were in transit, DOD had to airlift 1.7 million meals to Mississippi to respond to a request from the Adjutant General of Mississippi, who was concerned that food supplies were nearly exhausted.

Better integration of military forces. The military did not adequately plan for the integration of large numbers of deployed troops from different commands during disaster response operations. For example, a Louisiana plan to integrate military responders from outside the state called for the reception of not more than 300 troops per day. However, in the days following Hurricane Katrina, more than 20,000 National Guard members from other states arrived in Louisiana to join the response effort. In addition, the National Guard and federal responses were coordinated across several chains of command but not integrated, which led to some inefficiencies and duplication of effort. Because military plans and exercises had not provided a means for integrating the response, no one had the total picture of the forces on the ground, the forces that were on the way, the missions that had been resourced, and the missions that still needed to be completed. Also, a key mobilization statute bars DOD’s Reserve and National Guard units and members from being involuntarily ordered to federal active duty for disaster response. As a result, all the reservists who responded to Hurricane Katrina were volunteers, and they made up a relatively small portion of the response compared to the National Guard and active component members. Moreover, the process of lining up volunteers can be time-consuming and is more appropriate for mobilizing individuals than it is for mobilizing entire units or capabilities that may be needed during a catastrophe. After Hurricane Andrew, we identified this issue in two 1993 reports.

Operational challenges are inevitable in any large-scale military deployment, but the challenges that the military faced during its response to Hurricane Katrina demonstrate the need for better planning and exercising for catastrophic incidents in order to clearly identify the military capabilities that will be needed and the responsibilities that the military will be expected to assume during these incidents. Prior to Katrina, plans and exercises were generally inadequate for a catastrophic natural disaster.

The National Response Plan. The NRP, which guides the planning of supporting federal agencies, lacks specificity as to how DOD should be used and what resources it should provide in the event of a domestic natural disaster. The NRP makes little distinction between the military response to smaller, regional disasters and the military response to large-scale, catastrophic natural disasters. Even though past catastrophes, such as Hurricane Andrew in 1992 and the 1989 earthquake in the San Francisco area, showed that the military tends to play a much larger role in catastrophes, the NRP lists very few specific DOD resources that should be called upon in the event of a catastrophic natural disaster.
Given the substantial role the military is actually expected to play in a catastrophe—no other federal agency brings as many resources to bear—this lack of detailed planning represents a critical oversight.

The DOD plan. When Hurricane Katrina made landfall, DOD’s plan for providing defense assistance to civil authorities was nearly 9 years old and was undergoing revision. The plan had not been aligned with the NRP and had been written before the 2005 Strategy for Homeland Defense and Civil Support, which called for a focused reliance on the reserve components for civil support missions. The plan did not account for the full range of tasks and missions the military could be called upon to perform in the event of a catastrophe and had little provision for integrating active and reserve component forces. It did not address key questions of integration, command and control, and division of tasks between National Guard resources under state control and federal resources under U.S. Northern Command’s control. Moreover, the plan did not establish time frames for the response.

National Guard plans. At the state level, the plans of the Louisiana and Mississippi National Guards were inadequate for Katrina and not well coordinated with those of other National Guard forces across the country. The Mississippi and Louisiana National Guard plans appeared to be adequate for smaller disasters, such as prior hurricanes, but they were insufficient for a catastrophe and did not adequately account for the outside assistance that could be needed during a catastrophe. For example, Joint Forces Headquarters Louisiana modified its plan and reassigned disaster responsibilities when thousands of Louisiana National Guard personnel were mobilized for federal missions prior to Hurricane Katrina. However, the Louisiana plan did not address the need to bring in thousands of military troops from outside the state during a catastrophe. Similarly, Mississippi National Guard officials told us that even their 1969 experience with Hurricane Camille, a category 5 storm that hit the same general area, had not adequately prepared them for a catastrophic natural disaster of Katrina’s magnitude. For example, the Mississippi National Guard disaster plan envisioned the establishment of commodity distribution centers, but it did not anticipate the number of centers that could be required in a catastrophic event or following a nearly complete loss of infrastructure. In addition, the National Guard Bureau had not coordinated in advance with the governors and adjutants general in the states and territories to develop plans to provide assistance for catastrophic disasters across the country. Specifically, the bureau had not identified the types of units that were likely to be needed during a catastrophe or worked with the state governors and adjutants general to develop and maintain a list of National Guard units from each state that would likely be available to meet these requirements during catastrophic natural disasters.

Exercises. An underlying reason that insufficient plans existed at all levels is that the disaster plans had not been tested and refined with a robust exercise program. Such exercises are designed to expose weaknesses in plans and allow planners to refine them.
As a result, when Hurricane Katrina struck, a lack of understanding existed within the military and among federal, state, and local responders as to the types of assistance and capabilities that the military might provide, the timing of this assistance, and the respective contributions of the National Guard and federal military forces. The Homeland Security Council has issued 15 national planning scenarios—including a major hurricane scenario—that provide the basis for disaster exercises throughout the nation. While DOD sponsors or participates in no fewer than two major interagency field exercises per year, few exercises led by the Department of Homeland Security or DOD focused on catastrophic natural disasters, and none of the exercises called for a major deployment of DOD capabilities in response to a catastrophic hurricane. In addition, although DOD has periodically held modest exercises of military support to civil authorities, the exercises used underlying assumptions that were unrealistic in preparing for a catastrophe. For example, DOD assumed that first responders and communications would be available and that the transportation infrastructure would be navigable in a major hurricane scenario. Finally, the First U.S. Army conducted planning and exercises in response to six hurricanes in 2005. These exercises led to actions, such as the early deployment of Defense Coordinating Officers, that enhanced disaster response efforts. However, DOD’s exercise program was not adequate for a catastrophe of Hurricane Katrina’s magnitude.

Based on our evaluation of the aforementioned plans and exercises, we made several recommendations to the Secretary of Defense. First, we called for DOD to work with the Department of Homeland Security to update the NRP to fully address the proactive functions the military will be expected to perform during a catastrophic incident. Second, we recommended that DOD develop detailed plans and exercises that fully account for the unique capabilities and support that the military is likely to provide during a catastrophic incident, specifically addressing damage assessments, communications, search and rescue, and logistics, as well as the integration of forces. Third, we called for the National Guard Bureau to identify the National Guard capabilities that are likely to respond to catastrophes in a state status and to share this information with active commands within DOD. Finally, we recommended that DOD identify the scalable federal military capabilities it will provide in response to the full range of domestic disasters and catastrophes. We also raised a matter for congressional consideration, suggesting that Congress consider lifting or modifying the mobilization restriction—10 U.S.C. § 12304(c)(1)—that limits reserve component participation in responses to catastrophic natural disasters.

DOD has collected lessons learned following Hurricane Katrina from a variety of sources. Within the department, DOD has a formal set of procedures to identify, capture, and share information collected as a result of operations in order to enhance performance in future operations. Even in the midst of the Hurricane Katrina response operation, officials from various military organizations were collecting information on lessons learned, and this collection continued well after most operations had ceased. For example, communications issues that had surfaced were studied by both active and National Guard commands that had responded to Hurricane Katrina.
DOD also formed a task force to study the response and is compiling and analyzing various military and other lessons-learned reports to help design an improved response to future catastrophic natural events. According to DOD officials, they have also reviewed White House and congressional reports identifying lessons to be applied or challenges to be addressed in future response operations. As of today, DOD has also begun taking actions to enhance the military’s preparedness for future catastrophic events. Specifically, in responding to our recently issued report, DOD generally concurred with our recommendations for action and told us that it had developed plans to address them. DOD noted, for example, that the NRP would be revised to plan for a significant DOD role in a catastrophe and that a more detailed DOD operational plan, which had been in draft, would be finalized. Our recommendations and DOD’s response to them are shown in appendix I. In addition, DOD said that it was taking several additional actions, including colocating specially trained DOD personnel at FEMA regional offices; folding support from federal reconnaissance agencies into the military’s civil support processes; developing “pre-scripted” requests that would ease the process for civilian agencies to request military support; conducting extensive exercises with FEMA, including the recently completed Ardent Sentry and other planned events; and delegating authority for deploying defense coordinating elements and placing communications, helicopter, aerial reconnaissance, and patient-evacuation capabilities on “prepare to deploy” orders. The department plans to complete many of these steps by June 1, 2006—the start of the next hurricane season—but acknowledged that some needed actions will take longer to complete. Since details about many of the department’s actions were still emerging as we completed our review, we were unable to fully assess the effectiveness of DOD’s plans, but they do appear to hold promise.

In conclusion, while DOD’s efforts to date to address the Hurricane Katrina lessons learned are steps in the right direction—and the department deserves credit for taking them—these are clearly only the first steps that will be needed. The issues cut across agency boundaries, and thus they cannot be addressed by the military alone. The NRP framework envisions a proactive national response involving the collective efforts of responder organizations at all levels of government. Looking forward, part of DOD’s challenge is the sheer number of organizations at all levels of government that are involved, both military and civilian. In addition, many of the problems encountered during the Katrina response are long-standing and were also reported after Hurricane Andrew in 1992. Because of the complexity and long-standing nature of these problems, DOD’s planned and ongoing actions must receive sustained top-management attention, not only at DOD but across the government, in order to effect needed improvements in the military’s support to civil authorities. While the issues are complex, they are also urgent, and experience has illustrated that the military has critical and substantial capabilities that will be needed in the wake of catastrophic events.

For further information regarding this statement, please contact me at (202) 512-9619 or pickups@gao.gov. Individuals making key contributions to this statement include John Pendleton, Assistant Director; Michael Ferren; Kenya Jones; and Leo Sullivan.
GAO recommendation: Provide the Secretary of the Department of Homeland Security with proposed revisions to the National Response Plan (NRP) that will fully address the proactive functions the military will be expected to perform during a catastrophic incident, for inclusion in the next NRP update.

DOD response (dated May 5, 2006): DOD said that it is working with the Department of Homeland Security to revise the NRP. While DOD stated that the long-term focus of the U.S. government should be to develop more robust domestic disaster capabilities within the Department of Homeland Security, it acknowledged that DOD will need to assume a more robust response role in the interim period and when other responders lack the resources and expertise to handle a particular disaster.

GAO recommendation: Establish milestones and expedite the development of detailed plans and exercises that fully account for the unique capabilities and support that the military is likely to provide to civil authorities in response to the full range of domestic disasters, including catastrophes. The plans and exercises should specifically address the use of reconnaissance capabilities to assess damage, the use of communications capabilities to facilitate support to civil authorities, the integration of active component and National Guard and Reserve forces, the use of search and rescue capabilities and the military’s role in search and rescue, and the role the military might be expected to play in logistics.

DOD response: DOD listed a number of steps it is taking to improve its disaster response planning and exercises and said that, consistent with its Strategy for Homeland Defense and Civil Support, the active component should complement, but not duplicate, the National Guard’s likely role as an early responder. DOD also said that planning and exercises should include local, state, and federal representatives and should stress the responders with the highest degree of realism possible—to the breaking point if possible.

GAO recommendation: Direct the Chief of the National Guard Bureau to work with the state governors and adjutants general to develop and maintain a list of the types of capabilities the National Guard will likely provide in response to domestic natural disasters under state-to-state mutual assistance agreements, along with the associated units that could provide these capabilities, and make this information available to the U.S. Northern Command, U.S. Joint Forces Command, and other organizations with federal military support to civil authority planning responsibilities.

DOD response: DOD listed steps the U.S. Northern Command is taking to better understand the capabilities of National Guard units, and it stated that the National Guard is creating a database to facilitate planning its employment in support of the homeland.

GAO recommendation: Establish milestones and identify the types of scalable federal military capabilities and the units that could provide those capabilities in response to the full range of domestic disasters and catastrophes covered by DOD’s defense support to civil authorities plans.

DOD response: DOD noted that it has developed scalable capability packages in conjunction with pre-scripted requests for assistance and U.S. Northern Command’s Contingency Plan 2501, which is scheduled to be signed in the spring of 2006.
Hurricane Katrina was one of the largest natural disasters in U.S. history. Despite a large deployment of resources at all levels, many have regarded the federal response as inadequate. GAO has a body of ongoing work that covers the federal government's preparedness and response to hurricanes Katrina and Rita. This statement summarizes key points from GAO's report on the military's response to Katrina (GAO-06-643), which was issued earlier this month. It addresses (1) the support that the military provided in responding to Hurricane Katrina along with some of the challenges faced and key lessons learned; (2) actions needed to address these lessons, including GAO's recommendations to the Secretary of Defense; and (3) the extent to which the military is taking actions to identify and address the lessons learned. In its report, GAO made several recommendations to improve the military response to catastrophic disasters. The recommendations called for updating the National Response Plan to reflect proactive functions the military could perform in a catastrophic incident; improving military plans and exercises; improving National Guard, Reserve, and active force integration; and resolving response problems associated with damage assessment, communication, search and rescue, and logistics issues. The Department of Defense (DOD) partially concurred with all of the recommendations. The military mounted a massive response to Hurricane Katrina that saved many lives, but it also faced several challenges that provide lessons for the future. Based on its June 2005 civil support strategy, DOD's initial response relied heavily on the National Guard, but active forces were also alerted prior to landfall. Aviation, medical, engineering, and other key capabilities were initially deployed, but growing concerns about the disaster prompted DOD to deploy active ground units to supplement the Guard beginning about 5 days after landfall. Over 50,000 National Guard and 20,000 active personnel participated in the response. However, several factors affected the military's ability to gain situational awareness and organize and execute its response, including a lack of timely damage assessments, communications problems, uncoordinated search and rescue efforts, unexpected logistics responsibilities, and force integration issues. A key lesson learned is that additional actions are needed to ensure that the military's significant capabilities are clearly understood, well planned, and fully integrated. As GAO outlined in its recommendations to the Secretary of Defense, many challenges that the military faced during Katrina point to the need for better plans and more robust exercises. Prior to Katrina, disaster plans and exercises did not incorporate lessons learned from past catastrophes to fully identify the military capabilities needed to respond to a catastrophe. For example, the National Response Plan made little distinction between the military response to smaller regional disasters and catastrophic natural disasters. In addition, DOD's emergency response plan for providing military assistance to civil authorities during disasters lacked adequate detail. It did not account for the full range of assistance that DOD might provide, address the respective contributions of the National Guard and federal responders, or establish response time frames. 
National Guard state plans were also inadequate and did not account for the level of outside assistance that would be needed during a catastrophe, and they were not synchronized with federal plans. Moreover, none of the exercises that were conducted prior to Katrina had called for a major deployment of DOD capabilities to respond to a catastrophic hurricane. Without actions to help address planning and exercise inadequacies, a lack of understanding will continue to exist within the military and among federal, state, and local responders as to the types of assistance and capabilities that DOD might provide in response to a catastrophe; the timing of this assistance; and the respective contributions of the active, Reserve, and National Guard forces. DOD is examining the lessons learned from a variety of sources and is beginning to take actions to address them and prepare for the next catastrophe. It is too early to evaluate DOD's actions, but many appear to hold promise. However, some issues identified after Katrina, such as damage assessments, are long-standing, complex problems that cut across agency boundaries. Thus, substantial improvement will require sustained attention from the highest management levels in DOD and across the government.
Private sector participation and investment in highways is not new. In the 1800s, private companies built many roads that were financed with revenues from tolls, but this activity declined due to competition from railroads and greater state and federal involvement in building tax-supported highways. Private sector involvement in highways was relegated to contracting with states to build roads. In the absence of private toll roads, state and local governments were responsible for road construction and maintenance. In the 1930s, many states began creating public authorities that built toll roads, such as the Pennsylvania Turnpike, relying on loans and bond purchases by private investors to finance construction. The Federal-Aid Highway Act of 1956 established a federal tax-assisted National System of Interstate and Defense Highways, commonly known as the Interstate Highway System. Further, the federal Highway Revenue Act of 1956 established a Highway Trust Fund to be funded using revenue from, among other sources, motor fuel taxes. The Federal-Aid Highway Act of 1956 generally prohibited the use of federal funds for the construction, reconstruction, or improvement of any toll road.

States retain the primary responsibility for building and maintaining highways. While states collect revenues to finance road construction and maintenance from a variety of sources, including fuel taxes, they also receive significant federal funding. For example, in 2005, of the $75.2 billion spent on highways by all levels of government, about $31.3 billion (about 42 percent) was federal funding. Federal highway funding is distributed mostly through a series of formula grant programs, collectively known as the federal-aid highway program. Funding for the federal-aid highway program is provided through the Highway Trust Fund—a fund that was used to finance construction of the Interstate Highway System on a “pay as you go” basis. Receipts for the Highway Trust Fund are derived from two main sources: federal excise taxes on motor fuel and truck-related taxes. Receipts from federal excise taxes on motor fuel constitute the single largest source of revenue for the trust fund’s Highway Account. Funds are provided to the states for capital projects, such as new construction, reconstruction, and many forms of capital-intensive maintenance. These funds are available for eligible projects and pay 80 percent of the costs on most projects. Additionally, the responsibility for planning and selecting projects rests with the states and metropolitan planning organizations.

Over time, federal programs and legislation have gradually become more receptive to private sector participation and investment. For example, the Surface Transportation and Uniform Relocation Assistance Act of 1987 established a pilot program allowing federal participation in financing the construction or reconstruction of seven toll facilities, excluding highways on the Interstate Highway System. Construction costs for these projects were eligible for a 35 percent federal-aid match. The Intermodal Surface Transportation Efficiency Act of 1991 (ISTEA) removed the pilot project limitation on federal participation in financing the initial construction or reconstruction of tolled facilities, including the conversion of nontolled facilities to tolled facilities.
ISTEA raised the federal share of construction costs on toll roads to 50 percent and allowed federal participation in financing privately owned and operated toll roads, provided that the public authority remained responsible for ensuring that all of its title 23 responsibilities to the federal government were met. ISTEA also included a congestion pricing pilot program that allowed the Secretary of Transportation to enter into cooperative agreements with up to five state or local governments or public authorities to establish, maintain, and monitor congestion pricing projects. In 1998, the Transportation Equity Act for the 21st Century (TEA-21) renamed the congestion pricing pilot, calling it a “value-pricing pilot program,” and expanded the number of projects eligible for assistance to 15. TEA-21 also created a pilot program for tolling roads in the Interstate Highway System. Under this pilot, up to three states can toll interstates if the purpose is to reconstruct or rehabilitate the road and the state cannot adequately maintain or improve the road without collecting tolls. Finally, the Transportation Infrastructure Finance and Innovation Act of 1998 (TIFIA) created a new federal program to assist in the financing of major transportation projects, in part by encouraging private sector investment in infrastructure. The TIFIA program permits the Secretary of Transportation to offer secured loans, loan guarantees, and lines of credit.

In 2005, the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU) reauthorized appropriations to fund all of the previously established toll programs. SAFETEA-LU also allowed the combining of public and private sector funds, including the investment of public funds in private sector facility improvements, for purposes of eligibility for TIFIA loans. SAFETEA-LU also created the Express Lanes Demonstration Program, which authorizes the Secretary of Transportation to fund 15 demonstration projects that use tolling of highways, bridges, or tunnels—including facilities on the Interstate Highway System—to manage high congestion levels, reduce emissions in nonattainment or maintenance areas under the Clean Air Act, or finance highway expansion to reduce congestion. Finally, SAFETEA-LU amended the Internal Revenue Code to add qualified highway or surface freight transfer facilities to the types of privately developed and operated projects for which exempt facility bonds (also called private activity bonds, or PABs) may be issued. According to FHWA, passage of the PAB provisions reflected the federal government’s desire to increase private sector investment in U.S. transportation infrastructure. SAFETEA-LU authorized the Secretary of Transportation to allocate up to $15 billion in PABs for qualifying highway and freight transfer facilities. As of January 2008, about $3.2 billion in PABs had been approved by DOT.

The private sector has historically been involved in the construction phase as a contractor. Over time, the private sector has become increasingly involved in other phases of projects, serving as either contractor or manager (see fig. 2). The private sector has become more involved in a wide range of tasks, including design, planning, preliminary engineering, and maintenance of highways. In addition, contractors have been given more responsibility for project oversight and for ensuring project quality through increased use of contractors for engineering and inspection activities, as well as quality assurance activities.
This increasing use of contractors can, in part, be attributed to state highway agencies’ need for staff and expertise. Surveys of state highway departments from 1996 to 2002 show an increase in the share of tasks completely outsourced, from about 26 percent to about 36 percent. Private sector participation can also involve highway public-private partnerships. Because highway public-private partnerships can be defined to include any private sector involvement beyond the traditional contracting role in construction, there are many types of highway public-private partnership models. For example, design-build contracts, in which a private partner both designs and then constructs a highway under a single contract, are considered by DOT to be highway public-private partnerships. Some highway public-private partnerships involve equity investments by the private sector (see fig. 3). In the construction of new infrastructure, commonly called “greenfield projects,” the private sector may provide financing for construction of the facility and then assume responsibility for all operations and maintenance of the highway for a specified amount of time. The private operator generally makes its money through the collection of tolls. Private investments have also been made in existing infrastructure through long-term leases of existing toll roads. These transactions, often called “brownfield” projects, usually involve a private operator assuming control of the asset—including responsibilities for maintenance and operation and the collection of toll revenues—for a fixed period of time in exchange for a concession fee provided to the public sector. The concession fee could be in the form of an up-front payment at the start of the concession, or could be provided over time through a revenue sharing arrangement, or both. While many long-term public-private partnerships involve tolled highways, that is not always the case. For example, under a “shadow tolling” arrangement, the private sector finances, constructs, and operates a nontolled highway for a period of time and is paid a predetermined fee per car by the public sector.

The projects included in our review primarily involved long-term concessions of toll roads with private sector equity. This model has seen strong interest in the past few years, as many states have considered using it to construct new highway infrastructure. For example, Texas is currently developing a number of new highways through this model. In addition, many states have explored private involvement for the long-term operation and maintenance of existing toll roads. For example, the city of Chicago and the state of Indiana recently entered into long-term leases with the private sector for the Chicago Skyway and the Indiana Toll Road, respectively. Since we began our review, other states have begun exploring leasing existing toll roads to the private sector. For example, Pennsylvania has considered many options, including a long-term lease, for extracting value from the Pennsylvania Turnpike. In 2006, Virginia entered into a long-term lease agreement with a private company for the Pocahontas Parkway in the Richmond area, and, in 2007, the Northwest Parkway Public Highway Authority entered into a long-term concession in the Denver region. The U.S. highway public-private partnership projects included in our review were varied (see table 1). Two of the projects—the TTC and Oregon—involved construction of infrastructure.
The Texas project, in particular, was envisioned as an extensive network of interconnected corridors that would accommodate both passenger and freight traffic, including passenger and freight railroads. The Oregon projects were primarily in the Portland area and involved capacity enhancement. Two of the projects we reviewed also involved leases of existing facilities—the Indiana Toll Road and the Chicago Skyway. In both instances, local or state officials were looking to extract value from the assets for reinvestment in transportation or other purposes. (See app. II for more information about the highway public-private partnerships that were included in our review.)

There has been considerable private participation in highways and other infrastructure internationally. Europe, in particular, has been a leader in the use of these arrangements. Spain and France pioneered the use of highway public-private partnerships for the development of tolled motorways in Europe. Spain began inviting concessionaires to build a national autopista network in the 1960s, while private autoroute concessions in France date from the 1970s. Public-private partnership arrangements for financing or delivering highway-related infrastructure projects are widespread among the regions of the world. Highway public-private partnership initiatives support continued economic growth in more developed parts of the world and foster economic development in less developed parts. Over the period 1985 to 2004, the highest investment in road projects (including roads, bridges, and tunnels) funded and completed using public-private partnerships was in Europe ($58.1 billion), followed by Asia ($44.5 billion) and North America ($32.2 billion). (See fig. 4.) FHWA attributed Europe’s predominant role to the absence of a dedicated funding source for highways and a rapid transition in the 1990s from a largely public infrastructure system to a more privately financed, developed, and operated system, among other things.

While highway public-private partnerships have the potential to provide numerous benefits, they also entail costs and trade-offs for the public sector. The advantages and potential benefits of highway public-private partnerships, as well as their costs and trade-offs, are summarized in table 2. Highway public-private partnerships that involve tolling may not be suited to all situations. In addition to their potential benefits to the public sector, highway public-private partnerships can provide private sector benefits through investment in a long-term asset with steady income generation over the course of a concession and the availability of various tax incentives.

Highway public-private partnerships have resulted in advantages from the perspective of state and local governments, such as the construction of new facilities without the use of public funding and the extraction of value—in the form of up-front payments—from existing facilities for reinvestment in transportation and other public programs. In addition, highway public-private partnerships can potentially provide other benefits to the public sector, including the transfer of project risks to the private sector, increased operational efficiencies through private sector operation and life-cycle management, and the benefits of pricing and improved investment decision making that result from increased use of tolling.
In the United States and abroad, public sector entities have entered into highway public-private partnership agreements to finance the construction of new roadways. As we reported in 2004, by relying on private sector sponsorship and investment to build the roads rather than financing the construction themselves, states (1) conserved funding from their highway capital improvement programs for other projects, (2) avoided the up-front costs of borrowing needed to bridge the gap until toll collections became sufficient to pay for the cost of building the roads and paying the interest on the borrowed funds, and (3) avoided the legislative or administrative limits that governed the amount of outstanding debt these states were allowed to have. All of these results were advantages for the states. For example, the TTC is a project that Texas plans to finance, construct, operate, and maintain through various private sector investors. The project is based on competitive bidding and procurement processes, and it will be developed in individual segments as warranted over 50 years.

While relatively new in the United States, leveraging private resources to obtain highway infrastructure is more common abroad. Since the 1960s, Spain has been active in highway public-private partnerships, using approximately 22 toll highway concessions to construct its 3,000-kilometer (approximately 1,860-mile) national road network at little cost to the national government. By keeping the capital costs off the public budget, Spain mitigated budgetary challenges and met macroeconomic criteria for membership in the European Union's Economic Monetary Union. More recently, Australian state governments have entered into highway public-private partnerships with private sector construction firms and lenders to finance and construct several toll highways in Sydney and Melbourne. Officials with the state of Victoria, Australia, said that government preferences to limit debt levels, particularly following a severe recession in the early 1990s, would have made construction of these roads difficult without private financing, even though some of the roads had been on transportation plans for several years.

Some governments in the United States and Canada are also using highway public-private partnerships to extract value from existing infrastructure and raise substantial funds for transportation and other purposes. For example, in 2005 the city of Chicago received about $1.8 billion by leasing the Chicago Skyway to a concession consortium of Spanish and Australian companies for 99 years. The city used the lease proceeds to fund various social services; pay off the remaining debt on the Chicago Skyway (about $400 million) and some of the city's general obligation debt; and create a reserve fund which, according to the former Chief Financial Officer of Chicago, generates as much net revenue in annual interest as the highway had generated in annual tolls. By paying off some of its general obligation debt, the city improved its credit rating, thus reducing its future borrowing costs. In another example of extracting value from existing infrastructure, the state of Indiana signed a 75-year, $3.8 billion lease of the Indiana Toll Road in 2006 with the same consortium of private sector companies that had leased the Chicago Skyway. The proceeds will primarily be used to fund the governor's 10-year statewide "Major Moves" transportation plan.
Indiana officials told us that Indiana was the only state with a fully funded transportation plan for the next 10 years. Indiana also established reserves from the lease proceeds to provide future funding. Finally, the Provincial Government of Ontario, Canada, preceded both of these concession agreements in 1999, when it entered into a long-term lease with a private consortium for the Highway 407 ETR in the Toronto area in exchange for 3.1 billion Canadian dollars (approximately $2.6 billion in U.S. dollars in 1999, or $3.2 billion in U.S. dollars in 2007). According to Ontario officials, proceeds from the 407 ETR lease were added to the province's general revenue fund but were not dedicated to a long-term investment or other specific capital projects.

The public sector may also potentially benefit from transferring or sharing risks with the private sector, including project construction and schedule risks. Various government officials told us that because the private sector analyzes its costs, revenues, and risks throughout the life cycle of a project and adheres to scheduled toll increases, it is able to accept large amounts of risk at the outset of a project. The private sector, however, prices all project risks and bases its final bid proposal, in part, on the level of risk involved. The transfer of construction cost and schedule risk to the private sector is especially important and valuable, given the incidence of cost and schedule overruns on public projects. Between 1997 and 2003, we and others identified problems with major federally funded highway and bridge projects and with FHWA's oversight of them. We reported that on many projects for which we could obtain information, costs had increased, sometimes substantially, and that several factors accounted for the increases, including less than reliable initial cost estimates. We further reported that cost containment was not an explicit statutory or regulatory goal of FHWA's oversight and that the agency had done little to ensure that cost containment was an integral part of the states' project management. Since that time, both Congress and DOT have taken action to improve the performance of major projects and federal oversight; however, indications of continuing problems remain. In 2004, DOT established a performance goal that 95 percent of major federally funded infrastructure projects would meet the cost and schedule milestones established in project or contract agreements, or achieve them within 10 percent of the established milestones (the sketch following this discussion illustrates the arithmetic of this test). While federally funded aviation and transit projects have met this goal, federally funded highway projects have missed it in each of the past 3 years.

Overseas, an example of a successful transfer of construction risk involves the CityLink highway project in Melbourne, Australia. This project faced several challenges during construction, including difficult geological conditions and a tunnel failure, which caused project delays and added costs. According to officials from the government of Victoria, Australia, because construction risks were borne by the private sector, all cost and schedule overruns came at the expense of the private concessionaire, and no additional costs were imposed on the government.
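DOT's milestone goal is, at bottom, a simple threshold test. The short Python sketch below applies it to a handful of invented projects; the figures and the within_goal helper are hypothetical and do not reflect DOT's actual data or measurement methodology.

    def within_goal(baseline, actual, tolerance=0.10):
        # True if the actual figure is no more than 10 percent over baseline.
        return actual <= baseline * (1 + tolerance)

    # (baseline cost $M, actual cost $M, baseline months, actual months) -- invented
    projects = [
        (500, 540, 48, 50),
        (220, 300, 36, 44),
        (800, 810, 60, 58),
    ]
    met = sum(within_goal(bc, ac) and within_goal(bm, am)
              for bc, ac, bm, am in projects)
    print(f"{met / len(projects):.0%} of projects met the goal (target: 95%)")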
Another benefit of highway public-private partnerships related to the costs of construction is that because highway public-private partnership contracts are public and cost and schedule overruns are generally assumed by the private sector, there can be more public transparency about project costs and timelines than under public projects.

Traffic and revenue risks can also be transferred to the private sector. In some highway public-private partnership projects, traffic and revenues have been low, imposing costs on the private sector but not leading to direct costs to the public sector. For example, the Pocahontas Parkway opened to traffic in stages beginning in May 2002. Revenues on this road have been lower than projected because traffic has fallen short of forecasts. Virginia used public and private funds for operating and maintaining the Parkway until it had sufficient revenue to repay the initial state funds used for construction and to pay for operation and maintenance through tolls. Traffic projections for 2003 indicated there would be about 840,000 transactions per month (about $1.4 million in revenue); however, as of January 2004, traffic was about 400,000 transactions per month (about $630,000 in revenue). In June 2006, under an amended and restated development agreement, a private concessionaire that believed the road was a good long-term investment assumed responsibility for the road for a period of 99 years. The private concessionaire is now responsible for all debt on the Pocahontas Parkway and bears the risk that revenues on the highway might not be high enough to support all costs. Similarly, in Australia, construction of the Cross City Tunnel in Sydney was privately funded, but the project began to experience financial problems when actual traffic and revenues were lower than forecast. Within the first 2 years of operation, the private operator went into receivership. In September 2007, the Cross City Tunnel project was sold to new owners following a competitive tender process. Government officials from New South Wales told us that, as of spring 2007, there had been no costs to the government because the traffic and revenue risks were borne by the private sector.

Highway public-private partnerships may also yield other potential benefits, such as management of assets in ways that may produce efficiencies in operations and life-cycle management and thereby reduce total project costs over a project's lifetime. For example, in 2004, FHWA reported that, in contrast to traditional highway contracting methods that have sometimes focused on the costs of individual project segments, highway public-private partnerships have more flexibility to maximize the use of innovative technologies, which can lead to increases in quality and the development of faster and less expensive ways to design and build highway facilities. According to DOT, highway public-private partnerships can also reduce project life-cycle costs. For example, in the case of the Chicago Skyway, the private concession company invested in electronic tolling technologies within the first year of taking over management of the Chicago Skyway. This action was taken because, in the long term, the up-front cost of the new technologies would be paid off through increased mobility, higher traffic volumes, a reduced need for toll collectors, and decreased congestion at the toll plaza from increased traffic throughput.
According to the Assistant Budget Director for Chicago, the high initial cost of installing electronic tolling would likely have prohibited the city from making the same investment, given the city's limited annual budget. Foreign officials with whom we spoke also identified life-cycle costing and management as a primary benefit of highway public-private partnerships.

Highway public-private partnerships can also better ensure predictable funding for maintenance and capital repairs of the highway. Under more traditional publicly financed and operated highways, operations and maintenance and capital improvement costs are subject to annual appropriations cycles, which increases the risk that adequate funds may not be available to public agencies. Under a highway public-private partnership, by contrast, concessionaires are generally held, through contractual provisions, to maintain the highway to a certain standard (sometimes as good as or better than the standard a state would hold itself to) throughout the course of the concession, and the concessionaire must fund all maintenance costs itself. Furthermore, capital improvements, including possible roadway expansions, may also be contractually required of concessionaires, ensuring that such work will be conducted as needed. Finally, the desire for a safe and well-maintained roadway in order to attract traffic (and, therefore, revenues) may give a private operator an incentive to adopt useful and efficient operations and maintenance techniques and practices.

Highway public-private partnerships can also potentially provide mobility and other benefits to the public sector through the use of tolling. The highway public-private partnerships we reviewed all involved toll roads. Highway public-private partnerships potentially provide benefits by better pricing infrastructure to reflect the true costs of operating and maintaining the facility, thus realizing the public benefits of improved condition and performance of public infrastructure. In addition, through the use of tolling, highway public-private partnerships can apply tolling techniques designed to help drivers readily understand the full cost of their decisions to use the road system during times of peak demand and to potentially reduce the demand for roads during peak hours. Through congestion pricing, tolls can be set to vary during congested periods to maintain a predetermined level of service. Such tolls create financial incentives for drivers to consider costs when making their driving decisions. In response, drivers may choose to share rides, use transit, travel at less congested (generally off-peak) times, or travel on less congested routes to reduce their toll payments. Such choices can potentially reduce congestion and the demand for road space at peak periods, thus allowing the capacity of existing roadways to accommodate demand with fewer delays. (A simple sketch of such a pricing rule follows this discussion.) For example, a representative of the government of Ontario, Canada, told us that the 407 ETR helped relieve congestion in Toronto by attracting traffic from a parallel, publicly financed, untolled highway. In fact, advisors to the government said that officials established a tolling schedule for the 407 ETR based on achieving predetermined optimal traffic flows on the road. Tolling can also potentially lead to targeted, rational, and efficient investment decisions.
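To make the mechanism concrete, the following Python sketch shows one simple form such a pricing rule could take: the toll rises when observed traffic exceeds the flow consistent with the target level of service and falls (to a floor) when it does not. The tolls, traffic volumes, and function names are hypothetical; actual schemes, including the 407 ETR's, follow their own contractual formulas.

    def adjust_toll(current_toll, observed_flow, target_flow,
                    min_toll=1.00, max_toll=10.00, step=0.25):
        # Raise the toll when traffic exceeds the flow consistent with the
        # target level of service; otherwise lower it toward the floor.
        if observed_flow > target_flow:
            return min(current_toll + step, max_toll)
        return max(current_toll - step, min_toll)

    toll = 2.50
    for hour, flow in [(6, 3800), (7, 5200), (8, 5600), (9, 4100)]:
        toll = adjust_toll(toll, flow, target_flow=4500)
        print(f"{hour}:00  vehicles/hour={flow}  next-period toll=${toll:.2f}")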
National roadway policy has long incorporated the user pays concept, according to which roadway users pay the costs of building and maintaining roadways, generally in the form of excise taxes on motor fuels and other taxes on inputs into driving, such as taxes on tires or fees for registering vehicles or obtaining operator licenses. Increasingly, however, decision makers have looked to other revenue sources—including income, property, and sales tax revenues—to finance roads in ways that do not adhere to the user pays principle. Tolling is more consistent with the user pays principle because tolling a particular road and using the toll revenues collected to build and maintain that road more closely aligns the costs with the distribution of the benefits that users derive from it. Furthermore, roadway investment can be more efficient when it is financed by tolls because the users who benefit will likely support additional investment to build new capacity or enhance existing capacity only when they believe the benefits exceed the costs. In addition, toll project construction is typically financed by bonds sold and backed by future toll revenues, and projects must pass the test of market viability and meet goals demanded by investors, thus better ensuring that there is sufficient demand for roads financed through tolling. Even with this test, however, there is no guarantee that projects will always be viable.

The private sector, and in particular private investment groups such as equity funds and pension fund managers, has recently demonstrated an increasing interest in investing in public infrastructure, seeing the sector as representing long-term assets with stable, potentially high-yield returns. While these private sector investors may benefit from highway public-private partnerships, they can also lose money through them. Although profits are generally not realized in the first 10 to 15 years of a concession agreement, the private sector receives benefits from highway public-private partnerships over the term of a concession in the form of a return on its investment. Private sector investors generally finance large public sector benefits early in a concession period, including up-front payments for leases of existing projects or capital outlays for the construction of new, large-scale transportation projects. In return, the private sector expects to recover any and all up-front costs (whether construction costs of new facilities or concession fees paid to the public sector for existing facilities), as well as ongoing maintenance and operation costs, and to generate a return on investment. According to investment firms with whom we spoke, future toll revenue from tolled transportation projects can provide reliable long-term investment opportunities. Furthermore, any cost savings or operational efficiencies the private sector can generate, such as introducing electronic tolling, improving maintenance practices, or increasing customer satisfaction in other ways, can further boost the return on investment through increased traffic flow and increased toll revenue.

The private sector can also receive potential tax deductions from depreciation on assets involving private sector investment, and the availability of these deductions was an important incentive for the private sector to enter some of the highway public-private partnerships we reviewed. Obtaining these deductions, however, may require lengthy concession periods.
In the United States, federal tax law allows private concessionaires to claim income tax deductions for depreciation on a facility (whether a new highway or an existing highway obtained through a concession) if the concessionaire has effective ownership of the property. Effective ownership requires, among other things, that the length of a concession be greater than or equal to the useful economic life of the asset. Financial and legal experts, including those who were involved in the Chicago and Indiana transactions, told us that because the concession lengths of the Chicago Skyway and the Indiana Toll Road agreements each exceed the facility's useful life, the private investors can claim full tax deductions for asset depreciation within the first 15 years of the lease agreement. The requirement to demonstrate effective asset ownership contributed to the 99-year and 75-year concession terms for the Chicago Skyway and Indiana Toll Road, respectively. One tax expert told us that, in general, infrastructure assets (such as highways) obtained by the private sector in a highway public-private partnership may be depreciated on an accelerated basis over a 15-year period.

Private investors can also potentially benefit from tax-exempt financing authorized by SAFETEA-LU in 2005. Private activity bonds have been made available for private sector use to generate proceeds that are then used to construct new highway facilities under highway public-private partnerships. This exemption lowers private sector costs in financing highway public-private partnership projects. As of January 2008, DOT had approved private activity bonds for five projects totaling $3.2 billion and had applications pending for three projects totaling $2.2 billion. DOT said it expects applications for private activity bond allocations from an additional 12 projects totaling more than $10 billion in 2008.

Finally, the private sector can potentially benefit through gains achieved in refinancing its investments. Both public and private sector officials with whom we spoke agreed that refinancing is common in highway public-private partnerships. Refinancing may occur early in a concession period as the initial investors either attempt to "cash out" their investment—that is, sell their investment to others and use the proceeds for other investment opportunities—or obtain new, lower cost financing for the existing investment. Refinancing may also be used to reduce the initial equity investment in highway public-private partnerships. Refinancing gains can occur throughout a concession period because project risks typically decrease after construction, the project may outperform expectations, or interest rates may decline generally. In the case of the Chicago Skyway, the concession company had to secure a large amount of money in a short period of time to close on the agreement with the city. According to the Chief Executive Officer of the Skyway Concession Company, the company obtained a loan package with the best interest rates available at the time and refinanced within 7 months of financial close on the agreement. He said this refinancing resulted in a better deal, including better leverage and interest rates. An investment banker involved in the Chicago Skyway concession told us that refinancing plans are often incorporated into the original investment business case and form an important part of each bidder's competitive offer.
For example, if the toll road is not refinanced, the investment will underperform against its original business case. The investment banker said that there was no refinancing gain on the Chicago Skyway because the gain was already planned for as part of the initial investment case and was reflected in the financial offer to the city of Chicago. In some cases, refinancing gains may not be anticipated or incorporated into the financial offer and may be realized later in a concession period. The governments of the United Kingdom and of Victoria and New South Wales, Australia, have acknowledged that gains generated from lower cost financing can be substantial, and they now require, as a provision in each privately financed contract, that any refinancing gains achieved by concessionaires—and not already factored into the calculation of tolls—be shared equally with the government. For example, the state of Victoria, Australia, shared in the gains from the private investor's refinancing of a highway public-private partnership project in Melbourne called EastLink.

Highway public-private partnerships may not be applicable to all situations, given the challenges of tolling and the private sector's need to make profits. While tolling has promise as an approach to enhance mobility and finance transportation, officials face many challenges in obtaining public and political support for implementing tolling. As we reported in June 2006, based on interviews with 49 state departments of transportation, opposition to tolling stems from the contention that fuel taxes and other dedicated funding sources are used to pay for roads, and thus tolling is seen as a form of double taxation. In addition, concerns about equity are often raised, including the potentially unequal ability of lower-income and higher-income groups to pay tolls, as well as the use of tolling to address the transportation needs in one part of a state while freeing up federal and state funding in tolled areas to address transportation needs in another part of the state. State officials also face practical challenges in implementing tolling, including obtaining the statutory authority to toll and addressing the traffic diversion that might result when motorists seek to avoid toll facilities. Our June 2006 report concluded that state and local governments may be able to address these concerns by (1) honestly and forthrightly addressing the challenges that a tolling approach presents, (2) pursuing strategies that focus on developing an institutional framework that facilitates tolling, (3) demonstrating leadership, and (4) pursuing toll projects that provide tangible benefits to users.

Although highway public-private partnerships could conceivably be used for reconstructing existing roadways, in practice this could be very difficult, due in part to public and political opposition to tolling existing free roads. Aside from bridges and tunnels, existing Interstate Highway System roads generally cannot be tolled, except under specific pilot programs. One such program, the Interstate System Reconstruction and Rehabilitation Pilot Program, was authorized in 1998 to permit three states to toll existing interstate highways to finance major reconstruction or rehabilitation needs. Two states applied for and received preliminary approval to do so—Virginia in 2003 and Missouri in 2005—and Pennsylvania submitted an application in 2007.
While Virginia's toll project is proceeding through environmental review, Missouri's project remains on hold, and Pennsylvania's application awaits approval. In addition, three other states submitted applications and later withdrew them, owing in part to public and political opposition to tolls. A fourth state sent in an "Expression of Interest" for this pilot program but never formally submitted an application. An official with the metropolitan planning organization for Chicago said tolling highways is difficult in Illinois, especially when the public is used to free alternatives, and an official with the California DOT echoed this sentiment, saying that highway public-private partnerships are not a substitute or final solution for ongoing funding of transportation infrastructure. FHWA officials agreed that highway public-private partnerships are not suitable in all situations.

Another reason highway public-private partnerships may not be applicable to all situations is that the private sector has a profit motive and is likely to enter highway public-private partnerships only for new construction projects that are expected to produce an adequate rate of return on investment. Highway public-private partnerships therefore appear to be most suited for the construction of new infrastructure in areas where congestion may be a problem and traffic is expected to be sufficient to generate net profits through toll revenues. For example, we found that Oregon decided to forgo a highway public-private partnership for one possible project in the Portland area because the forecasted revenues were not high enough to make the route toll viable for private investors. Similarly, Texas has concluded that not all segments of the TTC are toll viable; these segments might not receive direct private interest and might need to be subsidized with concession fees from other segments or other funds, including public dollars, if they are available. According to the Texas DOT, some projects will be partially toll viable and may require both public and private funds. DOT officials told us that, in both Oregon and Texas, funds are currently not available to procure these projects through a public procurement.

Highway public-private partnerships come with potential costs and trade-offs to the public sector. The costs include the potential for higher user tolls than under public toll roads and potentially more expensive project costs than under publicly procured projects. While the public sector can benefit through the transfer or sharing of some project risks with the private sector, not all risks can or should be transferred, and the public sector may lose some control through a highway public-private partnership. Finally, because there are many stakeholders with interests in a public-private partnership, as well as many potential objectives—and many governments affected—there are trade-offs in protecting the public interest.

Although highway public-private partnerships can be used to obtain financing for highway infrastructure without the use of public sector funding, there is no "free money" in highway public-private partnerships. Rather, this funding is a form of privately issued debt that must be repaid. Private concessionaires primarily make a return on their investment by collecting toll revenues.
Though concession agreements can limit the extent to which a concessionaire can raise tolls, tolls are likely to increase on a privately operated highway to a greater extent than they would on a publicly run toll road. For example, during the time the Chicago Skyway was publicly managed, tolls changed infrequently and actually decreased by approximately 25 percent in real terms (2007 dollars) between 1989 and 2004 (see fig. 5). According to the former Chief Financial Officer of Chicago, the Chicago Skyway had not historically increased its tolls unless required by law, even though the Skyway had been operating at a loss and had outstanding debt. Under private control, on the other hand, maximum tolls are generally set in accordance with concession agreements and, in contrast to public sector practices, allowable toll increases can be frequent and automatic. The concession agreements for both the Chicago Skyway and the Indiana Toll Road permit toll rates to increase each year by a minimum of 2 percent and a maximum of the annual change in either the consumer price index (CPI) or per capita U.S. nominal gross domestic product (GDP), whichever is higher. Based on estimated increases in nominal GDP and population, tolls on the Chicago Skyway will be permitted to increase in real terms by nearly 97 percent from 2007 through 2047—from $2.50 to $4.91 in 2007 dollars. This is also shown in figure 5. These future toll projections reflect the maximum allowable toll rates authorized by the public sector in the concession agreements.

Depending on market conditions, the potential exists that the public could pay higher tolls than those that would more appropriately reflect the true costs of operating and maintaining the facilities, including earning a reasonable rate of return. Within the maximum allowable toll rates authorized by the public sector in the concession agreements, toll rate changes will be driven by such market factors as the demand for travel on the road, which, in turn, will be influenced by the level of competition that toll road concessionaires face. This competition will vary from facility to facility. In cases where an untolled public roadway or another transportation mode (e.g., bus or rail) is a viable travel alternative to the toll road, these competing alternatives may act to constrain toll rates. In other instances, where there are no other viable travel alternatives to a toll road that would not require substantially more travel time, there may be few constraints on toll rates other than the terms of the concession. In such instances, a concessionaire may have substantial market power, which could give the concessionaire the ability to set toll rates that exceed the costs of the toll road, including a reasonable rate of return, as long as those toll rates are below the maximum rates allowed by the concession agreement. We have not determined the extent to which any concessionaire would have substantial market power due to limited alternatives, although this is an appropriate consideration when entering into possible highway public-private partnerships.

In addition to potentially higher tolls, the public sector may give up more than it receives in a concession payment when using a highway public-private partnership focused on extracting value from an existing facility. Conversely, because the private sector takes on substantial risks, the opposite could also be true—that is, the public sector might gain more than it gives up.
In exchange for an up-front concession payment, the public sector gives up control over a future stream of toll revenues over an extended period of time, such as 75 or 99 years. It is possible that the net present value of the future stream of toll revenues (less operating and capital costs) given up can be much larger than the concession payment received, and concession payments could potentially be less than they could or should be. In Indiana, the state hired an accounting and consulting firm to conduct a study of the net present value of the Indiana Toll Road, which deemed its value to the state to be slightly under $2 billion. This valuation assumed that future toll increases would be similar to those of the past—infrequent and in line with the road's history under public control. An alternative valuation of the toll road lease, performed by an economics professor on behalf of opponents of the concession, changed certain assumptions of the net present value model and produced a different result—about $11 billion. This valuation assumed annual toll rate increases by the public authority of 4.4 percent, compared with the 2.8 percent used in the state's valuation. We did not evaluate this study and draw no conclusions about its validity, and other studies may have reached different conclusions; nevertheless, the results illustrate how toll rate assumptions can influence asset valuations and, therefore, expected concession payments (the illustrative calculation following this discussion shows the same effect).

Similarly, unforeseen circumstances can dramatically alter the relative value of future revenues compared with the market value of the facility. In 1999, the government of Ontario, Canada, received a 3.1 billion Canadian dollar concession fee in exchange for the long-term lease of the 407 ETR. In the years following the concession agreement, as commercial and residential development along the 407 ETR corridor exceeded initial government projections, the value of the roadway increased. In 2002, a valuation conducted by an investor in the concession estimated that the market value of the facility had nearly doubled—from 3.1 billion to 6.2 billion Canadian dollars. This valuation reflected a new 40-kilometer section that had been added to the 407 ETR since it was originally built, as well as additional parking lots and increased tolls.

Using a highway public-private partnership to extract value from an existing facility also raises issues about the use of those proceeds and whether future users might be paying higher tolls to support current benefits. In some instances, up-front payments have been used for immediate needs, and it remains to be seen whether these uses will provide long-term benefits to the future generations who will potentially be paying progressively higher toll rates to the private sector throughout the length of a concession agreement. Both Chicago and Indiana used their lease fees, in part, to fund immediate financial needs. Chicago, for example, used lease proceeds to finance various city programs, while Indiana used lease proceeds primarily to fund its "Major Moves" 10-year transportation program. However, Chicago also used the proceeds to retire both Chicago Skyway debt and some city debt, and both Chicago and Indiana established long-term reserves from the lease proceeds. Conversely, proceeds from the lease of the 407 ETR in Toronto, Canada, went into the province's general revenue fund, and officials in the Ministry of Transport were unaware of how the payment was spent. Consequently, it is not clear whether those uses of proceeds will benefit future roadway users.
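To illustrate how the escalation cap and toll-growth assumptions drive valuations, the following Python sketch first applies the minimum-2-percent, CPI-or-GDP cap described above for a few years, and then computes net present values under the two growth assumptions cited in the Indiana valuations. The traffic, cost, and discount figures are hypothetical, so the dollar results are illustrative only.

    def max_allowable_toll(toll, cpi_growth, gdp_growth):
        # One year of the contractual cap: at least 2 percent, at most the
        # higher of CPI growth or nominal per capita GDP growth.
        return toll * (1 + max(0.02, cpi_growth, gdp_growth))

    def npv_of_net_tolls(start_toll, years, toll_growth, annual_trips,
                         annual_costs, discount_rate):
        # Present value of tolls minus costs under one growth assumption.
        npv, toll = 0.0, start_toll
        for t in range(1, years + 1):
            toll *= 1 + toll_growth
            npv += (toll * annual_trips - annual_costs) / (1 + discount_rate) ** t
        return npv

    toll = 2.50
    for year in range(2008, 2012):        # a few years of the capped path
        toll = max_allowable_toll(toll, cpi_growth=0.025, gdp_growth=0.045)
        print(f"{year}: maximum allowable toll ${toll:.2f}")

    # Hypothetical inputs: 30 million trips and $40 million in costs per
    # year, a 6 percent discount rate, and a 75-year term.
    for growth in (0.028, 0.044):         # the two assumptions cited above
        value = npv_of_net_tolls(2.50, 75, growth, 30e6, 40e6, 0.06)
        print(f"toll growth {growth:.1%}: net present value ${value/1e9:.1f} billion")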
Highway public-private partnerships may also impose additional costs on the public sector compared with traditional public procurement. These include the potential additional costs associated with (1) required financial and legal advisors and (2) private sector financing compared with public sector financing. A June 2007 study by the University of Southern California found that because the U.S. transportation sector has little experience with long-term concession agreements, state departments of transportation are unlikely to have the in-house expertise needed to plan, conduct, and execute highway public-private partnerships. FHWA has also recognized this issue—in a 2006 report, it noted that, in several states, promising projects have been delayed for lack of staff capacity and expertise to confidently conclude agreements. Furthermore, public sector agencies must exercise diligence to prevent potential conflicts of interest if the legal and financial firms they retain also advise private investors. In addition, highway public-private partnership projects are likely to face the higher cost of private finance, because public sector agencies generally have access to tax-exempt debt, while private companies generally do not.

Financial trade-offs can also involve federal tax issues. As discussed earlier, unlike public toll authorities, the private sector pays income taxes to the federal government, and the ability to deduct depreciation on assets involved with highway public-private partnerships for which it has effective ownership for tax purposes can reduce that tax obligation. The extent of these deductions and the amounts of foregone revenue, if any, to the federal or state governments are difficult to determine, since they depend on such factors as the taxable income, total deductions, and marginal tax rates of the private sector entities involved. Nevertheless, foregone revenue can amount to millions of dollars. For example, there may be foregone tax revenue when the private sector uses tax-exempt private activity bonds. As we reported in 2004, the 2003 cost to the federal government from tax-exempt bonds used to finance three projects with private sector involvement—the Pocahontas Parkway, the Southern Connector, and the Las Vegas Monorail—was between $25 million and $35 million. There can also be tax costs to the federal government when highway projects use public finance, since interest on state and local debt is also exempt from federal tax. Regardless of the tax impact on government revenues, the availability of depreciation deductions can be important to private sector concessionaires. As discussed earlier, financial experts with whom we spoke said that the depreciation deductions associated with the Chicago Skyway and Indiana Toll Road transactions were significant and that, in the absence of the depreciation benefit, the concession payments to Chicago and Indiana would likely have been less than $1.8 billion and $3.8 billion, respectively.

In highway public-private partnerships, the public sector may lose some control over its ability to modify existing assets or implement plans to accommodate changes over time. For example, concession agreements may contain noncompete provisions designed to limit competition from, or elicit compensation for, highways or other transportation facilities that may compete with and draw traffic from a leased toll road. The case of SR-91 in California illustrates an early and extreme example of a noncompete provision's potential effect.
In 1991, the California DOT used a highway public-private partnership to construct express lanes in the middle of the existing SR-91. The express lanes were owned and operated by a private concessionaire, and the public sector continued to own the adjacent lanes. The concession contained provisions that prevented improvements or expansions of the adjacent public lanes. Eight years after signing the concession agreement, the local transportation authority purchased the concessionaire's rights to the tolled express lanes, thus enabling transportation improvements to be made. Noncompete clauses in projects that followed SR-91 appear to have generally provided more flexibility to modify nearby existing roads and build new infrastructure when necessary. This issue is discussed further in the next section of the report.

The public sector may also lose some control over toll rate setting by entering into highway public-private partnerships. Highway public-private partnership agreements generally allow the private operator to raise tolls in accordance with provisions outlined in the concession contract, and the private operator may be able to raise tolls on an annual basis without prior approval. To the extent that the public sector may want to adjust toll rates—for example, to manage demand on its highway network—it may be unable to do so, because toll-setting authority is defined exclusively by the concession contract and exercised by the private operator.

While the public sector may benefit from the transfer of risk in a highway public-private partnership, not all risks can or should be transferred, and there may be trade-offs. There are costs and risks associated with environmental issues that often cannot, or should not, be transferred to the private sector in a highway public-private partnership. For example, if a project is to be eligible for federal funds at any point during its lifetime, a lengthy environmental review process must be completed, as required by the National Environmental Policy Act (NEPA) for all federally funded projects. Various federal permits and approvals may also be required. The financial risk associated with the environmental assessment process (and whether the project will be approved) generally resides with the public sector, in part because the environmental review process can add to project costs and can cause significant project delays. In addition, the private sector may be unwilling to accept the risk and project uncertainty associated with the publicly controlled environmental review process. An example of the delay that can be experienced in projects undergoing environmental review is the South Bay Expressway in California. The state selected a private sponsor for this project in 1991; however, litigation challenging the final record of decision on the environmental impact statement for the project was not resolved until March 2003, and construction did not begin until July 2003. In another example, private sector officials in Texas told us they are not involved with the environmental assessment process for the TTC, given the added costs and the increased project delivery times. According to the Texas DOT, environmental review is a core function of government and a risk that, to date, appears best suited to the public sector.

Finally, there may also be political trade-offs for the public sector in highway public-private partnerships.
For example, public opposition to the TTC and other highway public-private partnerships in Texas remains strong. Although the governor of Texas has identified a lack of funds as a barrier to meeting the state's transportation needs, public outcry over the TTC and the lack of involvement of local governments was so substantial that in June 2007 the state legislature enacted a 2-year moratorium on future highway public-private partnerships in the state. In the case of the 407 ETR in Toronto, a consultant to the Ontario Ministry of Transportation told us the government was publicly criticized for the transaction, and road users had little understanding of the reasons the government entered the agreement or what the future toll rates could be. As a result, the government suffered public backlash. Similarly, as part of its agreement with the concession company for the Cross City Tunnel in Sydney, Australia, the New South Wales government closed some city streets to mitigate local congestion in the downtown area. Although the government's intent was to alleviate congestion in downtown Sydney, many drivers felt that they were being diverted into the tolled tunnel, and the government was criticized for its actions.

The benefits and costs of highway public-private partnerships of the type we reviewed—long-term concessions—are diverse and uncertain, suggesting that the merits of future partnerships will need careful evaluation on a case-by-case basis. As noted above, highway public-private partnerships have the potential to provide benefits, such as the construction of new facilities without the use of public finance, the transfer or sharing of project risks, and increased operational efficiencies through private sector operation and life-cycle management. However, as also discussed earlier, there are costs and trade-offs involved, including loss of public sector control over toll setting and potentially more expensive project costs than under publicly procured projects. State and local governments pursue highway public-private partnerships to achieve specific public objectives, such as congestion relief, improved mobility, or increased freight mobility. In some instances, the potential benefits of highway public-private partnerships may outweigh the potential costs and trade-offs, and the use of highway public-private partnerships and long-term concessions would serve the public well into the future. In other instances, the potential costs and trade-offs may outweigh the potential benefits, and the public interest may not be well served by such an arrangement. Where public officials choose to pursue a highway public-private partnership accomplished through a long-term concession, realizing the potential benefits will require careful structuring of the agreement and identification and mitigation of the direct risks of the project.

From a public perspective, an important component of any analysis of the potential benefits and costs of highway public-private partnerships and long-term concessions is consideration of the public interest. As with any highway project, there can be many stakeholders in highway public-private partnerships, each of which may have its own interests.
Stakeholders include regular toll road users, commercial truck and bus drivers, emergency response vehicles, toll road employees, and members of the public who may be affected by ancillary effects of a highway public-private partnership, including users of nearby roads, landowners, special interest groups, and taxpayers in general (see fig. 6). Identification of the public interest is a function of scale and can differ based on the range of stakeholders and the geographic and political domain considered. At the national level, the public interest may include facilitating interstate commerce, as well as meeting mobility needs. State and regional public interests, however, might prioritize new infrastructure to meet local demand, or maximum up-front payments to reduce debt or finance transportation plans, above national mobility objectives. With competing interests over the duration of the concession agreement, trade-offs will be necessary. For example, if mobility is an objective of the project, high toll rates at times of peak travel demand may be necessary to deter some users from driving during peak hours and thus mitigate congestion. But if rates are too high, traffic diversion to free alternate public routes may be an unintended outcome that could adversely affect drivers on those roads.

The public interest in highway public-private partnerships can be, and has been, considered and protected in many ways. State and local officials in the projects we reviewed relied heavily on concession terms. Most often, these terms were focused on ensuring the performance of the asset, dealing with financial issues such as toll rates, maintaining the public sector's accountability and flexibility to provide transportation services to the public, addressing workforce issues, and maintaining the ability to enforce these concession terms over the life of the contract. Additionally, oversight and monitoring mechanisms were used to ensure that private partners fulfill their obligations. In addition to concession terms, certain financial analyses were used to protect the public interest. For example, public sector comparators (PSCs), which attempt to compare the estimated cost of undertaking a project as a highway public-private partnership with the cost of undertaking it publicly, have been used for some highway projects (a simplified illustration of such a comparison follows below). We found that some foreign governments have also used formal public interest tools, as well as public interest criteria tests. However, use of these tests and tools has been more limited in the United States. Not using formal public interest criteria and assessment tools can potentially allow aspects of the public interest to be overlooked, while the use of formal analyses before entering into highway public-private partnerships can help lay out the expected benefits and costs of a project.

The highway public-private partnerships we reviewed have used various mechanisms to protect the public interest by holding concessionaires to requirements related to such things as the performance of the asset, the financial aspects of agreements, the public sector's ability to remain accountable as a provider of public goods and services, workforce protections, and concession oversight. Because agreeing to these terms may make an asset less valuable to the private sector, public sector agencies might have accepted lower payments in return for these terms.
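At its core, a PSC is a comparison of discounted, risk-adjusted costs. The Python sketch below shows that comparison in its simplest form; the cash flows, the 10 percent risk adjustment, and the discount rate are invented for illustration, and a real PSC would also account for risk transfer, tax effects, and competitive neutrality.

    def present_cost(cash_flows, discount_rate):
        # Discount a list of annual costs (year 1, 2, ...) to the present.
        return sum(c / (1 + discount_rate) ** t
                   for t, c in enumerate(cash_flows, start=1))

    years, rate = 30, 0.05
    public = [60e6] * 2 + [12e6] * (years - 2)   # build 2 years, then operate
    public = [c * 1.10 for c in public]          # 10 percent risk adjustment
    ppp = [8e6] * years                          # annual payments under a PPP

    psc_cost, ppp_cost = present_cost(public, rate), present_cost(ppp, rate)
    print(f"PSC (public delivery): ${psc_cost / 1e6:.0f} million")
    print(f"Partnership delivery:  ${ppp_cost / 1e6:.0f} million")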
Public sector agencies involved in highway public-private partnerships have attempted to protect the public interest by ensuring that the asset is held to high safety, maintenance, and operational standards and can be expanded when necessary (see table 3). Operating and maintenance standards were incorporated into the Indiana Toll Road and Chicago Skyway concession agreements. Based on documents we reviewed, the standards for the Indiana Toll Road detail how the concessionaire must maintain the road's condition, utility, and level of safety, with the intent of ensuring that the public will not see any reduction in the performance of the highway over the 75-year lease term. The standards also detail how the concessionaire must address a wide range of roadway issues, such as signage, the use of safety features such as barrier walls, snow and ice removal, and the level of pavement smoothness that must be maintained. According to a Deputy Commissioner with the Indiana DOT, the standards actually hold the lessee to a higher level of performance than when the state operated the highway, because the state did not have the funding to maintain the Indiana Toll Road to its own standards. For the Chicago Skyway, the concessionaire is required to follow detailed maintenance and operations standards that are based on industry best practices and address maintenance issues such as roadway maintenance, drainage maintenance, and roadway safety features, as well as operational issues such as toll collection procedures, emergency planning, and snow and ice control procedures. According to an engineering consultant with the city of Chicago who was involved in writing the standards used in the concession, when the Chicago Skyway was under public control, employees were not required to follow formal standards.

Concessions may also include requirements to maintain performance in terms of mobility and capacity by ensuring a certain level of traffic throughput and avoiding congestion. Highway public-private partnerships may require that a concessionaire expand a facility once congestion reaches a certain level, and some agreements include capacity and expansion triggers based on level-of-service (LOS) forecasts. LOS is a qualitative measure of congestion. According to the concession agreement for the Indiana Toll Road, when LOS is forecast to fall below certain levels within 7 years, the concessionaire must act to improve the LOS, for example by adding capacity (such as an extra lane) at its own cost, to ease the projected congestion. Because the provisions call for expansions in advance of poor mobility conditions, the agreement appears to aim at preventing a high level of congestion from ever occurring. According to Texas DOT officials, the concessionaire for the State Highway 130, segments 5 and 6, project (see table 1) will be required to add capacity through expansion, or to better manage traffic, to improve traffic flow if the average speed of vehicles on the roadway falls below a predetermined level. According to government officials in Toronto, Canada, the private operator of the 407 ETR is also required to maintain a certain vehicle flow and traffic growth on the road or face financial penalties.

Public sector agencies have also sought to protect the public interest in highway public-private partnerships through financial mechanisms such as toll rate setting limitations (see table 4).
However, the toll limitations used in the U.S. highway public-private partnerships that we reviewed may be sufficiently generous to the private sector that they do not effectively limit toll increases. Toll limitations constrain the high, profit-maximizing toll levels that a private concessionaire might otherwise set. As discussed earlier, tolls on the Chicago Skyway can be increased at predetermined levels for the first 12 years of the lease (rising from $2.50 to $5 per 2-axle vehicle). Afterward, tolls can increase annually by the highest of three factors: 2 percent, the increase in CPI, or the increase in nominal per capita GDP. According to the concession agreement, tolls on the Indiana Toll Road can be increased at set levels until mid-2010 and can then rise by a minimum of 2 percent or a maximum of the prior year's increase in CPI or nominal per capita GDP. In general, these limitations are meant to restrict the rate of toll increases over time. However, since nominal GDP has generally increased at an annual rate of between 4 and 7 percent over the last 20 years, the restrictions may not effectively limit toll increases.

Some foreign governments have taken a different approach to limiting toll increases that may create more constraining limits. For example, in Spain, we were told that concessionaires are limited to increasing tolls by roughly the rate of inflation in Spain every year (although slight adjustments may be made based on traffic levels). In contrast, since the annual rate of inflation in the United States has typically been lower than nominal GDP growth (except during years of negative real GDP change), the maximum allowable toll increases in Chicago and Indiana will likely exceed the U.S. inflation rate. We were told that in the EastLink project in Australia, toll rates were kept low by having prospective bidders for the concession bid down the level of toll rates; the contract is awarded to the bidder that agrees to operate the facility with the lowest toll. Government officials told us that this process resulted in the lowest per kilometer toll rate of any toll road in Australia. However, a process that constrains bidders to the lowest tolls may involve government subsidies. Although neither the closure of competing roads nor government subsidies were involved in the EastLink project in Victoria, Australia, the potential for government subsidies arose in the Cross City Tunnel project in Sydney, Australia. An official with the New South Wales government said the government was adopting a new policy, in light of the Cross City Tunnel project, specifying that the government should be prepared to provide subsidies on toll road projects to keep tolls at certain predetermined levels. In commenting on a draft of this report, DOT officials said that government agencies may have goals for highway public-private partnerships besides keeping tolls low, such as maximizing the number of new facilities provided, earning the largest up-front payment or annual revenue share, or using higher tolls to maximize mobility and choice.

Revenue-sharing mechanisms have also been used to protect the public interest by requiring a concessionaire to share some level of revenues with the public sector. For example, revenues on the State Highway 130, segments 5 and 6, concession in Texas will be shared with the state, so that the higher the private concessionaire's return on investment, the higher the state's share. After a one-time, up-front payment of $25 million, if the private concessionaire's annual return on investment is at or below 11 percent, the state could share in 5 percent of all revenues; if the return is over 15 percent, Texas could receive 50 percent of the net revenues. Higher returns thus warrant higher revenue shares for the state.
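The tiered arithmetic just described can be sketched briefly in Python. The revenue figures are hypothetical, and because our summary specifies only the two outer tiers, the code treats returns between 11 and 15 percent as unspecified rather than inventing a middle tier.

    def state_share(annual_roi, gross_revenue, net_revenue):
        # State's share under the two tiers described in the agreement
        # summary; the band between 11 and 15 percent is not specified here.
        if annual_roi <= 0.11:
            return 0.05 * gross_revenue
        if annual_roi > 0.15:
            return 0.50 * net_revenue
        return None  # unspecified middle band; see the concession agreement

    for roi in (0.10, 0.13, 0.18):
        share = state_share(roi, gross_revenue=100e6, net_revenue=60e6)
        label = f"${share / 1e6:.0f} million" if share is not None else "not specified"
        print(f"ROI {roi:.0%}: state share {label}")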
Officials with the Texas DOT said they see revenue sharing, as opposed to one large up-front payment at lease signing, as protecting the public interest in the long run and ensuring that the public and private sectors share common goals. Both Chicago and Indiana officials told us there were no revenue-sharing arrangements in either the Chicago Skyway or the Indiana Toll Road concession.

Foreign governments have also used other financial mechanisms, such as controls on public subsidies to private projects and the sharing of refinancing gains, to protect the public interest in highway public-private partnerships. For example, in Spain, we were told that concessionaires for highway projects that require public subsidies often bid for the lowest subsidy possible to lower costs to the government. For other highway projects, the government of Spain will provide loans for which the interest rate on repayment is based on traffic levels: the lower the traffic level, the lower the interest rate. According to documents we reviewed, in highway public-private partnerships in both Victoria and New South Wales, Australia, any profits the concessionaire earns by refinancing the asset must be shared with the government. In May 2007, the government of New South Wales issued guidance on refinancing gains. According to a New South Wales official, the government's general position on highway public-private partnership refinancing is that all refinancings, other than those contemplated at financial close, require government consent. Government consent plays a fundamental role because refinancing may increase project risk by increasing the debt burden and reducing investors' long-term financial incentives, among other things. In Canada, federal policy requires that any federal funds used to construct a road that is then leased to a private concessionaire be repaid to the federal government.

Governments entering into highway public-private partnerships have also acted to protect the public interest by ensuring that they are not fully constrained by the concession and are still able to provide transportation infrastructure (see table 5). This flexibility has been achieved in part by avoiding fully restrictive noncompete clauses. Since Orange County bought back the SR-91 managed lanes because it was no longer willing to be bound by the restrictive noncompete clause it had originally agreed to, governments entering into highway public-private partnerships have sought to avoid such restrictive clauses. Some more recent noncompete clauses can be referred to as "compensation clauses" because they require that the public sector compensate the concessionaire if the public sector proceeds (in certain instances) with an unplanned project that might take revenues from the concessionaire's toll road. For example, for the State Highway 130 concession in Texas, both the positive and negative impacts that new public roads will have on the toll road will be determined and, potentially, the Texas DOT will compensate the concessionaire for losses of revenues on the concession toll road.
However, that payment might be counterbalanced by the Texas DOT receiving credits for new publicly constructed roads that are demonstrated to increase traffic on the concession toll road. Additionally, according to the Texas DOT, on the State Highway 130 concession, projects already on the state's 20-year transportation plan when the concession was signed are exempt from any such provisions. Certain other projects are also exempt, such as expansions or safety improvements made to I-35 (a parallel existing highway on the Interstate Highway System); any local, city, or county improvements; or any multimodal rail projects. According to the Texas DOT, in no case is it, or any other governmental authority, precluded from building necessary infrastructure. A noncompete clause lowers potential competition from other roadways for a private concessionaire, thereby increasing its potential revenues. Therefore, a contract without any noncompete provisions, all else equal, is likely to attract lower concession payments from the private sector. According to an Indiana official, a noncompete clause for the Indiana Toll Road requires the state to compensate the concessionaire an amount equal to the concessionaire's lost revenue from a new highway if the state constructs a new interstate-quality highway of 20 or more continuous miles within 10 miles of the Indiana Toll Road. Indiana officials told us that the concession agreement for the Indiana Toll Road does not prevent the state from building competing facilities and provides great latitude in maintaining and expanding the state's transportation network around the toll road; they do not expect this restriction to place serious constraints on necessary work near the toll road. Others have suggested that the state could face difficulties if toll rates on the Indiana Toll Road begin to divert significant levels of traffic to surrounding roads. In such a case, the state could be constrained in making necessary improvements or constructing new facilities to handle the additional traffic. City of Chicago officials did not sign a noncompete provision in the Chicago Skyway contract. City officials decided against a noncompete provision to keep their options open for future work they might find necessary; they told us that the concessionaire agreed to a lease agreement without such a provision because geographic limitations (the Chicago Skyway being located in a very heavily developed urban area and close to Lake Michigan) make construction of a competing facility very unlikely. Spanish officials told us that they preserve flexibility by retaining the ability to renegotiate a concession agreement if it is in the public interest to do so. They referred to this as "rebalancing" a concession agreement. For example, if the government believes that adding capacity to a certain concession highway is in the public interest, it can require the concessionaire to do so as long as the government provides adequate compensation for the loss of revenues. Likewise, the government may rebalance a contract with a concessionaire if, for example, traffic is below forecasted levels, to help restore economic balance to the concession. In this case, the government might offer an extension of the concession term to allow the concessionaire more time to recover its investments.
An executive of one concessionaire in Spain told us that it is important for the government to have this renegotiation authority, and concessionaires generally agree to the government's requests. Protection of the public interest has also extended to the workforce, and concession provisions have been used in this area as well. In some cases, public sector agencies entering into highway public-private partnerships involving existing toll roads have contractually protected the interests of the existing toll road workforce by ensuring that workers are able to retain their jobs or are offered employment elsewhere. Some public sector agencies have also addressed benefits issues. For example, in the Chicago Skyway concession there were 105 city employees when the concession began. According to the concession agreement, the city required the concessionaire to (1) comply with a living wage requirement; (2) pay prevailing wages for all construction activities; and (3) make its best effort to interview (if requested), though not necessarily offer employment to, all Chicago Skyway employees for jobs before the asset was transferred. A Chicago official told us that once the concessionaire commenced operation, five employees chose to maintain employment with the Chicago Skyway, while 100 took other city jobs. Those employees who took other city jobs retained their previous benefits. The state of Indiana also used concession provisions to help protect the workforce on the Indiana Toll Road. According to the concession agreement, these provisions required the concessionaire to follow certain laws such as nondiscrimination laws and minority-owned business requirements. Indiana officials told us that, prior to the lease agreement, the Governor of Indiana had made a commitment that each Indiana Toll Road employee would be offered a job either with the private concession company or with the state, without a reduction in pay or benefits in the new job. According to the Indiana DOT, all employees of the Indiana Toll Road (about 550 employees at the time the lease agreement commenced) were interviewed by the concessionaire, and about 85 percent of the employees transitioned to the private operator at equal or higher pay. According to an official with the toll road concessionaire, the average wage of an Indiana Toll Road employee increased from $11.00 per hour to between $13.55 and $16.00 per hour. Indiana officials indicated about 115 employees were offered placement with the state of Indiana, and those who retained employment with merit or nonmerit state agencies maintained all outstanding vacation and sick time. Those toll road employees who left state agencies (including those moving to the concessionaire) were paid for outstanding vacation time they had accrued, up to 225 hours. Indiana officials also indicated that, although employees who left state agencies are no longer part of the state's pension plan, their contributions and their vested state contributions were preserved, and these employees are now offered a 401(k) plan by the concessionaire. Another highway public-private partnership we examined, the TTC, involved new construction and, at the time of our review, had not yet reached the point of a concession; the Oregon projects likewise involved new construction and had not reached that point.
Unlike existing facilities, new construction does not involve an existing workforce that could lose its jobs or face significantly different terms of work when the private sector takes over operations. However, concession terms can be used to protect the future workforce that is hired to construct and operate a highway built with a highway public-private partnership. For example, in a different highway public-private partnership project in Texas that has signed a concession, State Highway 130, segments 5 and 6, the concession agreement states that prevailing wage rates will be set by the Texas DOT and that the concessionaire should meet goals related to the hiring of women, minorities, and disadvantaged business enterprises. According to the Texas DOT, the concessionaire is also required to establish and implement a small business mentoring program. Other countries have also acted to protect employees in highway public-private partnerships. For example, the United Kingdom has taken actions to ensure that the value gained in its highway public-private partnership projects is not gained at the expense of its workforce. According to the United Kingdom's Code of Practice on workforce matters, new and transferred employees of private concessionaires are to be offered "fair and reasonable" employment conditions, including membership in a pension plan that is at least equivalent to the public sector pension scheme that would otherwise apply. According to an official with the United Kingdom Treasury Department, this Code of Practice has been agreed to by both employers and trade unions and was implemented in 2003. The public sector also undertakes oversight and monitoring of concessionaires to ensure that they fulfill their obligations to protect the public interest. Such mechanisms can both identify when requirements are not being met and provide evidence for seeking remediation when the private sector does not correct the problem. In Indiana, an Indiana Toll Road Oversight Board was created as an advisory board composed of both state employees and private citizens to review the performance and operations of the concessionaire and potentially identify cases of noncompliance. This Oversight Board meets at least quarterly and has discussed items dealing with traffic incidents, concerns raised by state residents and constituents, and the implementation of electronic tolling on the facility. The Chicago Skyway concession also incorporates oversight. Oversight includes reviewing various reports, such as financial statements and incident reports filed by the concessionaire, and hiring independent engineers to oversee the concessionaire's construction projects. In both Indiana and Chicago, the concessionaire reimburses the public sector for oversight and monitoring costs—in Indiana, up to $150,000 per year adjusted for inflation. Oversight and monitoring also encompass penalties if a concessionaire breaches its obligations. For example, the highway public-private partnership contracts in Chicago and Indiana allow the public sector to ultimately regain control of the asset at no cost if the concessionaire is in material breach of contract. Additionally, the public sector has sometimes retained the ability to issue fines or citations to concessionaires for nonperformance. For example, according to the Texas DOT, in Texas an independent engineer will be assigned to the TTC concessionaire who will be able to issue "demerits" to the concessionaire for not meeting performance standards.
These demerits, if not remedied, could lead to concessionaire default. Foreign governments have also taken steps to provide oversight and monitoring of concessionaires. In Spain, the Ministry of Public Works assigns public engineers to each concession to monitor performance, not only during construction, to ensure that work is being done properly, but also during operation, by recording user complaints and incidents in which the concessionaire does not comply with the terms of the concession. Accountability and oversight mechanisms have also been incorporated in Australian concessions. In both Victoria and New South Wales, projects must demonstrate that they provide adequate information to the public on the obligations of the public and private sectors and that there are oversight mechanisms. In some instances, a separate statutory body, which may be chaired by a person outside of government, provides oversight, as was done on the CityLink toll road in Melbourne, Australia. Officials with a private concessionaire in Australia told us that they generally meet monthly with the state Road and Traffic Authority to review concession performance. In addition, the Auditors General of both Victoria and New South Wales are involved with oversight. In both states the Auditor General reviews the contracts of approved highway public-private partnerships. In New South Wales, the law requires publication of these reviews and contract summaries. In Victoria, government policy requires publication of the contracts, together with project summaries, including information regarding public interest considerations. Governments have also used financial analyses, such as asset valuations, and procurement processes to protect the public interest. We found that the state and local governments entering into the two existing highway public-private partnerships that we reviewed largely limited their analyses to asset valuation. For example, both the city of Chicago and the state of Indiana hired consultants to value the Chicago Skyway and the Indiana Toll Road, respectively, before signing concessions for these assets. In Indiana, the state's consultant performed a net present value analysis that determined the toll road was worth about $2 billion to the state. Because the winning bid of $3.85 billion that the state received was far more than the consultant's assessed value, Indiana used that valuation to justify that the transaction was in the public interest. The assistant budget director for Chicago told us that in Chicago an analysis showed the city could leverage only between $800 million and $900 million from the toll road. The officials then compared that amount to the $1.8 billion that the city received from the winning bidder and determined that the concession was in the public interest. Both valuations assumed that future toll rates would increase only to a limited extent under public control (a simplified sketch of this type of valuation follows this discussion). Additionally, steps have been taken to protect the public interest through procurement processes. Both Chicago and Indiana used an auction bidding process in which qualified bidders were presented with the same contract and bid on the same terms. This process ensured that the winning bidder would be selected on price alone (the highest concession fee offered) since all other important factors and public interest considerations—such as performance standards and toll rate standards—would be the same for all bidders.
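The valuations described above are, at bottom, discounted cash flow analyses of projected net toll revenues. The sketch below shows the basic mechanics under stated, entirely hypothetical assumptions (year-one revenue, toll growth under public control, and discount rate); actual valuations model traffic, toll schedules, and operating costs in far more detail.

```python
# Minimal discounted cash flow sketch of a toll road valuation. All
# figures are hypothetical; real valuations model traffic, toll
# schedules, and costs in detail.

def present_value(net_cash_flows, discount_rate):
    """Discount a stream of annual net toll revenues to today's dollars."""
    return sum(cf / (1 + discount_rate) ** (t + 1)
               for t, cf in enumerate(net_cash_flows))

years = 75                 # concession term, e.g., the Indiana Toll Road lease
base_revenue = 100e6       # hypothetical year-1 net toll revenue
toll_growth_public = 0.01  # modest toll growth assumed under public control

cash_flows = [base_revenue * (1 + toll_growth_public) ** t for t in range(years)]
valuation = present_value(cash_flows, discount_rate=0.06)
print(f"Public-control valuation: ${valuation / 1e9:.2f} billion")
```

A bid well above such a figure (as in Indiana, where the $3.85 billion bid exceeded a roughly $2 billion valuation) was taken as evidence that the lease was in the public interest, although the comparison is only as strong as the toll-growth assumption made for public control.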
Texas has also taken steps to protect the public interest through the procurement process for the TTC. While the Texas DOT signed the comprehensive development agreement with a private concessionaire for the TTC-35, the agreement does not guarantee that the private firm will be awarded the concession for any segment of the TTC. All segments may be put out for competitive procurement, and while the master development concessionaire has a right of first negotiation for some segments, it must negotiate with Texas and present a detailed facility plan. Additionally, according to the Texas DOT, the concessionaire is required to put together a facility implementation plan that, among other things, analyzes the projected budget and recommends a method for project delivery. Some foreign governments have recognized the importance of public interest issues in public-private partnerships and have taken a systematic approach to these issues. This includes developing processes, procedures, and criteria for defining and assessing elements of the public interest and developing tools to evaluate the public interest of public-private partnerships. These tools include the use of qualitative public interest tests and criteria to consider when entering into public-private partnerships, as well as quantitative tests such as Value for Money (VfM) and PSCs, which are used to evaluate whether entering into a project as a public-private partnership is the best procurement option available. According to a document from one state government in Australia (New South Wales), guidelines for private financing of infrastructure projects (which include the development of public interest evaluation tools) support the government's commitment to provide the best practicable level of public services by providing a consistent, efficient, transparent, and accountable set of processes and procedures to select, assess, and implement privately financed projects. Some governments have laid out elements of the public interest in public-private partnerships and criteria for how those elements should be considered when entering into such agreements. These steps help ensure that major public interest issues are transparently considered from the outset in public-private partnerships, including highway public-private partnerships. For example, the state of Victoria in Australia requires all proposed public-private partnership projects to evaluate eight aspects of the public interest to determine how they would be affected:

Effectiveness. Whether the project is effective in meeting the government's objectives. Those objectives must be clearly determined.

Accountability and transparency. Whether public-private partnership arrangements ensure that communities are informed about both public and private sector obligations and that there is oversight of projects.

Affected individuals and communities. Whether those affected by public-private partnerships have been able to contribute effectively during the planning stages and whether their rights are protected through appeals and conflict resolution mechanisms.

Equity. Whether disadvantaged groups can effectively use the infrastructure.

Public access. Whether there are safeguards to ensure public access to essential infrastructure.

Consumer rights. Whether projects provide safeguards for consumers, especially those for whom the government has a high level of duty of care or who are most vulnerable.

Safety and security. Whether projects provide assurance that community health and safety will be secured.

Privacy. Whether projects adequately protect users' rights to privacy.

Similarly, the government of New South Wales, Australia, formally considers the public interest before entering into public-private partnerships. Its evaluation focuses on eight factors similar to Victoria's: effectiveness in meeting government objectives, VfM, community consultation, consumer rights, accountability and transparency, public access, health and safety, and privacy. The public interest evaluation is conducted up front, prior to proceeding to the market, and is updated frequently, including prior to the call for detailed proposals, after finalizing the evaluation of proposals, and prior to the government signing contract documents. Additionally, foreign governments have used quantitative tests to identify and evaluate the public interest and determine whether entering into a project as a public-private partnership is the best option and delivers value to the public. In general, VfM evaluations examine total project costs and benefits and are used by some governments to determine if a public-private partnership approach is in the public interest for a given project. VfM tests are often done through a PSC, which compares the costs of doing a proposed public-private partnership project against the costs of doing that project through a public delivery model. VfM tests examine more than the financial value of a project; they also consider factors that are hard to quantify, such as design quality and functionality, quality in construction, and the value of unquantifiable risks transferred to the private sector. VfM tests are commonly used in Australia, the United Kingdom, and British Columbia, Canada. PSCs are often used as part of VfM tests. Generally speaking, a PSC test examines life-cycle project costs, including initial construction costs, maintenance and operation costs, and additional capital improvement costs that will be incurred over the course of the concession term. A PSC can also look at the value of various types of risk transfer to the private sector, whereby the more risk transferred to the private sector, the more value to the public sector. For example, in the United Kingdom, use of PSCs is mandated for all public-private partnership projects at both the national and local levels. British Columbia, Canada, also conducts a PSC for all public-private partnership proposals, comparing the full life-cycle costs of procuring the proposed project as a public-private partnership with those of a traditional design-bid-build approach. The British Columbia PSC not only compares the project costs but also evaluates the value of various risks. According to a Partnerships British Columbia official, the more risk transferred from the public to the private sector in a public-private partnership proposal, all else being equal, the better the value for the public. For example, this official said that the PSC values a given level of construction risk and determines the value to the public sector (based on the costs and probability of that risk occurring) of having the private sector assume that risk through a public-private partnership. The Partnerships British Columbia official also told us that the costs of risks occurring are often not included in traditional public cost estimates, which is one reason that cost overruns are so common in public sector infrastructure projects.
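To illustrate the comparison these officials describe, the following sketch computes a risk-adjusted public sector comparator and sets it against a hypothetical public-private partnership proposal. The structure (life-cycle costs plus the expected cost of retained risk, discounted to present value) follows the description above; every number, including the discount rate, is hypothetical.

```python
# Simplified public sector comparator (PSC): compare the present value
# of life-cycle costs under public delivery, adjusted for risks the
# public sector would retain, against a public-private partnership
# (P3) proposal. All figures are hypothetical.

def npv(costs_by_year, rate):
    """Present value of a stream of annual costs (year 0 first)."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(costs_by_year))

DISCOUNT_RATE = 0.05  # the choice of rate materially affects the result

# Public delivery: up-front construction, then annual operation and
# maintenance over a 30-year horizon.
public_costs = [400e6] + [15e6] * 30

# Expected cost of a risk retained under public delivery, e.g., a 40
# percent chance of a $150 million construction overrun.
retained_risk = 0.40 * 150e6

# P3 delivery: annual payments to the concessionaire, which bears the
# construction risk, so no retained-risk term is added.
p3_costs = [0] + [45e6] * 30

psc_total = npv(public_costs, DISCOUNT_RATE) + retained_risk
p3_total = npv(p3_costs, DISCOUNT_RATE)
print(f"Public delivery (risk-adjusted): ${psc_total / 1e6:.0f} million")
print(f"P3 delivery:                     ${p3_total / 1e6:.0f} million")
print("Value for money favors:", "P3" if p3_total < psc_total else "public delivery")
```

In this hypothetical the two options come out nearly even, which underscores the limitations officials described: small changes in the discount rate or in the assumed probability of an overrun can flip the conclusion.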
British Columbia uses the results of PSCs to help determine a project's procurement method. An official with British Columbia told us that many projects have been done through a traditional public procurement rather than privately because the results of the PSCs indicated that there was not enough value for money in the private approach. Although PSCs can be helpful in identifying and evaluating the public interest, they have limitations. According to officials in Australia, Canada, and the United Kingdom, PSCs are composed of numerous assumptions, as well as projections years into the future. PSCs may have difficulty modeling long-term events and reliably estimating costs. Additionally, discount rates used in PSCs to calculate the present value of future streams of revenue may be arbitrarily chosen by the procuring authority if not mandated by the government. Officials with the Audit Office of New South Wales, Australia, raised similar concerns and said the volume and volatility of assumptions raise questions about the validity and accuracy of PSCs. A United Kingdom government official told us that a limitation of its PSC is that it is a generic tool applying to all privately financed projects, from transportation to hospitals; therefore, some standard assumptions built into the model may not be accurate for a transportation project. The official added that the government is considering creating a sector-specific PSC. However, despite these concerns, there was general agreement among those with whom we talked that PSCs are useful tools. While foreign governments may have extensive experience using PSCs and other public interest assessment tools, these tools continue to evolve based on experience and lessons learned. The use of formal tools and processes also does not guarantee that highway public-private partnerships will not face significant challenges and problems. For example, although a document we reviewed indicated that a formal assessment process and a PSC were used to evaluate the Cross City Tunnel in Sydney, Australia, before it was built and operated through a concession agreement, this evaluation did not prevent the problems of low traffic, public opposition to the toll road, and bankruptcy that were discussed earlier in this report. The problems experienced led to changes in how public-private partnership projects will be handled and evaluated in the future. According to the Director of the New South Wales Department of Treasury and Finance, one of the big lessons learned from the Cross City Tunnel experience was the importance of public outreach and communication. Documents from the New South Wales government also showed that public interest tools were strengthened. For example, in December 2006, the New South Wales guidelines for public-private partnerships were updated to, among other things, strengthen VfM tests by conducting them from the perspective of the user or taxpayer and requiring updates of the tests through the tender process. In addition, the New South Wales Department of Treasury and Finance issued new guidance on how to determine appropriate discount rates—an important component of PSCs. Evolution of these tools has occurred in other countries as well. For example, according to an official with British Columbia, the methodology of its PSC tests is reviewed by an independent auditor, and improvements to the methodology are continually made.
According to an official with the United Kingdom Treasury Department, after criticism about potential VfM benefits and the use of PSC models developed by consultants, the United Kingdom has moved from an advisor-driven PSC to a Treasury-driven, two-part, four-stage VfM model that involves a simple spreadsheet and a qualitative assessment. Even this new model is being considered for change due to complex contracting issues. We found a more limited use of systematic, formal processes and approaches to the identification and assessment of public interest issues in the United States. Both Oregon and Texas have used forms of PSCs. For example, Oregon hired a consultant to develop a PSC that compared the estimated costs of the private sector proposal for the Newberg-Dundee project with a model of the public sector's undertaking the project, using various public financing sources, including municipal debt and TIFIA loans. According to the Innovative Partnerships Project Director in the Oregon DOT, the results of this model were used to determine that the added costs of undertaking the project as a public-private partnership (given the need for a return on investment by the private investors) were not justifiable in light of the limited value of risk transfer in the project. While this PSC was conducted before the project was put out for official concession, it was prepared after substantial early development work had been done by private partners. Similar to a PSC, Texas has developed "shadow bids" for two highway public-private partnerships in the state. These shadow bids included detailed estimates of design and construction costs, as well as operating costs and a detailed financial model, the results of which were compared against private sector proposals. While the model used by Texas is unique to each individual project, the methodology (such as the estimation of future costs) is similar across projects. In addition, the Director of the Texas Turnpike Authority of the Texas DOT told us that, while there are no statutory or regulatory provisions defining the public interest in public-private partnerships, the department develops evaluation procedures and criteria for each procurement, as well as contract provisions that are determined to be in the interests of the state. Public-private partnership proposals the department receives are then evaluated against those project criteria. However, these criteria are project-specific, and there are no standard criteria that are applied equally to all projects. Neither Chicago nor Indiana had developed public interest tests or used PSCs prior to the leasing of the Chicago Skyway or the Indiana Toll Road. Instead, analyses for these deals were largely focused on asset valuation and the development of specific concession terms. Other state and local governments we spoke with said they have limited experience using formal public interest criteria, tools, and tests. For example, the Chief Financial Officer of the California DOT told us that while the department is currently working with the California Transportation Commission to develop guidelines for public interest issues, this effort has not been finalized. Additionally, officials in New Jersey and Pennsylvania, two states that are exploring options, including private involvement, to better leverage existing toll roads, said that they have not yet created any formal public interest criteria or assessment tools such as PSCs.
An official with the Illinois DOT also said that his state had not yet developed public interest criteria or assessment tools. Not using formal public interest tests and tools means that aspects of the public interest can be overlooked. For example, because VfM tests allow the government to analyze the benefits and costs of doing a project as a public-private partnership, as opposed to other more traditional methods, not using such a test might mean that potential future toll revenues from public control of toll roads are not adequately considered. Neither Chicago nor Indiana gave serious consideration to the potential toll revenues they could earn by retaining control over their toll roads. In contrast, Harris County, Texas, in 2006 conducted a broad analysis of options for its public toll road system. This analysis was somewhat analogous to a VfM test. The analysis conducted an asset valuation under three possible scenarios, including public control and a concession. The county used this analysis to conclude that it would gain little through a long-term concession and that, through a more aggressive tolling approach, it could retain control of the system and realize financial gains similar to those that might be realized through a concession. Because public interest criteria and assessment tools generally mandate that certain aspects of the public interest be considered in public-private partnerships, forgoing these criteria and tools risks overlooking aspects such as the following:

Transparency. According to documents we reviewed, both Victoria and New South Wales, Australia, require transparency in public-private partnership projects so that communities and the public are well informed. Officials in Toronto, Canada, however, told us there was no such requirement, and a lack of transparency about the 407 ETR concession—including information about the toll rate structure—meant that some people did not understand the objectives of the concession or the tolling structure, which led to significant opposition to the project. The former Director of the Indiana Office of Management and Budget told us that the Indiana legislature, as well as others, complained that the Indiana Toll Road lease was done in "secrecy."

Consideration of communities and affected interests. Local and regional governments believe that there was limited coordination with them, as well as with the public, on the TTC project. This lack of consideration of local and regional interests and concerns led to opposition by those governments, a reaction that helped drive statewide legislation requiring the state to involve local and regional governments to a greater extent in public-private partnerships. While Chicago considered the city's interests in the Chicago Skyway lease, it did not necessarily consider broader interests, such as regional mobility. The Executive Director of the Chicago Metropolitan Agency for Planning (the metropolitan planning organization for the greater Chicago area) told us that regional interest issues, such as the traffic diversion onto local streets that might occur as a result of higher tolls on the Chicago Skyway, were not addressed in consideration of the lease. He added that, as a result, other routes near the Chicago Skyway might not be able to absorb the diverted traffic, causing regional mobility problems.
The use of formal public interest tests can also allow public agencies to evaluate the projected benefits, as well as the costs and trade-offs, of public-private partnerships. In addition, such tests can help determine whether the benefits outweigh the costs and whether proceeding with the project as a partnership is the superior model, or whether conducting the project through another type of procurement and financing model is better. Direct federal involvement in highway public-private partnerships has generally been limited to projects in which federal requirements must be followed because federal funds have been or will be used. While federal funding in highway public-private partnerships to date has been limited, the administration and DOT have actively promoted such partnerships through policies and practices, including developing experimental programs that waive certain federal regulations and encourage private investment. Although federal involvement with highway public-private partnerships is largely limited to situations where there is direct federal investment, recent highway public-private partnerships have, or could have, implications for national interests such as interstate commerce and homeland security. However, FHWA has given little consideration to potential national public interests in highway public-private partnerships. We have called for a fundamental reexamination of federal programs, including the highway program, to identify specific national interests in the transportation system and to help restructure existing programs to meet articulated goals and needs. This reexamination would provide an opportunity to define any national public interest in highway public-private partnerships and develop guidance for how such interests can best be protected. The increasing role of the private sector in financing and operating transportation infrastructure raises potential issues of national public interest. We also found that highway public-private partnerships that have used, or will use, federal funds and involve tolling may be required by law to use excess toll revenues (revenues beyond those needed for debt service, a reasonable return on investment to a private party, and operation and maintenance) for projects eligible for federal transportation funding. However, the methodology for calculating excess toll revenues is not clear. Direct federal involvement in highway public-private partnership projects is generally determined by whether federal funds were or will be involved in a highway project. As a result, FHWA has had a somewhat different involvement in each of the four U.S. highway public-private partnership projects we reviewed. Since June 2006, the Indiana Toll Road has been operated by a private concessionaire under a 75-year lease. The Indiana Toll Road was constructed primarily with state funds and then incorporated into the Interstate Highway System. Although about $1.9 million in federal funds were used to build certain interchanges on the highway, Indiana subsequently repaid these funds. FHWA officials told us they did not review the lease of the highway to the private sector because there were no federal funds involved and no obligation on FHWA under title 23 of the U.S.C. to do so. The Chicago Skyway was leased in October 2004 to a private concessionaire. FHWA officials told us that they did not review the Chicago Skyway lease agreement before it was signed. Only a limited amount of federal funding was invested in the Chicago Skyway.
According to FHWA, the state of Illinois received about $1 million in 1961 to construct an off-ramp from the Chicago Skyway to Interstate 94. In addition, about $14 million in federal funds were received in 1991 through an earmark in ISTEA. The Assistant Budget Director for Chicago told us the latter was for painting and various other improvements. FHWA officials stated that since the lease transaction did not involve any new expenditure of federal funds, there was no requirement that FHWA review and approve the lease before it was executed. According to FHWA officials, FHWA's primary role in the transaction was the modification of a 1961 toll agreement to allow Chicago to continue collecting tolls on the facility. However, because federal funds were involved, FHWA did determine that two portions of federal law were applicable, one governing how proceeds from the lease of the asset—the up-front payment of $1.8 billion—were used and the other governing the use of toll revenues.

Use of lease proceeds. Proceeds from the lease of property acquired, even in part, with federal funds are governed by section 156 of title 23 U.S.C. This section requires that states charge fair market value for the sale or lease of such assets and that the portion of the proceeds representing the federal share of the initial investment (about $15 million in this case) be used by the state for title 23 eligible projects. Title 23 eligible projects can include construction of new transportation infrastructure. According to FHWA, the federal share in the Chicago Skyway ranged between 0.88 percent and 2.95 percent, depending on whether money from the ISTEA earmark was considered an addition to the real property and assuming control over the I-94 connector had been transferred to the contractor. Title 23 of the U.S.C. covers a broad range of activities that are eligible for federal-aid highway funds, including reconstruction, restoration, rehabilitation, and resurfacing activities and the payment of debt service for a title 23 eligible project. FHWA determined that Chicago met its obligations under title 23 section 156 merely by retiring the Chicago Skyway debt ($392 million, or nearly 25 percent of the lease proceeds).

Use of toll revenue. When tolling is allowed on federally funded highways, the use of toll revenues is generally governed by section 129 of title 23 U.S.C. Under section 129, toll revenue must first be used for (1) debt service, (2) a reasonable return on investment to any private party financing the project, and (3) the operation and maintenance of the toll facility. If there are any revenues in excess of these uses, and if the state or public authority certifies that the facility is adequately maintained, then the state or public authority may use the excess revenues for any title 23 eligible purpose. According to FHWA, since federal funds were expended in the Chicago Skyway, a toll agreement has been executed between FHWA, the Illinois DOT, the city of Chicago, and Cook County providing that the toll revenues will be used in accordance with title 23 section 129. Although FHWA determined that provisions governing excess toll revenues were met, it did not independently determine whether the rate of return to private investors would be reasonable. The rate of return is a critical component in determining whether excess revenues exist.
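The section 129 ordering can be read as a simple revenue waterfall, sketched below with hypothetical figures. This is our illustration of the statutory ordering, not FHWA guidance; and as the next paragraph discusses, there is no standard definition of a "reasonable" return, so the second tier, and therefore the excess, is difficult to pin down in practice.

```python
# Illustration of the section 129 ordering for toll revenue use:
# (1) debt service, (2) a reasonable return on private investment,
# (3) operation and maintenance; whatever remains is "excess revenue"
# available for title 23 eligible purposes. All figures are
# hypothetical, and the "reasonable return" input is itself undefined
# in law.

def excess_toll_revenue(toll_revenue, debt_service, reasonable_return, o_and_m):
    """Apply the three statutory tiers in order and return the remainder."""
    remaining = toll_revenue
    for claim in (debt_service, reasonable_return, o_and_m):
        remaining -= min(remaining, claim)  # each tier is satisfied before the next
    return remaining

excess = excess_toll_revenue(
    toll_revenue=120e6,
    debt_service=40e6,
    reasonable_return=50e6,  # assumed; no standard definition exists
    o_and_m=20e6,
)
print(f"Excess available for title 23 purposes: ${excess / 1e6:.0f} million")
```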
According to FHWA officials, there is no standard definition of what constitutes a "reasonable rate of return." Therefore, FHWA concluded it had no basis to evaluate the reasonableness of the return. In addition, FHWA officials stated that under guidance issued by the agency's Executive Director in 1995, the reasonableness of the rate of return to a private investor is a matter to be determined by the state. FHWA officials said they relied on assurances from the city of Chicago that the rate of return was reasonable. According to DOT officials, FHWA determined that since the value of a concession was established through fair and open competitive procedures, the rate of return should be deemed to be reasonable. A review of the concession agreement indicates that the lease agreement was expected by the city of Chicago to "produce a reasonable return to the private operator" and that the city pledged "not to alter or revoke that determination" over the 99-year period of the lease. The Assistant Budget Director for Chicago also told us that the rates of return will be reasonable because a competitive bid process was used prior to signing the lease and because the concession agreement contains limitations on how much tolls can change over time—an important limitation since toll levels can significantly affect rates of return. FHWA officials have recognized that concession arrangements governing facilities paid for largely with federal funds face greater difficulty in meeting the requirements of sections 156 and 129 of title 23. For example, if a state received a $1 billion up-front payment to lease a highway built with 80 percent federal funds, the state would be required to invest $800 million of that payment in other title 23 eligible projects. According to the Director of the Texas Turnpike Authority Division of the Texas DOT, Texas's intent is to make all transportation infrastructure projects eligible for federal aid whenever possible. While at the time of our review no federal funds had been expended on the Trans-Texas Corridor (TTC-35) project, Texas is considering using federal funds to complete parts of the corridor. For the project to be eligible for federal funds, unless otherwise specified by FHWA, it must meet all federal requirements, including the environmental review process required under NEPA. The TTC-35 project is currently undergoing a two-tiered review process under NEPA. In Tier I, the Texas DOT has identified a potential 10-mile-wide corridor through which the actual facility will run, completed a draft environmental impact statement, which evaluates the impact of the project on the local and regional environment, and is awaiting federal approval through a record of decision. The record of decision, among other things, identifies the preferred alternative and provides information on the adopted means to avoid, minimize, and compensate for environmental impacts. The Tier I process is expected to be completed by early 2008. Tier II of the process will be used to determine the actual alignment of the road or rail line and will be completed in several parts for each facility, or unique segment of the facility. This process, like Tier I, includes identification of specific corridor segments, solicitation of public comments for each segment, and final approval, which will authorize construction. As we reported in 2003, environmental impact statements on federally funded highway projects take an average of 5 years to complete, according to FHWA.
The state of Texas has also entered into a Special Experimental Project No. 15 (SEP-15) agreement with FHWA for the TTC-35. According to FHWA, under this agreement FHWA has permitted the Texas DOT to release a request for proposals (RFP) and award the design-build contract prior to completion of the environmental review process. This sequence would not have been allowed under the federal highway regulations existing at the time. In accordance with the SEP-15 agreement, Texas entered into a contract with a private sector consortium to prepare a Master Development Plan for the TTC-35 and to assist in preparing environmental documents and analyses. The Master Development Plan is intended to help the state identify potential development options for the TTC-35 and to begin predevelopment work related to the project. The Master Development Plan also allows the private consortium to develop other highway facilities. In conjunction with this agreement, in March 2007, the private consortium was awarded a 50-year concession to construct, finance, operate, and maintain State Highway 130, segments 5 and 6 (a highway that is expected to connect to the TTC-35). Oregon has taken a similar approach: the Oregon Innovative Public-Private Partnerships Program provides for the planning, acquisition, financing, development, design, construction, and operation of transportation projects in Oregon with the private sector as a participant. Three projects have been identified under this program: (1) a potential widening of a 10-mile section of Interstate 205 (I-205) in the Portland area, (2) development of highways east of Portland serving existing industrial development and future residential and commercial development (called the Sunrise Corridor), and (3) construction of an 11-mile highway in the Newberg-Dundee corridor. Oregon sought and received FHWA SEP-15 approval for these projects. According to FHWA, the SEP-15 approval was to provide the Oregon DOT the flexibility to release an RFP and award a design-build contract prior to completion of the environmental review process, which was not permitted under federal highway regulations at the time. As discussed above, this requirement has changed. Subsequent to the SEP-15 approval, in October 2005, the state entered into an Early Development Agreement with FHWA that also permitted the state to engage the private sector in predevelopment activities prior to completion of the environmental review process. In January 2006, Oregon entered into preliminary development agreements with a private sector partner (Oregon Transportation Improvement Group) to proceed with predevelopment work on the three proposed projects. As of January 2007, Oregon had decided not to pursue the Sunrise Corridor project because it determined that projected toll revenue was not enough to cover the cost of operation or construction. Rather, Oregon plans to seek traditional funding sources. In July 2007, the state announced that it and the Oregon Transportation Improvement Group had ceased pursuing public-private development of the Newberg-Dundee project. According to the Oregon Department of Transportation, as of November 2007, the third project (the I-205 lane widening) was not yet in the regional transportation plan but was expected to be added to the plan without difficulty. As of May 2007, federal funding ($20.9 million) had been used for such things as environmental assessment, planning, and right-of-way acquisition on the Newberg-Dundee project.
Although federal involvement with highway projects and highway public-private partnerships is largely governed by whether there is a direct federal investment in a project, the administration and DOT have actively encouraged and promoted the use of highway public-private partnerships. This effort has been accomplished through both policies and practices, such as developing SEP-14 and SEP-15 procedures and preparing various publications and educational material on highway public-private partnerships. Encouraging highway public-private partnerships is a federal governmentwide initiative articulated in the President's Management Agenda and implemented through the Office of Management and Budget (OMB). OMB promotes, among other things, increasing the level of competition from the private sector for services traditionally done by the public sector. DOT has followed this lead by incorporating highway public-private partnerships into its own policy statements. Its May 2006 National Strategy to Reduce Congestion on America's Transportation Network states that the federal government should "remove or reduce barriers to private investment in the construction or operation of transportation infrastructure." FHWA has used its administrative flexibility to develop three experimental programs to allow more private sector participation in federally funded highway projects. The first, SEP No. 14 (SEP-14), has been in place since 1990 to permit contracting techniques that deviate from the competitive bidding provisions of federal law required for any highway built with federal funds. As FHWA has approved those techniques for widespread use since the program's inception, the program has evolved to allow other alternative contracting techniques, such as best-value contractor selection and the transfer of construction risk to the private construction contractor. States have used the techniques permitted under SEP-14 to bring more private sector involvement into building and maintaining transportation infrastructure than traditional procurement methods allow. For example, states used design-build contracting in almost 300 different construction and maintenance projects that were approved by FHWA between 1992 and 2003, including repavement of existing roads, bridge rehabilitation and replacement, and construction of additional highway lanes. The second experimental program, the Innovative Finance Test and Evaluation Program (TE-045), was established in April 1994. This program was initially designed and subsequently operated to give states a forum in which to propose and test the concepts that best met their needs. Since TE-045 did not make any new money available, its primary focus was to foster the identification and implementation of new, flexible strategies to overcome fiscal, institutional, and administrative obstacles faced in funding transportation projects. States were encouraged to consider a number of areas in developing proposals under the program, including income-generation possibilities for highway projects and alternative revenue sources that could be pledged to repay highway debt. States were also encouraged to consider the use of federal aid to promote highway public-private partnerships. According to FHWA, several types of financing tools were proposed by states and tested under TE-045. These included tools that provided expanded roles for the private sector in identifying and providing financing for projects, such as flexible matches and section 129 project loans.
The third experimental program, SEP No. 15 (SEP-15), is broad in scope and was designed to facilitate highway public-private partnerships and other types of innovation in the federal-aid highway process. SEP-15 allows for the modification of FHWA policy and procedure, where appropriate, in four different areas: contracting, compliance with environmental requirements, right-of-way acquisition, and project finance. According to FHWA, SEP-15 enables FHWA officials to review state transportation projects on a case-by-case basis to "increase project management flexibility, encourage innovation, improve timely project construction, and generate new revenue streams for federal-aid transportation projects." While this program does not eliminate overall federal-aid highway requirements, it is designed to allow FHWA to develop procedures and approaches that reduce impediments to states' use of public-private partnerships in highway-related and other transportation projects. Table 6 summarizes the highway projects in which FHWA has granted SEP-15 approvals. The SEP-15 flexibilities have been pivotal in allowing the highway public-private partnership arrangements we reviewed in Texas and Oregon to go forward while remaining eligible for federal funds. For example, until August 2007, federal regulations did not allow private contractors to be involved in highway design-build contracts with a state department of transportation until after the federally mandated environmental review process under NEPA had been completed. The Texas DOT applied for a waiver of this regulation under SEP-15 for its TTC project to allow its private contractor to start drafting a comprehensive development plan to guide decisions about the future of the corridor before the federal environmental review was complete. FHWA approved this waiver, which allowed the contractor's work to proceed during the environmental review process and could ultimately shorten the corridor's project timeline. According to the Texas DOT, it and FHWA maintain control over the NEPA decision-making process at all times; the developer's role is similar to that of other stakeholders in the project. Similarly, Oregon used the SEP-15 process to experiment with the concept of contracting with a developer early in the project development phase for three potential projects in and around Portland, Oregon. Like Texas, Oregon wanted to involve the private sector prior to completion of the NEPA process. FHWA and DOT have reinforced their legal and policy initiatives with promotional practices as well. These activities include the following:

Developing publications. Publications include a public-private partnership manual that has material to educate state transportation officials about highway public-private partnerships and to promote their use. The manual includes sections on alternative federal financing options for highway maintenance and construction and outlines different federal legal requirements relating to highway public-private partnerships, including the environmental review process. It also includes a public-private partnership user guide, which describes the many participants, stages of development, and factors (such as technical capabilities and project prioritization and selection criteria and processes) associated with developing and implementing public-private partnerships for transportation infrastructure projects.

Drafting model legislation for states to consider in enabling highway public-private partnerships.
The model legislation addresses such subjects as bidding, agreement structure, reversion of the facility to the state, remedies, bonds, federal funding, and property tax exemption, among other things.

Creating a public-private partnership Internet Web site. This Web site serves as a clearinghouse of information for states and other transportation professionals about public-private partnerships, pertinent federal regulations, and financing options. It has links to FHWA's model public-private partnership legislation, summaries of selected highway public-private partnerships, key DOT policy statements, and the FHWA public-private partnership manual, among other things.

Making public presentations. DOT and FHWA officials have made public speeches and written at least one letter to a state in support of highway public-private partnerships. For example, when Texas was considering modifying its public-private partnership statutes, FHWA's Chief Counsel, in a letter to the Texas DOT, warned that if Texas lost its initiative on highway public-private partnerships, "private funds flowing to Texas will now go elsewhere." DOT has also provided congressional testimony in support of highway public-private partnerships. For example, in recent testimony to Congress, DOT's Assistant Secretary of Transportation for Policy stated that highway public-private partnerships are "one of the most important trends in transportation" and that DOT "has made expansion of public-private partnership a key component" of DOT's ongoing initiatives to reduce congestion and improve performance.

Making tolling a key component of congestion mitigation. Such a strategy could act to promote highway public-private partnerships since tolls provide a long-term revenue stream that is key to attracting investors. One major part of DOT's May 2006 national strategy to address congestion is the Urban Partnership Agreement, under which DOT and selected metropolitan areas will commit to aggressive strategies to address congestion. The key component of these aggressive strategies is tolling and congestion pricing. Congestion pricing could involve networks of priced lanes on existing highways; variable user fees on entire roadways, including toll roads and bridges; or area-wide pricing involving charges on all roads within a congested area.

Although federal involvement with highway public-private partnerships is largely limited to situations where there is a direct federal investment, highway public-private partnerships can have implications for broader national interests, such as interstate commerce. FHWA officials told us that various federal laws and requirements that states must follow to receive federal funds are designed to protect national and public interests—for example, federally funded projects must receive environmental approval through the NEPA process. In addition, TIFIA loans must be investment grade and meet policy considerations, so they carry some public interest criteria. However, FHWA officials told us that no specific federal definition of national public interest, and no federal guidance on identifying and evaluating national public interests, exists. Thus, when federal funds are not involved in a project, there are few mechanisms to ensure that national public interests are identified, considered, and protected.
As a result, given the minimal federal funding in the highway public-private partnerships we reviewed, little consideration has been given to potential national public interests in these partnerships. Recent highway public-private partnerships have involved sizable investments of funds and significant facilities, which suggests that implications for national public interests exist. For example, both the Chicago Skyway and the Indiana Toll Road are part of the Interstate Highway System; the Indiana Toll Road is part of the most direct highway route between Chicago and New York City, and, according to one study, over 60 percent of its traffic is interstate in nature. However, federal officials had little involvement in reviewing the terms of either of these concession agreements before they were signed. In the case of Indiana, FHWA played no role in reviewing either the lease or the national public interests associated with leasing the highway, nor did it require the state of Indiana to review these interests. Similarly, development of the TTC may greatly facilitate North American Free Trade Agreement-related truck traffic nationwide. Although the TTC is going through the NEPA process, to date, no federal funding has been expended in the development of the project. In commenting on a draft of this report, DOT correctly noted that many of these same issues could be raised if the states involved had undertaken major projects with potential implications for national interests as publicly funded projects, using only state funds. Nevertheless, both state and DOT officials have asserted that without a public-private partnership, these projects would not have advanced. In addition, public-private partnerships may present distinct challenges: they can and have involved long-term commitments of up to 99 years and the loss of direct public control—issues that are not present in state-financed projects—and private entities are not accountable to the public in the same way public agencies are. The absence of a clear definition of national public interests in the national transportation system is not unique to highway public-private partnerships. We have called for a fundamental reexamination of the federal role in highways and a clear definition of specific national interests in the system, including in such areas as freight mobility. A fundamental reexamination of federal surface transportation programs, including the highway program, presents the opportunity to address emerging needs, test the relevance of existing policies, and modernize programs for the twenty-first century. The growing role of the private sector in both financing and operating highway facilities raises the questions of what role the private sector can and should play in the national transportation system and whether the presence of federal funding is the right criterion for federal involvement or whether other considerations should apply. For example, DOT has recognized the national importance of goods movement and the challenges of large, multimodal projects that cross state lines by establishing a "Corridors of the Future" program to encourage states to think beyond their boundaries in order to reduce congestion on some of the nation's most critical trade corridors. DOT plans to facilitate the development of these corridors by helping project sponsors reduce institutional and regulatory obstacles associated with multistate and multimodal corridor investments.
Whether such corridors, which could be seen as being in the national interest, could be developed if portions of them were under effective private ownership is just one of many questions that could be addressed in identifying national public interests in general and in public-private partnerships in particular. Once the national interest in highway public-private partnerships is more clearly defined, an appropriate federal role in protecting and furthering those defined interests can be established.

The recent report by the National Surface Transportation Policy and Revenue Study Commission illustrates the challenges of identifying national public interests both in general and in public-private partnerships in particular. The report encouraged the use of public-private partnerships as an important part of financing and managing the surface transportation system as part of an overall strategy for aligning federal leadership and federal transportation investments with national interests. As discussed earlier, the commission recommended broadening states' flexibility to use tolling and congestion pricing on the Interstate system but also recommended that the public interest would best be served if Congress adopted strict criteria for approving public-private partnerships on the Interstate Highway System, including limiting allowable toll increases, prohibiting non-compete clauses, and requiring concessionaires to share revenues with the public sector. This definition of the public interest stands in sharp contrast to the dissenting views of three commissioners and to comments provided by DOT on a draft of this report. In their minority report, the dissenting commissioners stated that the Commission's recommendations would replace negotiated terms and conditions with federal regulation and subject private toll operators to greater federal scrutiny than local public toll authorities. In commenting on a draft of this report, DOT stated that national interests are served by limiting federal involvement in order to allow these arrangements to grow and provide the benefits of which they are capable. These sharply divergent views should assist Congress as it considers the appropriate national interests and federal role in highway public-private partnerships.

Highway public-private partnerships show promise as a viable alternative, where appropriate, to help meet growing and costly transportation demands. The public sector can acquire new infrastructure or extract value from existing infrastructure while potentially sharing with the private sector the risks associated with designing, constructing, operating, and maintaining public infrastructure. However, highway public-private partnerships are not a panacea for meeting all transportation system demands, nor are they without potentially substantial costs and risks to the public—both financial and nonfinancial—and trade-offs must be made. While private investors can make billions of dollars available for critical infrastructure, these funds are largely a new source of borrowed funds, repaid by road users over what potentially could be a period of several generations. There is no "free" money in highway public-private partnerships. Many forms of public-private partnerships exist both within and outside the transportation sector, and conclusions drawn about highway public-private partnerships—those involving long-term concession agreements—cannot necessarily be drawn about partnerships of other types and in other sectors.
Highway public-private partnerships are fairly new in the United States, and although they are meant to serve the public interest, it is difficult to be confident that these interests are being protected when formal identification and consideration of public and national interests has been lacking and when only limited up-front analysis of public interest issues using established criteria has been conducted. Consideration of highway public-private partnerships could benefit from more consistent, rigorous, systematic, up-front analysis. Benefits are potential benefits—that is, they are not assured and can be realized only by weighing them against potential costs and trade-offs through careful, comprehensive analysis to determine whether public-private partnerships are appropriate in specific circumstances and, if so, how best to implement them.

Despite the need for careful analysis, the approach at the federal level has not been fully balanced: DOT has done much to promote the benefits of highway public-private partnerships but comparatively little either to assist states and localities in weighing potential costs and trade-offs or to assess how potentially important national interests might be protected in highway public-private partnerships. This is in many respects a function of the design of the federal program, as few mechanisms exist to identify potential national interests in cases where federal funds have not or will not be used. The historic test of the presence of federal funding may have been relevant at a time when the federal government played a larger role in financing highways but may no longer be relevant when there are new players and multiple sources of financing, including potentially significant private money. However, potential federal restrictions must be carefully crafted to avoid undermining the potential benefits, such as operational efficiencies, that can be achieved through the use of highway public-private partnerships. Reexamining the federal role in highways provides an opportunity to identify emerging national public interests, including the national public interests in highway public-private partnerships.

Finally, in the future, states may seek increased federal funding for highway public-private partnerships or seek to monetize additional assets for which federal funds have been used. If this occurs, it is likely that some portion of toll revenues may need to be used for projects that are eligible for federal transportation funding. Clarifying the methodology for determining excess toll revenues and reasonable rates of return in highway public-private partnerships would give clearer guidance to states and localities undertaking highway public-private partnerships and help reduce potential uncertainties for the private sector and the financial markets.

A reexamination of federal transportation programs provides an opportunity to determine how highway public-private partnerships fit in with national programs as well as an opportunity to identify the national interests associated with highway public-private partnerships. In order to balance the potential benefits of highway public-private partnerships with protecting key national interests, Congress should consider directing the Secretary of Transportation to consult with Congress and other stakeholders to develop and submit objective criteria for identifying national public interests in highway public-private partnerships.
In developing these criteria, the Secretary should identify any additional legal authority, guidance, or assessment tools required, as appropriate and needed, to ensure that national public interests are protected in future highway public-private partnerships. The criteria should be crafted to allow the department to play a targeted role in ensuring that national interests are considered in highway public-private partnerships, as appropriate.

To ensure that future highway public-private partnerships meet federal requirements concerning the use of excess revenues for federally eligible transportation purposes, we recommend that the Secretary of Transportation direct the Federal Highway Administrator to clarify federal-aid highway regulations on the methodology for determining excess toll revenue, including the reasonable rate of return to private investors in highway public-private partnerships that involve federal investment.

We provided copies of the draft report to DOT for comment prior to finalizing the report. DOT provided its comments in a meeting with the Assistant Secretary for Transportation Policy and the Deputy Assistant Secretary for Transportation Policy on November 30, 2007. DOT raised substantive concerns with several of the draft report's findings and conclusions, as well as with one of the recommendations. Specifically, DOT commented that the draft report did not analyze the benefits of highway public-private partnerships in the context of current policy and traditional procurement approaches. DOT stated that highway public-private partnerships are a potentially powerful response to current and emerging policy failures in the federal-aid highway program that both DOT and GAO have identified over the years. For example, DOT asserted that the current federal-aid program (1) encourages the misallocation of resources, (2) does not promote the proper pricing of transportation assets, including the costs of congestion, (3) is not tied to achieving defined results, and (4) provides weak incentives for innovation. DOT also stated that—in addition to supplying large amounts of additional capital to improve U.S. transportation infrastructure—public-private partnerships are responsive to a crisis of performance in government stewardship of the transportation network and traditional procurement approaches. DOT noted that highway public-private partnerships can bring discipline to the decision-making process, result in more efficient use of resources, and produce lower capital and operating costs, resulting in lower total costs of projects than under traditional public procurement approaches. DOT stated that traditional procurement approaches produce comparatively inferior results.

We agree with DOT that highway public-private partnerships have the potential to provide many benefits and that a number of performance problems characterize the current federal-aid highway program. Our draft report discusses the potential benefits cited by DOT, although we revised the draft to better clarify the potential pricing and resource-efficiency benefits of highway public-private partnerships that DOT cited in its comments. However, we also believe that all the benefits DOT cited are potential benefits—they are not assured and can be achieved only through careful, comprehensive analysis to determine whether public-private partnerships are appropriate in specific circumstances and, if so, how best to structure them.
Among the benefits that DOT cited was the ability of highway public-private partnerships to supply additional capital to improve transportation infrastructure. As our report states, this capital is not free money but rather a form of privately issued debt that must be repaid to private investors seeking a return on their investment by collecting toll revenues. Regarding DOT's comment about policy failures in the federal-aid highway program, we believe the most direct strategy to address performance issues is to reexamine and restructure the program, considering such factors as national interests in the transportation system and specific performance-related goals and outcomes related to mobility. Such a restructuring would help (1) better align and allocate resources, (2) promote proper pricing, (3) achieve defined results, and (4) provide incentives for innovation. We believe our report places highway public-private partnerships in their proper context as viable potential alternatives that must be considered in such a reexamination and, therefore, made no further changes to the report.

Regarding DOT's characterization of a crisis of performance in government stewardship of the transportation network and its assertion that traditional procurement approaches produce comparatively inferior results, our past work has recognized concerns about particular projects and public agencies, as well as improvements that are needed to public procurement processes in general. It was not within the scope of our review to systematically compare the results of projects acquired through public-private partnerships with those acquired through traditional procurement approaches. Nevertheless, we believe neither our work—nor work by others—provides a foundation sufficient to support DOT's sweeping characterization of public stewardship as a "crisis," or its far-reaching conclusion that traditional procurement approaches produce inferior results compared with public-private partnerships. We, therefore, made no further changes to our report.

DOT also disagreed with much of our discussion concerning protection of the public interest in highway public-private partnerships. DOT stated that many federal and state laws govern how transportation projects are selected and delivered, including highway public-private partnerships, and that the draft report did not explain why highway projects delivered through public-private partnerships pose additional challenges to protecting the public interest, or why there should be a greater interest in such projects than in highways built and operated by state and local governments. In response to DOT's comments, we added information to the final report about initiatives that certain states have taken to identify and protect the public interest in highway public-private partnerships. We agree that federal and state laws governing traditional highway procurement contain mechanisms to protect the public interest and that many of the public interest concerns are the same regardless of how the project is delivered. However, we continue to believe that additional and more systematic approaches are necessary with highway public-private partnerships given the long-term nature of concession agreements (up to 99 years in some cases), the potential loss of public control, and the fact that private entities are not accountable to the public in the same way public agencies are.
Similarly, DOT disagreed with our discussion of national public interests and stated that our draft report did not explain why highway projects undertaken through highway public-private partnerships raise issues of potential national interest more than if a state or local government undertook them. DOT stated that the report did not adequately explain how highway public-private partnerships affect national interests, such as interstate commerce, in a way that would allow policy makers to clearly understand the nature of those concerns and assess what actions are needed to address them. As stated above, we agree that highway projects delivered through state and local governments raise many of the same concerns but believe that additional and more systematic approaches are necessary with highway public-private partnerships. Furthermore, it was not the objective of our report to define the national interest concerns on particular projects or to suggest what actions were needed to address such concerns. Rather, our report illustrates that such projects may have implications for national interests and that it is important to consider such interests and their implications up front as part of the decision-making process in order to ensure that any potential concerns are identified, evaluated, and resolved. At the current time, few mechanisms exist to allow such consideration when federal funds are not involved in a project. As discussed in our report, the reexamination of federal transportation programs, which we have called for in previous reports, provides an opportunity to determine the most appropriate structure of these federal programs, where highway public-private partnerships fit into this structure, and the national interests associated with highway public-private partnerships.

Finally, DOT indicated that the scope of our work focused primarily on a subset of public-private partnerships involving long-term concession agreements and that, as a result, our conclusions cannot be generalized to other types of public-private partnerships. We agree with DOT that the scope of our work focused on only a subset of all types of public-private partnerships. Our report acknowledges that there are also public-private partnerships in nontransportation areas, as well as in other modes of transportation (such as mass transit). We also acknowledge that there are other types of highway public-private partnerships, such as availability payments, that are not included in our scope. In response to DOT's comments, we made these scope limitations clearer in our report and acknowledged that the findings and conclusions of our report cannot necessarily be extrapolated to other types of public-private partnerships.

Our draft report recommended that DOT develop and submit to Congress a legislative proposal establishing objective criteria for identifying national public interests in highway public-private partnerships, including any additional legal authority required by the Secretary of Transportation to develop regulations, guidance, and assessment tools, as appropriate, to ensure that such interests are protected in future highway public-private partnerships. DOT disagreed with this recommendation, stating that the draft report did not provide sufficient evidence to explain why the federal government should intrude on inherently state activities or to justify a more expansive federal role.
Instead, DOT stated that federal involvement should be limited in order to allow these arrangements to grow and provide the benefits of which they are capable. As discussed in our report, the reexamination of federal transportation programs provides an opportunity to determine the most appropriate structure of these federal programs, where highway public-private partnerships fit into this structure, and the potential national interests associated with highway public-private partnerships. We believe that once these specific national interests have been established, instead of necessarily leading to a more expansive federal role, the federal government can play a more targeted role—including ensuring that identified national interests in highway public-private partnerships are considered by states and localities, as appropriate. We have, therefore, deleted our recommendation and instead suggested that Congress consider directing DOT to undertake these actions.

We also recommended that the Secretary of Transportation direct the Administrator of FHWA to clarify federal-aid highway regulations on the methodology for determining excess toll revenue, including a reasonable rate of return to private investors in highway public-private partnerships. DOT indicated, in response to this recommendation, that it would reexamine the regulations and take appropriate action, as necessary, to ensure the regulations are clear. Therefore, we made no change to the recommendation. DOT also provided technical comments that were incorporated, as appropriate. We also obtained comments from states, localities, and organizations in the foreign countries included in our review. In general, these comments were technical in nature and were incorporated where appropriate.

We are sending copies of this report to appropriate congressional committees; the Secretary of Transportation; the Administrator of the Federal Highway Administration; and the Director, Office of Management and Budget. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-2834 or heckerj@gao.gov. Contact points for our Office of Congressional Relations and Public Affairs Office may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

Our work focused on federal surface transportation and highway programs and the issues associated with private sector participation in providing public transportation infrastructure. In particular, we focused on (1) the benefits, costs, and trade-offs associated with highway public-private partnerships; (2) how public officials have identified, evaluated, and acted to protect the public interest in public-private partnership arrangements; and (3) the federal role in highway public-private partnerships and potential changes needed in this role. Our scope was limited to identifying the primary issues associated with using public-private partnerships for highway infrastructure, not to conducting a detailed financial analysis of the benefits and costs of specific arrangements. We selected recent projects to review, such as the lease of the Chicago Skyway and the Indiana Toll Road and the planning efforts in Oregon and for the Trans-Texas Corridor (TTC), to understand decision-making processes.
These projects were selected because they were recent examples of highway public-private partnerships, were large dollar projects, or used different approaches to highway public-private partnerships. We also spoke with states that were considering highway public-private partnerships, including California, New Jersey, and Pennsylvania. It was not our intent to review all highway public-private partnerships in the United States. We also did not review all types of highway public-private partnerships. For example, we did not review highway public-private partnerships involving shadow tolling or availability payments. In shadow tolling, the public sector pays a private sector company an amount per user of a roadway, as opposed to direct collection of a toll by the private company. In availability payments, a private company is paid based on the availability of a highway to users. These arrangements were not included in our scope, and the findings and conclusions of this study cannot necessarily be extrapolated to those or other types of public-private partnerships. In reviewing highway public-private partnerships, it was not our intent to either endorse or refute these projects but rather to identify key public policy issues associated with using public-private partnerships to provide highway infrastructure.

To identify the benefits, costs, and trade-offs associated with public-private partnerships for tolled highway projects, we collected and reviewed relevant documents, including concession agreements, planning documents, toll schedules, guidance, and academic, corporate, and government reports. We obtained toll schedule data from the Chicago Skyway concession company and used them to project a range of future maximum toll rates using Congressional Budget Office estimates of future growth rates for gross domestic product (GDP) and the consumer price index (CPI) and Census Bureau forecasts for population growth (in order to determine forecasted per capita GDP). We also conducted interviews with public-sector representatives from state departments of transportation; elected officials; public-interest groups; municipal planning organizations; Federal Highway Administration (FHWA) representatives; and other representatives at municipal, state, and federal levels. We also spoke with foreign government representatives in the United Kingdom, and we visited relevant public- and private-sector representatives in Canada, Spain, and Australia to understand the foreign perspective and to identify common benefits, costs, and trade-offs experienced in other countries. We selected countries that had a history of using highway public-private partnerships to obtain highway infrastructure, had had highway public-private partnerships in place long enough that lessons learned could be determined, or had developed tools to assess public interest issues. These foreign public-private partnership experiences were compared with experiences in the United States. We conducted interviews with private-sector concessionaires; financial investors; and legal, technical, and financial advisors to the public and private sectors. Finally, we visited public-private partnership projects, including the Chicago Skyway, the Indiana Toll Road, and the 407 Express Toll Road (ETR) in Toronto, Canada.
To assess the reliability of the Chicago Skyway historic toll data, we (1) reviewed sources containing historic toll information, including the city's request for qualifications from potential concession companies, an academic paper, and a relevant journal article, and (2) worked closely with the Assistant Budget Director for the city of Chicago to identify any data problems. We found a discrepancy in the toll rates, brought it to the official's attention, and worked with him to determine the correct historic toll rates. We determined that the data were sufficiently reliable for the purposes of this report.

To estimate each year's population in order to estimate annual GDP per capita, we used the Census Bureau's interim population projections, which were created in 2004 and which project population growth in 10-year increments. We computed the average annual rate of increase in estimated population for every 10-year period and then used each 10-year period's average annual rate of increase to estimate the population for each year in that period. As a base population estimate, we used the Census Bureau's population estimate of just over 303 million on January 1, 2008. We divided the forecasted nominal GDP for every year by the projected population in that year to determine the forecasted per capita nominal GDP. (A simplified sketch of this computation appears below.) We determined the Census Bureau data were reliable for our use by checking for obvious errors or omissions, as well as anomalies such as unusual data points. We used the CPI to convert past and projected toll rates to 2007 dollars. To convert amounts denominated in foreign currencies, we converted to 2007 U.S. dollars using the Organization for Economic Cooperation and Development's purchasing power parities for GDPs. To obtain information on the value of concession agreements and the use of lease proceeds, we obtained financial information from the concession companies and state representatives.

To determine how public officials have identified, evaluated, and acted to protect the public interest in public-private partnership arrangements, we conducted site visits of highway public-private partnerships and visited selected foreign countries with long-term experience in conducting highway public-private partnerships. We visited the state of Oregon to examine three potential public-private partnership projects in the metropolitan Portland region. We also conducted site visits for the Chicago Skyway and Indiana Toll Road, as well as the TTC in Texas and the 407 ETR in Toronto, Canada. We also conducted visits to Spain and to the states of New South Wales and Victoria in Australia. For each site visit, we met with relevant officials from public-sector agencies, such as state departments of transportation and state financial agencies; consultants and advisors to the public sector, including legal, financial, and technical advisors; private-sector operators; and other relevant stakeholders, such as users groups. Interviews covered a wide range of topics, including a discussion of how the public interest was defined, evaluated, and protected in the relevant public-private partnership project. In addition to conducting interviews, we collected relevant documents, including legal contracts, public interest assessment tool guidance, procurement documents, financial statements, and reports, and analyzed them as necessary. Where appropriate, we reviewed contracts for certain public interest mechanisms.
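The per capita GDP projection described above reduces to a small computation: interpolate annual population between the Census Bureau's 10-year projection points at a constant annual rate, then divide each year's forecasted nominal GDP by that year's population. The sketch below illustrates the mechanics only; the January 1, 2008, base population comes from the text, while the decade-end populations and the GDP growth rate are hypothetical placeholders, not the CBO and Census figures used in the report.

```python
# Sketch of the population interpolation and per capita GDP computation
# described above. Decade-end populations and the GDP growth rate are
# hypothetical placeholders, not the figures used in the report.

BASE_POP = 303_000_000  # Census Bureau estimate for January 1, 2008 (from the text)

def annual_rate(start_pop: float, end_pop: float, years: int = 10) -> float:
    """Average annual growth rate implied by a 10-year population increment."""
    return (end_pop / start_pop) ** (1 / years) - 1

def project_population(base, decade_end_pops):
    """Yearly population series, using one constant annual rate per decade."""
    series, pop = [], base
    for end_pop in decade_end_pops:
        rate = annual_rate(pop, end_pop)
        for _ in range(10):
            pop *= 1 + rate
            series.append(pop)
    return series

# Hypothetical decade endpoints and a flat 4.5 percent nominal GDP growth path.
populations = project_population(BASE_POP, [330e6, 355e6])
gdp_forecasts = [15e12 * 1.045 ** (year + 1) for year in range(20)]
per_capita_gdp = [gdp / pop for gdp, pop in zip(gdp_forecasts, populations)]
print(f"Year-1 forecasted per capita nominal GDP: ${per_capita_gdp[0]:,.0f}")
```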
In addition to those site and country visits, we met with officials from British Columbia, Canada, and the United Kingdom to discuss their processes and tools for evaluating and protecting the public interest. We also held interviews with officials of FHWA and collected and analyzed policy and legal documents related to public interest issues.

To address the federal role in highway public-private partnerships, we reviewed pertinent legislation; prior GAO reports and testimonies; and other documents from FHWA, state departments of transportation (DOT), and foreign national and provincial governments. These included policy documents from DOT, the public-private partnership Internet Web site developed by FHWA, model legislation prepared by FHWA, the FHWA public-private partnership manual, and various public presentations made by FHWA officials about highway public-private partnership issues. We also obtained data from FHWA on the use of the SEP-14 and SEP-15 processes, including a list of projects approved to use these processes. Further, we obtained data from FHWA on the use of private activity bonds in the context of highway-related projects. After checking for obvious errors or omissions, we deemed these data reliable for our use. We discussed federal tax issues, including the deduction from income of depreciation for highway public-private partnerships, with both FHWA and a tax expert associated with the Chicago Skyway lease. Our discussion of national interests in highway projects was based on a review of DOT's fiscal years 2006 to 2011 strategic plan, documentation of the Department of Defense Strategic Highway Network, and pertinent legislation related to the National Highway System. We also interviewed FHWA officials, officials from state DOTs and local governments, officials from private investment firms, and officials from foreign national and provincial governments that have entered into highway and other public-private partnerships. Discussions with FHWA included clarifying how it determines such things as reasonable rates of return on highway projects where there is private investment and the use of proceeds when there is federal investment in a highway facility that is leased to the private sector. Where feasible, we corroborated these clarifications with documents obtained from FHWA.

We conducted this performance audit from June 2006 to February 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Project description: The Chicago Skyway is a 7.8-mile elevated toll road connecting Interstate 94 (the Dan Ryan Expressway) in Chicago to Interstate 90 (the Indiana Toll Road) at the Indiana border. Built in 1958, the Skyway was operated and maintained by the city of Chicago Department of Streets and Sanitation. In March 2004, the city of Chicago issued a request for qualifications from potential bidders interested in operating the facility on a long-term lease basis. It received 10 responses and in May 2004 invited five groups to prepare proposals. Bids were submitted in October 2004, and the long-term concession was awarded on October 27, 2004, to the Skyway Concession Company (SCC), a group that included Cintra and Macquarie.
This was the date the contract was signed.

Project concession fee: Cintra/Macquarie bid $1.83 billion.

Concession term: 99 years.

Institutional arrangements: Cintra is part of Grupo Ferrovial, one of the largest infrastructure development companies in Europe; Macquarie Infrastructure Group is a subsidiary of Macquarie Bank Limited, Australia's largest investment bank. SCC assumed operations on the Chicago Skyway on January 24, 2005. SCC is responsible for all operating and maintenance costs of the Chicago Skyway but has the right to all toll and concession revenue. This agreement between SCC and the project sponsor, the city of Chicago, was the first long-term lease of an existing public toll road in the United States.

Financing: The original financial structure was: Cintra equity—$485 million; Macquarie equity—$397 million; and bank loans—$1 billion (approximately). SCC subsequently refinanced the capital structure in 2005, which reduced the equity holdings of Cintra and Macquarie to approximately $500 million. Originally financed by European banks, the $1.550 billion refinancing also included Citigroup. The refinancing involved capital accretion bonds ($961 million) with a 21-year maturity and an interest rate equivalent to 5.6 percent. There is an additional $439 million in 12-year floating rate notes and $150 million in subordinated bank debt provided by Banco Bilbao Vizcaya Argentaria and Santander Central Hispano of Spain, together with Calyon of Chicago.

Revenue sources: Based on tolls: up to $2.50 until 2008; $3.00 until 2011; $3.50 until 2013; $4.00 until 2015; $4.50 until 2017; and $5.00 starting in 2017.

Lease proceeds: Proceeds from the agreement provided $463 million to pay off existing Chicago Skyway debt; $392 million to refund long- and short-term debt and to pay other city of Chicago obligations; $500 million for a long-term reserve and $375 million for a medium-term reserve for the city of Chicago; and a $100 million neighborhood, human, and business infrastructure fund to be drawn down over 5 years.

Project description: The Indiana Toll Road stretches 157 miles across the northernmost part of Indiana from its border with Ohio to the Illinois state line, where it provides the primary connection to the Chicago Skyway and downtown Chicago. The Indiana Toll Road links the largest cities on the Great Lakes with the Eastern Seaboard, and its connections with Interstate 65 and Interstate 69 lead to major destinations in the South and on the Gulf Coast. For the past 25 years, the Indiana Toll Road has been operated by the Indiana DOT. In 2005, the Governor of Indiana tasked the Indiana Finance Authority to explore the feasibility of leasing the toll road to a private entity. A Request for Toll Road Concessionaire Proposals was published on September 28, 2005. Eleven teams submitted proposals by the October 26 deadline. The lease concession was awarded to the Indiana Toll Road Concession Company LLC (ITRCC), a 50/50 partnership between Cintra and Macquarie.

Project concession fee: ITRCC submitted the highest bid of $3.8 billion.

Concession term: 75 years.

Institutional arrangements: ITRCC is a 50/50 partnership between Cintra, which is part of Grupo Ferrovial, and Macquarie Infrastructure Group. The Indiana Toll Road lease transaction was contingent upon authorizing legislation. House Enrolled Act 1008, popularly known as "Major Moves," was signed into law in mid-March 2006.
On April 12, 2006, ITRCC and the Indiana Finance Authority executed the "Indiana Toll Road Concession and Lease Agreement." Pursuant to its terms, the Indiana Finance Authority agreed to terminate the then-current operational lease to the Indiana DOT. A 10-member board of directors oversees ITRCC and its operations of the Indiana Toll Road. ITRCC formally assumed operational responsibility for the toll road on June 29, 2006.

Financing: The financing structure is: Cintra equity—$385 million; Macquarie equity—$385 million; and bank loans—$3.030 billion. Loans were provided by a collection of seven European banks: Banco Bilbao Vizcaya Argentaria SA, Banco Santander Central Hispano SA, and Caja de Ahorros y Monte de Piedad de Madrid, all of Spain; BNP Paribas of France; DEPFA Bank of Germany; RBS Securities Corporation of Scotland; and Dexia Crédit Local, a Belgian-French bank.

Revenues: Based on tolls: $8.00 through June 30, 2010, for two-axle vehicles, with higher tolls for three- to seven-axle vehicles. Beginning June 30, 2011, tolls can increase annually by the greatest of 2 percent, the percentage increase in the CPI, or the percentage increase in per capita nominal GDP (a sketch of this escalation rule appears after the project descriptions below).

Lease proceeds: The concession fee will provide funding for the Major Moves program, which will support about 200 new construction and 200 major preservation projects around the state, including beginning construction of Interstate 69 between Evansville and Indianapolis. The proceeds will also fund projects in the seven toll road counties and provide $150 million over 2 years to all of the state's 92 counties for roads and bridges.

Project description: The TTC program is envisioned to be a 4,000-mile network consisting of a series of interconnected corridors containing tolled highways for automobile traffic and separate tolled truckways for motor carrier traffic; freight, intercity passenger, and commuter rail lines; and various utility rights-of-way. The Texas Transportation Commission formally adopted a TTC action plan in June 2002, which identified four priority segments of the TTC that roughly parallel the following existing routes: Interstate 35 from Oklahoma to San Antonio and Interstate 37 from San Antonio south to the border with Mexico; Interstate 69 from Texarkana to Houston to Laredo and the lower Rio Grande Valley; Interstate 45 from Dallas-Fort Worth to Houston; and Interstate 10 from El Paso in the west to the border of Louisiana at Orange. Plans call for the TTC to be completed over the next 50 years, with routes prioritized according to Texas' transportation needs. Texas DOT, the state transportation agency, will oversee planning, construction, and ongoing maintenance, although private vendors can deliver the services, including daily operations. In 2005, the Texas DOT selected a consortium led by Cintra and Zachry Construction Corporation under a competitively procured comprehensive development agreement (CDA) to develop preliminary concept and financing plans for TTC-35, including segments comprising the 600-mile Interstate 35 corridor in Texas. Included in this plan are facilities adjacent to Interstate 35 between Dallas and San Antonio consisting of a four-lane toll road that could eventually include separate truck toll facilities, utilities, and freight, commuter, and high-speed rail lines. Under the terms of the CDA, Cintra-Zachry produced the master development and financial plan for TTC-35.
Once the master plan is complete, individual project segments—be they road, rail, utilities, or a combination of these—may be developed, as specified in the separate facility implementation plans under the master plan. Cintra-Zachry will have the right of first negotiation for development of some facilities identified in the master plan, subject to Texas DOT's approval. According to the Texas DOT, the contract only required the department to negotiate in good faith for possible concession contracts worth at least $400 million. The award of the State Highway 130 (segments 5 and 6) agreement discussed above fully meets the requirements of the CDA. However, Cintra-Zachry is eligible for consideration on future TTC-35 facilities.

Project cost: Initial cost estimates for the full 4,000-mile TTC project range from $145 billion to $184 billion in 2002 dollars, as reported in the Texas DOT's June 2002 TTC plan. According to the Texas DOT, this would include all highway and rail modes fully built as envisioned in the 2002 plan. The Texas DOT acknowledges that many of the proposed facilities or modes may not be needed. Implementation of this plan includes the flexibility to build only what will be needed.

Institutional arrangements: The consortium Cintra-Zachry, LP, is 85 percent owned by Cintra Concesiones de Infraestructuras de Transporte, S.A., and 15 percent owned by Zachry Construction Corporation. Zachry Construction Corporation is a privately owned construction and industrial maintenance service company located in San Antonio, Texas. The Cintra-Zachry team produced the master development plan and financial plan for TTC-35. This plan was accepted by the Texas DOT in 2006. The team may opt to perform additional activities such as financing, planning, design, construction, maintenance, and toll collection and operation of segments of the approved development plan for the corridor, as approved by the Texas DOT and FHWA.

Project financing: To be determined for the entire TTC program. The final Cintra-Zachry TTC-35 proposal called for a capital investment of $6 billion in a toll road linking Dallas and San Antonio and $1.2 billion in concession payments to Texas DOT for the right to operate the facility for 50 years. According to the Texas DOT, the current Master Development Plan shows approximately $8.8 billion and $2 billion, respectively.

Revenue sources: Tolls. The CDA between Cintra-Zachry and Texas DOT does not specify how toll rates will be set and adjusted or the term of any toll concessions for the corridor. According to the Texas DOT, state statute and department policy require the Texas DOT to approve all rate-setting and rate-escalation methodologies. The CDA requires Cintra-Zachry to comply with these requirements. The State Highway 130 agreement specifically sets toll rates and the formula for future adjustments.

Lease proceeds: To be determined.

Project descriptions: In January 2006, the Oregon Transportation Commission approved the Oregon DOT agreements with the Oregon Transportation Improvement Group (OTIG) for predevelopment work on three proposed public-private partnership highway projects—the Sunrise Corridor, South Interstate 205 Widening, and Newberg-Dundee Transportation Improvement Projects. The proposed Sunrise Corridor is the construction of a new four-lane, limited-access roadway facility to SE 172nd (segment 1) and additional transportation infrastructure to serve the newly incorporated city of Damascus (segment 2).
The proposed South Interstate 205 Corridor Improvements project is a widening of this major north-south freight and commuter route in the Portland metropolitan region. The proposed Newberg-Dundee project is an identified alternative corridor (bypass) that is approximately 11 miles long, starting at the east end of Newberg and ending near Dayton at the junction with Oregon 18. Under its agreements with Oregon DOT, Macquarie will perform the predevelopment work for all three projects under three separate contracts and will absorb the predevelopment costs for each project that proceeds into implementation. If a project does not proceed, Oregon DOT will reimburse Macquarie for the predevelopment work for that project.

Sunrise Corridor: OTIG and Oregon DOT determined that the Sunrise Corridor would not be toll-viable and decided to postpone the project indefinitely. This decision was based on the project not offering substantial time savings over alternative routes in the area and on the uncertainty of predicting traffic on the proposed project. According to an Oregon DOT official, the project will be put on hold and may be reconsidered in the future, but it is not considered a priority at this time. Oregon DOT paid Macquarie $500,000 for the study.

South Interstate 205 widening: According to an Oregon DOT official, this project is not yet listed in the regional transportation plan, but the environmental review process has already begun. Final decisions on whether this project will proceed will not occur until the environmental assessment is completed.

Newberg-Dundee: In July 2007, OTIG and Oregon DOT agreed to cease pursuing public-private development of a Newberg-Dundee tolled bypass after an independent analysis confirmed that the plan to charge a toll on the bypass alone would not produce sufficient revenue to finance the planned project under a public-private concession agreement. Instead, according to an Oregon DOT official, the project will likely be continued under a traditional public-sector procurement approach using the private sector as contractors. According to this official, the road is still expected to be tolled.

Project description: Highway 407 ETR stretches 108 kilometers through the Greater Toronto Area. In 1998, as part of the largest privatization project in Canadian history at that time, the Province of Ontario put out a tender for the operation of the original 68 kilometers of highway, with a requirement to build the remaining 40 kilometers. Following an international competition, the 407 ETR consortium, led by Cintra of Grupo Ferrovial, SNC-Lavalin, and Capital d'Amerique CDPQ, was awarded the 99-year contract in 1999.

Project cost: 3.1 billion Canadian dollars for a 99-year lease.

Institutional arrangements: The 407 ETR consortium was initially led by Cintra of Grupo Ferrovial, SNC-Lavalin, and Capital d'Amerique CDPQ. In 2002, Macquarie Infrastructure Group purchased all of Capital d'Amerique CDPQ's interest in the toll road.

Revenue sources: Tolls are based on the level of traffic flow. Toll rates are guaranteed to increase at 2 percent per year for the first 15 years and may increase by an amount set by the concessionaire if traffic exceeds certain levels.

Lease proceeds: Most of the proceeds were deposited into a general consolidated revenue fund, and each resident of Ontario received a $200 check from the government for the sale.
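The toll escalation rule in the Indiana Toll Road description above lends itself to a short illustration. The 2 percent floor and the CPI and per capita nominal GDP escalators come from the concession terms as described in the text; the index values in the example are hypothetical, and the $8.00 starting toll is the two-axle rate quoted above.

```python
# Sketch of the Indiana Toll Road escalation rule described above: after
# June 30, 2011, the annual toll increase may be the greatest of 2 percent,
# CPI growth, or per capita nominal GDP growth. Index values are hypothetical.

def max_toll_increase(cpi_growth: float, per_capita_gdp_growth: float) -> float:
    """Maximum allowable annual toll increase, as a fraction."""
    return max(0.02, cpi_growth, per_capita_gdp_growth)

# Example: with hypothetical 2.8 percent CPI growth and 4.1 percent per capita
# nominal GDP growth, the $8.00 two-axle toll could rise to at most $8.33.
toll = 8.00
cap = max_toll_increase(cpi_growth=0.028, per_capita_gdp_growth=0.041)
print(f"Maximum new two-axle toll: ${toll * (1 + cap):.2f}")
```

Because the rule takes the greatest of the three escalators, toll growth can outpace inflation whenever per capita nominal GDP grows faster than the CPI, which is one reason tolls on privately operated roads may rise faster than tolls on publicly operated ones.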
In addition to the individual named above, Steve Cohen, Assistant Director; Jay Cherlow; Colin Fallon; Greg Hanna; John Healey; Carol Henn; Bert Japikse; Richard Jorgenson; Maureen Luna-Long; Teague Lyons; Matthew Rosenberg; Michelle Su; Richard Swayze; and James Wozny made key contributions to this report.
The United States is at a critical juncture in addressing the demands on its transportation system, including highway infrastructure. State and local governments are looking for alternatives, including increased private sector participation. GAO was asked to review (1) the benefits, costs, and trade-offs of public-private partnerships; (2) how public officials have identified and acted to protect the public interest in these arrangements; and (3) the federal role in public-private partnerships and potential changes in this role. GAO reviewed federal legislation, interviewed federal, state, and other officials, and reviewed the experience of Australia, Canada, and Spain. GAO's work focused on highway-related public-private partnerships and did not review all forms of public-private partnerships.

Highway public-private partnerships have resulted in advantages for state and local governments, such as obtaining new facilities and value from existing facilities without using public funding. The public can potentially obtain other benefits, such as sharing risks with the private sector, more efficient operations and management of facilities, and, through the use of tolling, increased mobility and more cost-effective investment decisions. There are also potential costs and trade-offs—there is no "free" money in public-private partnerships, and it is likely that tolls on a privately operated highway will increase to a greater extent than they would on a publicly operated toll road. There is also the risk that tolls will be set at levels that exceed the costs of the facility, including a reasonable rate of return, should a private concessionaire gain market power because of the lack of viable travel alternatives. Highway public-private partnerships are also potentially more costly to the public than traditional procurement methods, and the public sector gives up a measure of control, such as the ability to influence toll rates. Finally, as with any highway project, there are multiple stakeholders and trade-offs in protecting the public interest.

The highway public-private partnerships we reviewed protected the public interest largely through concession agreement terms prescribing performance and other standards. Governments in other countries, such as Australia, have developed systematic approaches to identifying and evaluating the public interest and require their use when considering private investments in public infrastructure. While similar tools have been used to some extent in the United States, their use has been more limited. Using up-front public interest evaluation tools can assist in determining the expected benefits and costs of projects; not using such tools may lead to aspects of protecting the public interest being overlooked. For example, while projects in Australia require consideration of local and regional interests, concerns by local governments in Texas that they were being excluded resulted in state legislation requiring their involvement.

While direct federal involvement has been limited to projects with direct federal investment, and while the Department of Transportation has actively promoted highway public-private partnerships, these partnerships may pose national public interest implications, such as effects on interstate commerce, that transcend whether there is direct federal investment in a project. However, given the minimal federal funding in highway public-private partnerships to date, little consideration has been given to potential national public interests in them.
GAO has called for a fundamental reexamination of federal programs to address emerging needs and test the relevance of existing policies. This reexamination provides an opportunity to identify and protect potential national public interests in highway public-private partnerships.
In recent years, the ambulance industry has experienced several changes. In 2002, CMS implemented a new Medicare fee schedule for ambulance services, replacing the previous system that paid providers on a reasonable cost or reasonable charge basis. In addition, according to industry experts, many volunteer providers have reported greater difficulty maintaining adequate staff. Rural providers in particular have begun to rely more heavily on paid staff. Experts also told us that while many rural volunteer providers have not billed Medicare—or have billed nominal amounts—more of these providers have begun billing for services. Recently, both the number of ambulance providers that bill Medicare and the number of ambulance trips paid for by Medicare have increased. From 1998 to 2001, the number of ambulance providers that billed Medicare increased from just under 9,300 to over 9,700, and the total number of trips paid for by Medicare rose from roughly 8 million to over 10 million.

Medicare ambulance providers include a wide variety of provider types. In 1998, about 8,200 freestanding providers and 1,100 hospitals and other institution-based providers billed Medicare for ground trips. Freestanding providers are a diverse group, including private for-profit, not-for-profit, and public entities. They range from small community one-vehicle operations to large fire and rescue departments serving major metropolitan areas. They include operations staffed almost entirely by community volunteers, public ventures that include a mix of volunteer and paid professional staff, and private firms that use only paid staff. In 1998, volunteer staff accounted for 80 percent or more of full-time-equivalent personnel for over one-third of Medicare ambulance providers. About one-third of freestanding Medicare ambulance providers are managed by local fire departments.

Medicare ambulance providers also vary in the types of services they provide. Some deliver only basic life support (BLS) services, while others deliver advanced life support (ALS) services. In addition to responding to emergencies, ambulance providers may provide nonemergency transportation, such as transfers from one hospital to another. For some ambulance providers, nonemergency trips account for a significant share of their trips; for others, such trips account for few or none of their trips. Some ambulance providers are the sole providers serving their communities, while others operate in areas with multiple ambulance providers.

Medicare ambulance providers also differ in the percentage of their trips covered by Medicare and in their reliance on Medicare revenue. In 1998, Medicare beneficiaries on average accounted for about half of the total trips by providers that billed Medicare. However, Medicare beneficiaries accounted for less than one-quarter of trips for 13 percent of Medicare providers and for over 80 percent of annual trips for 9 percent of providers. On average, Medicare revenue accounted for 41 percent of providers' cash receipts. Other sources of ambulance providers' revenue include local tax subsidies and payments from private insurers, Medicaid, and individuals.

Requirements affecting ambulance providers vary by location. States and localities may require certain training for ambulance staff, establish maximum payment rates that licensed providers are allowed to charge, or specify response times through contracts with providers.
Some jurisdictions—such as those that provide financial support to ambulance providers—prohibit providers from billing for services. In addition, some communities require all ambulance providers to maintain ALS capacity on all vehicles.

CMS recently implemented a Medicare fee schedule that changed the way Medicare pays for ambulance services. The fee schedule, mandated by the Balanced Budget Act of 1997 (BBA), recognizes seven levels of ground ambulance services, ranging from BLS services to specialty care transports. (See table 1.) Under the previous payment system, Medicare paid institutional providers on a reasonable cost basis and freestanding providers on a reasonable charge basis. This approach led to wide differences in payments across providers for the same services. The new fee schedule standardized payment rates across provider types by applying the same payment rates to both institutional and freestanding providers. The fee schedule's payment rates are updated annually. Medicare's payment is based on the lesser of the actual charge or the applicable fee schedule amount.

For most ambulance services, the fee schedule payment is the sum of a base payment and a payment for mileage. The base payment for a trip, which is intended to pay for fixed costs such as staff and equipment, reflects both a base rate and a geographic modifier. The base rate varies by the level of ambulance service provided. The geographic modifier, which is applied to 70 percent of the base rate, is intended to account for wage differences across areas. The mileage payment reflects both the length of a trip and the per-mile payment rate. For trips in which the beneficiary is picked up in an urban area, the per-mile rate is $5.53. Because of the fee schedule's rural adjustment, the per-mile rate for rural trips is 150 percent of the urban mileage rate for each of the first 17 miles ($8.30) and 125 percent of the urban mileage rate for miles 18 through 50 ($6.91). The urban mileage rate applies to every mile over 50 miles. The mileage payment applies only to "loaded miles"—the miles the beneficiary is transported by ambulance.

Under the fee schedule, rural areas are defined as areas outside of metropolitan statistical areas (MSAs) and New England County Metropolitan Areas, as well as parts of MSAs that are identified as rural by the Goldsmith modification. MSAs are groups of counties containing a core of at least 50,000 people, together with adjacent areas that have a high degree of economic and social integration with that core. The Goldsmith modification identifies small towns and rural areas within large metropolitan counties that are isolated from central areas by distance or other features, such as mountains. About one-quarter of the roughly 3,100 counties in the United States are in MSAs, and about 75 of those counties have areas that are identified as rural under the Goldsmith modification.

The ambulance fee schedule will be phased in over several years. During this period, payments will be based in part on the fee schedule's service-specific payment rates and in part on the amounts that Medicare would have paid under the prior payment system. The proportion of the payment based on the fee schedule will increase each year until 2006, when provider payments will be based entirely on the fee schedule. In 2003, payments are based on 40 percent of the fee schedule payment and 60 percent of the rates under the prior system.
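The payment mechanics just described can be expressed compactly. In the sketch below, the mileage rates and the 2003 blend percentages come from the text; the base rate, geographic modifier value, and prior-system amount are hypothetical placeholders, and treating the remaining 30 percent of the base rate as unadjusted is our assumption about how the modifier applies.

```python
# Sketch of the Medicare ambulance fee schedule payment described above.
# Mileage rates and the 2003 blend come from the text; the base rate,
# geographic modifier, and prior-system amount are hypothetical.
# (The comparison with the provider's actual charge is omitted for simplicity.)

URBAN_RATE = 5.53        # per-mile rate for urban pickups
RURAL_FIRST_17 = 8.30    # 150 percent of the urban rate, miles 1-17
RURAL_18_TO_50 = 6.91    # 125 percent of the urban rate, miles 18-50

def mileage_payment(loaded_miles: float, rural: bool) -> float:
    """Payment for loaded miles (miles the beneficiary is transported)."""
    if not rural:
        return loaded_miles * URBAN_RATE
    first = min(loaded_miles, 17) * RURAL_FIRST_17
    middle = max(min(loaded_miles, 50) - 17, 0) * RURAL_18_TO_50
    beyond = max(loaded_miles - 50, 0) * URBAN_RATE
    return first + middle + beyond

def base_payment(base_rate: float, geo_modifier: float) -> float:
    # The geographic modifier applies to 70 percent of the base rate; we
    # assume the remaining 30 percent is paid without adjustment.
    return 0.7 * base_rate * geo_modifier + 0.3 * base_rate

def blended_payment_2003(fee_schedule_amount: float, prior_amount: float) -> float:
    # In 2003, payments are 40 percent fee schedule, 60 percent prior system.
    return 0.4 * fee_schedule_amount + 0.6 * prior_amount

# Example: a hypothetical 30-mile rural trip with a $300 base rate.
fee = base_payment(300.00, geo_modifier=0.95) + mileage_payment(30, rural=True)
print(f"Fee schedule amount: ${fee:.2f}")                                 # $520.43
print(f"2003 blended payment: ${blended_payment_2003(fee, 350.00):.2f}")  # $418.17
```

Applying the mileage formula alone to the trip lengths discussed later, an 18-mile rural trip yields about $148 in mileage payment and a 30-mile rural trip about $231, a difference far smaller than the several-fold differences in trip volume across rural counties.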
Trip volume is the major determinant of differences across providers in the average cost per trip. Ambulance providers' total costs primarily reflect readiness—having an ambulance and crew available when emergency calls are received. These readiness-related costs are fixed costs, meaning that they do not increase with the number of trips provided, as long as the provider has the excess capacity to make additional trips. Consequently, providers that can spread these fixed costs across more trips have a lower average cost per trip than providers that make fewer trips.

The majority of ambulance providers' total costs are related to readiness—the need to have an ambulance and crew available when emergency calls are received. Readiness-related costs include the costs of labor, vehicles, building space, and administration, as well as the cost of any back-up vehicles and crew, which constitute a reserve that permits responses to multiple simultaneous calls as well as scheduled maintenance on other vehicles. (See table 2.) Readiness-related costs are fixed, meaning that they do not vary with the number of trips a provider makes, as long as the provider has excess capacity. For example, total vehicle costs do not increase significantly when a provider makes more trips. Likewise, building and administrative costs are largely unaffected by trip volume. However, if a provider were to add another ambulance and crew to respond to higher volume, its fixed costs would rise substantially. In contrast, an ambulance provider's costs for fuel and supplies (such as drugs and oxygen) are variable because they increase with the number of trips. These costs, however, account for a small fraction of ambulance providers' total costs.

Providers that make fewer trips tend to have a higher cost per trip than those that make more trips. Figure 1 illustrates the average relationship between ambulance providers' cost per trip and their total trip volume for providers that made 5,000 or fewer trips. As trip volume increases, the cost per trip decreases. Our statistical analysis considered other factors that affect providers' costs, notably trip length, but trip volume was most strongly related to the cost per trip. In addition, we found that providers surveyed by Project HOPE that averaged 3 or fewer trips per day had an average cost per trip that was nearly twice as high as the cost per trip among those that averaged 9 to 12 trips per day. (See table 3.) Providers that averaged 4 to 8 trips per day had a cost per trip that was 1.3 times as high as the average cost among providers with 9 to 12 trips per day.

Although Medicare's payments generally are higher for trips originating in the least densely populated rural counties than in other counties, the payment differential is probably not large enough to account for the higher costs incurred by low-volume providers likely to serve these areas. Far fewer Medicare-covered ambulance trips are typically provided in rural counties than in urban counties. Trip volume also varies widely across rural counties, with the least densely populated generally having substantially fewer trips than the most densely populated. This suggests that the cost per trip is likely higher for providers serving the least densely populated rural counties.
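The fixed-cost logic above can be made concrete with a toy calculation. The dollar figures here are purely illustrative, not drawn from the Project HOPE survey; the point is only that dividing a fixed readiness cost across fewer trips drives up the average cost per trip.

```python
# Illustrative only: how fixed readiness costs drive the average cost per trip.
# Dollar figures are hypothetical, not Project HOPE survey values.

FIXED_ANNUAL = 400_000.0   # readiness costs: crew, vehicles, building, admin
VARIABLE_PER_TRIP = 40.0   # fuel and supplies, which rise with trip counts

def avg_cost_per_trip(trips_per_year: int) -> float:
    return (FIXED_ANNUAL + VARIABLE_PER_TRIP * trips_per_year) / trips_per_year

for trips_per_day in (3, 8, 12):
    annual_trips = trips_per_day * 365
    print(f"{trips_per_day:>2} trips/day -> ${avg_cost_per_trip(annual_trips):,.2f} per trip")
```

With these illustrative figures, a provider averaging 3 trips per day incurs roughly three times the per-trip cost of one averaging 12, echoing the direction, if not the exact magnitude, of the roughly twofold difference the Project HOPE survey found between the lowest- and highest-volume groups.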
Ambulance providers on average are paid more for trips originating in the least densely populated rural counties than for those in the most densely populated rural counties, but the payment differences are modest and unlikely to reflect the higher cost per trip of low-volume providers. Rural counties, as defined by Medicare’s ambulance fee schedule, tend to have a much lower volume of ambulance trips than counties defined as urban. In 2001, rural counties averaged about 1,200 Medicare-covered trips (both emergency and nonemergency), while urban counties averaged about 9,100 trips. The lower number of trips in rural counties suggests that providers that serve these areas likely have a higher cost per trip than other providers. The difference in the volume of Medicare ambulance trips provided in rural and urban counties largely reflects differences in their population density. Not surprisingly, the number of Medicare ambulance trips in a county is strongly related to its population, with counties with fewer residents having fewer trips. Trip volume is also related to a county’s land area, although to a lesser extent. Population density—the ratio of population to land area—reflects both of these measures. (See table 4.) The number of Medicare ambulance trips provided in rural counties varies markedly with population density, with the least densely populated rural counties tending to have fewer trips than other rural counties. For example, the quarter of rural counties that are the most densely populated, with 52 or more persons per square mile, averaged over 2,200 Medicare trips in 2001. (See table 5.) In contrast, only about 300 Medicare trips, on average, were made in the quarter of rural counties that are the least densely populated, with 11 or fewer persons per square mile. Even fewer Medicare trips—only about 200—were made in frontier counties, which are counties with 6 or fewer persons per square mile. This suggests that the cost per trip is likely higher for providers serving the least densely populated rural counties. The dominant providers in the least densely populated rural counties tend to have far fewer trips than the dominant providers serving other rural counties. Overall, rural counties vary little in the number of providers serving them. However, in most rural counties, one or two providers dominate, delivering the bulk of Medicare trips, with others having a much smaller share. We found that in 2001, about 70 percent of the trips in a rural county were typically supplied by two providers. The number of trips made by these dominant providers varied with counties’ population density. In the quarter of rural counties with the lowest population density, the median number of Medicare trips made by each of the top two providers—in all of the counties they served—was 275. (See table 6.) In contrast, the median number of Medicare trips made by the top two providers was much higher—over 2,100 trips—in the quarter of rural counties that were the most densely populated. Ambulance providers on average are paid 16 percent more for trips originating in the least densely populated quarter of rural counties than for trips in the most densely populated quarter. (See table 7.) Payments for those trips are higher because the trips are generally longer, resulting in a higher mileage payment. In 2001, while trips that began in the most densely populated quarter of rural counties averaged 18 miles, trips in the least densely populated quarter averaged 30 miles. 
The rural adjustment, which provides a higher per-mile rate for the first 50 miles of rural trips, also contributed to the higher mileage payments. The modest difference in Medicare payment across rural counties is dwarfed by the difference in trip volume: The difference in trip volume between the least and most densely populated quarters of rural counties is nearly eightfold. Because trip volume is an indicator of costs, the Medicare payment differences likely do not fully reflect differences across rural counties in providers’ cost per trip. Refining Medicare’s ambulance fee schedule to adequately account for cost differences in providing ambulance services across various geographic areas is important to ensuring beneficiaries’ access to services. Access is a particular concern in rural areas, since providers’ cost per trip is likely to be higher because they provide fewer trips. Moreover, our analysis shows that the cost per trip is likely to be highest in the least densely populated rural counties. While the fee schedule incorporates a rural adjustment to raise payments for trips provided in rural areas, its definition of “rural” is broad. As a result, the fee schedule’s rural payment adjustment does not sufficiently target trips provided in the least densely populated rural counties. In implementing the fee schedule, CMS adjusted the mileage rate for rural trips to account for the higher cost per trip of providers serving rural areas. However, trip volume is a better indicator of providers’ cost per trip than is trip length. Thus, adjusting the base rates for rural trips—the portion of Medicare’s payment that is designed to pay for providers’ fixed costs—is a more appropriate way of accounting for rural low-volume providers’ higher cost per trip than adjusting the mileage rate. To help ensure that Medicare beneficiaries’ access to ambulance services is adequate, we recommend that the Administrator of CMS better target the rural payment adjustment to trips provided in rural counties with particularly low population density by adjusting the base rates, rather than the mileage rate, for ground ambulance services provided in those counties. We received written comments on a draft of this report from CMS. We also received comments from eight ambulance associations: American Ambulance Association, American Hospital Association, Association of Air Medical Services, National Association of State Emergency Medical Services Directors, National Volunteer Fire Council, Rural EMS Advocate, American College of Emergency Physicians, and the National Association of EMS Physicians. CMS stated that the report will be useful as the agency develops a proposed rule to address appropriate payment for ambulance services furnished in rural, low-volume areas. CMS also noted that the report reflects the complexity of the issues and the need for careful analysis to ensure that payment adjustments are made only for those ambulance providers that require additional payment because of their low volume, rather than, for example, inefficiency or competition from another provider. CMS’s comments appear in appendix III. CMS also provided technical comments, which we incorporated as appropriate. The associations that reviewed the draft report generally agreed with our findings and recommendation. All of the associations agreed with the need to address the higher cost of providing ambulance services in rural areas. 
Six agreed that an area's ambulance trip volume reflects its population density, while the remaining two associations did not address this issue. The majority of the associations agreed that CMS should adjust the base rates to recognize the higher cost per trip of providing ambulance services in areas with low population density. However, three associations went further, proposing to use both mileage and base rates to address the higher costs in rural areas. While supporting the principle of paying higher base rates to providers in rural areas where costs are high, the state EMS directors' and EMS physicians' associations were concerned that higher payments for rural providers could be at the expense of other providers.

Four associations raised concerns about using counties as the geographic areas for applying the adjustment. These associations said that a system that used counties would not accurately target rural ambulance payments. Three of these associations noted that, because counties may include both densely and sparsely populated areas, a system that used counties could overpay some providers and underpay others. They proposed using zip codes as the geographic areas for assessing population density and applying the adjustment. The rural ambulance association, in particular, also advocated the use of multiple rural categories based on population density to adjust payments for rural trips. The fourth association emphasized the need for a system that ensures that all areas with sufficiently low population density are eligible for an appropriate payment adjustment.

GAO and the ambulance associations agree on the need to adjust payments for rural trips; with respect to trips in low population density areas, we believe the adjustment should be applied to the base rate. We believe that the mileage rate for any trip, rural or urban, is best suited to compensating ambulance providers for costs that vary with trip length. As stated in the report, a base rate adjustment is a more appropriate way of accounting for rural low-volume providers' higher costs per trip because base rates reflect fixed costs, and because trip volume is a better indicator of providers' cost per trip than is trip length.

With respect to possible payment reductions for other providers, implementing our recommendation could have this effect. If a revised rural adjustment is implemented in a way that keeps total Medicare expenditures the same, some providers could face lower payments.

With respect to the geographic unit used to identify trips for the rural adjustment, we agree that, since counties are relatively large geographic units, it is possible for trips in some areas to be overpaid and others underpaid. Moreover, in principle, a rural classification system that uses a smaller geographic unit, such as zip codes, might better target payments to trips in areas with low population density. Yet our analysis indicates that zip codes do not explain variation in trip volume as well as counties. Further, county boundaries tend to be more stable over time than zip code boundaries. In addition, a variety of technical difficulties hinder the use of zip codes for ambulance payments, including the absence of zip codes for some rural areas.
With respect to multiple adjustment categories, we did not address whether there should be a single adjustment amount or multiple amounts to reflect differing levels of population density. A decision on single or multiple categories would require balancing increased precision with increased complexity.

We are sending copies of this report to the Administrator of CMS, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. This report is also available at no charge on GAO's Web site at http://www.gao.gov. If you or your staffs have any questions, please call me at (202) 512-7114. Other GAO contacts and staff acknowledgments are listed in appendix IV.

1999 National Survey of Ambulance Providers. To identify the factors that influenced ambulance provider costs, we used the 1999 National Survey of Ambulance Providers. This survey, conducted by the Project HOPE Center for Health Affairs under the sponsorship of the American Ambulance Association, is the only nationally representative source for ambulance providers' costs. Project HOPE selected a stratified random sample of providers that had billed Medicare in 1997, obtained 421 completed questionnaires, and reported a response rate of 56 percent. The survey included questions on costs, total number of trips by type of service, geographic location, and total mileage.

We took several steps to ensure that the Project HOPE data were suitable for our analysis. We examined the accuracy and completeness of the data by testing for implausible values and internal consistency. In addition, we questioned an anomalous result in Project HOPE's initial analysis of its data, which raised concerns about the credibility of the data: emergency advanced life support (ALS) trips cost less than nonemergency basic life support (BLS) trips. In response, Project HOPE provided us with information about its subsequent analysis, which showed the expected result—ALS trips cost more than BLS trips, after controlling for providers' volume. This result resolved our major concern about the data.

We limited our analysis of the factors affecting differences in providers' costs to full cost providers—those providers that paid for 80 percent or more of their staff and paid for 80 percent or more of their office and garage space. The costs reported by these providers are more likely to reflect the full cost of providing ambulance services. We also excluded ambulance providers that were part of fire departments, because about half could not separate ambulance costs from other costs. Finally, we excluded one provider that reported implausible values. After these exclusions, we had 114 cases for analysis. Certain analyses that did not pertain to all full cost providers used a smaller number of cases. (See tables 8 and 9.)

Area Resource File. The Area Resource File (ARF), which is maintained by the Health Resources and Services Administration (HRSA), is a county-based health resources information database that contains data from many sources, including the U.S. Census. From the 2001 ARF, we obtained county data on land area in 1990 and total population in 2000, which we used to calculate population density. We also obtained data on the number of persons age 65 and over in each county in 1999, which we used as a proxy for Medicare beneficiaries. The ARF is a standard data source that is well documented and widely used, so we did not independently verify its accuracy or completeness.
Medicare claims files. We used Medicare claims data to determine the volume and length of all ground-based Medicare-covered trips, as well as Medicare's payments for those trips. We used the 2001 national claims history 100 percent nearline file for physicians and suppliers to identify claims for ambulance services by freestanding providers, and the 2001 outpatient 100 percent standard analytic file to identify claims for ambulance services by institutional providers. We used the zip code of the beneficiary's primary address as a proxy for the point where the ambulance picked up the beneficiary because the point of pickup is not recorded in the 2001 data. Although we did not independently verify the reliability of the national claims files, we screened the files and excluded claims that were denied, claims that were superseded by an adjustment claim, and claims for services in other years. We retained all final claims for 2001.

Provider interviews. To gain an understanding of the ambulance industry, we interviewed experts from eight industry and professional organizations. We also interviewed several individual ambulance providers.

Factors affecting ambulance providers' costs. To examine the effect of selected factors on ambulance providers' costs, we analyzed the Project HOPE survey data using a simplified version of a model reported by Project HOPE. In our model, the natural logarithm of total costs is a function of the number of trips, the number of trips squared, and the proportion of the total trips that are ALS. We tested a number of additional terms, including length of trips, but they were all either statistically insignificant or significant but with very small effects. We restricted our model to providers with 5,000 or fewer total trips per year because we were primarily interested in rural providers, which generally have fewer trips. However, our sensitivity analyses showed that the results were broadly similar when the model was applied to all full cost providers. Our model has an adjusted R² of 0.48, indicating that the model explains 48 percent of the variance in costs. In general, when trip volume declines, the estimated cost per trip increases, although less than proportionately. That is, a 10 percent decrease in trip volume is associated with an increase in cost per trip of less than 10 percent.

Analysis of variation in factors affecting costs across geographic areas. To examine differences between urban and rural areas in factors affecting ambulance costs, we grouped counties with similar characteristics. We followed CMS in classifying counties in metropolitan statistical areas (MSA) as urban counties and counties outside MSAs as rural. However, for our analysis we did not apply the Goldsmith modification that CMS uses to identify as rural certain areas within MSAs. These rural areas are typically small, so we did not treat them as rural counties because that would distort our urban and rural comparisons. Our sensitivity analyses determined that our findings would have been generally the same if we had considered these areas as rural counties, although in some cases the differences between urban and rural counties would have been heightened. To examine differences among rural counties, we grouped them based on their population density. Population density—the ratio of population to land area—is a commonly used measure of rurality.
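The cost model described above can be written as ln(total costs) = f(trips, trips², ALS share). The sketch below fits that specification with statsmodels; because the Project HOPE microdata are not reproduced here, it runs on simulated provider data with invented coefficients, and the variable names are ours.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the 114 full cost providers: annual trips,
# the proportion of trips that are ALS, and log total costs.
rng = np.random.default_rng(0)
n = 114
trips = rng.integers(200, 5001, size=n)
als_share = rng.uniform(0.0, 1.0, size=n)
log_costs = (10.0 + 9e-4 * trips - 8e-8 * trips**2
             + 0.3 * als_share + rng.normal(0, 0.4, size=n))
df = pd.DataFrame({"trips": trips, "als_share": als_share,
                   "log_costs": log_costs})

# ln(total costs) as a function of trips, trips squared, and ALS share,
# restricted to providers with 5,000 or fewer trips per year.
model = smf.ols("log_costs ~ trips + I(trips**2) + als_share",
                data=df.query("trips <= 5000")).fit()
print(model.rsquared_adj)  # the report's model explained 48 percent

# Implied average cost per trip at a given volume and case mix.
point = pd.DataFrame({"trips": [1000], "als_share": [0.5]})
cost_per_trip = np.exp(model.predict(point).iloc[0]) / 1000
print(f"estimated cost per trip at 1,000 trips: ${cost_per_trip:,.0f}")
```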
We used population density to group counties into quartiles, and then divided the least densely populated quartile of rural counties into frontier counties—those with six or fewer persons per square mile—and nonfrontier counties, because of our interest in the most sparsely populated rural areas. Using this grouping, we found that ambulance trip volume decreased steadily from the most densely populated rural counties to the least densely populated.

We also examined several other classification systems: urban influence codes (UIC), which classify counties based on each county's largest city and its proximity to other areas with large, urban populations; rural-urban continuum codes (RUCC), which classify metropolitan counties by the size of the urban area and nonurban counties by the size of the urban population and proximity to a metropolitan area; and rural-urban commuting areas (RUCA), which classify census tracts using patterns of urbanization, population density, and daily commuting patterns, and then map the census tracts into zip codes. These systems are more complex than the system we used, and we found that they did not help explain variation in trip volume as well as counties grouped by population density.

To confirm the effect of population density on trip volume, we did several additional analyses. We regressed counties' annual volume of Medicare trips (expressed as natural logarithms) on population and land area (expressed as natural logarithms). Population had a positive effect on the number of trips, while land area had a negative effect. An increase of 1 percent in population increased the number of trips by about 1 percent in a county, while an increase of 1 percent in land area decreased the number of trips by about 0.1 percent. Population density combines the two effects: a 1 percent increase in population density increases the number of trips by 0.7 percent. The adjusted R² is a measure of the proportion of the variation in the dependent variable (the natural logarithm of trips) accounted for by the independent variables (the natural logarithms of land area and population).

Total population density is strongly related to Medicare population density. (See table 10.) For example, 525 rural counties with the lowest total population density were also lowest in terms of Medicare population density. In total, 83 percent of all rural counties were in the same density quartile, regardless of whether total population or Medicare population was used to group rural counties. Our results with respect to county characteristics and ambulance services would have been similar had we used Medicare population density to group counties rather than total population density. (See tables 11, 12, and 13.)
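The county-level regression described above is a log-log specification whose coefficients are elasticities. A minimal sketch follows, again on simulated data generated to mimic the reported magnitudes (an elasticity of about 1 for population, about -0.1 for land area, and about 0.7 for population density).

```python
import numpy as np
import statsmodels.api as sm

# Simulated rural counties with log-normal population and land area.
rng = np.random.default_rng(1)
n = 2300
log_pop = rng.normal(9.5, 1.0, size=n)
log_area = rng.normal(6.5, 0.7, size=n)
log_trips = -3.0 + 1.0 * log_pop - 0.1 * log_area + rng.normal(0, 0.8, size=n)

# ln(trips) on ln(population) and ln(land area).
X = sm.add_constant(np.column_stack([log_pop, log_area]))
print(sm.OLS(log_trips, X).fit().params)   # approx [-3.0, 1.0, -0.1]

# Density-only specification: ln(trips) on ln(population / land area),
# i.e., on log_pop - log_area.
Xd = sm.add_constant(log_pop - log_area)
print(sm.OLS(log_trips, Xd).fit().params)  # density elasticity near 0.7
```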
Major contributors to this report were Martha Kelly, Robin Burke, Eric Wedum, Michael Kendix, and Jessica Farb.

Ambulance Services: Changes Needed to Improve Medicare Payment Policies and Coverage Decisions. GAO-03-244T. Washington, D.C.: November 15, 2001.

Rural Ambulances: Medicare Fee Schedule Payments Could Be Better Targeted. GAO/HEHS-00-115. Washington, D.C.: July 17, 2000.
The Centers for Medicare & Medicaid Services (CMS) recently implemented a Medicare ambulance fee schedule in which providers are paid a base payment per trip plus a mileage payment. An adjustment is made to the mileage rate for rural trips to account for higher costs. CMS has stated that this rural adjustment may not sufficiently target providers serving sparsely populated rural areas. The Medicare, Medicaid, and SCHIP Benefits Improvement and Protection Act of 2000 (BIPA) directed GAO to examine rural ambulance costs. GAO identified factors that affect ambulance costs per trip, examined how these factors varied across geographic areas, and analyzed whether Medicare payments account for geographic cost differences. GAO used survey data on ambulance providers and Medicare claims data. Trip volume is the key factor affecting differences in ambulance providers' cost per trip. Ambulance providers' total costs primarily reflect readiness--the need to have an ambulance and crew available when emergency calls are received. Readiness-related costs are fixed, meaning that they do not increase with the number of trips provided, as long as a provider has excess capacity. As a result, providers that make fewer trips tend to have a higher cost per trip than those that make more trips. We also found that the length of providers' trips had little effect on their cost per trip. The modest variation in Medicare payments to ambulance providers that serve rural counties probably does not fully reflect their differences in costs because the key factor affecting provider costs--the number of trips--varies widely across rural counties. In 2001, the least densely populated quarter of rural counties averaged far fewer trips than the most densely populated quarter. This suggests that the cost per trip is likely higher for providers serving the least populated rural counties. On average ambulance providers are paid somewhat more for trips in the least densely populated rural counties than for those in other rural counties. However, those payment differences are dwarfed by the difference in trip volume. Because trip volume is a strong indicator of costs, the Medicare payment differences across rural counties likely do not fully reflect differences in providers' cost per trip. In implementing the fee schedule, CMS adjusted the mileage rate for rural trips to account for the higher cost per trip of providers serving rural areas. However, trip volume is a better indicator of providers' cost per trip than is trip length. Thus, adjusting the base rates for rural trips--the portion of Medicare's payment that is designed to pay for providers' fixed costs--is a more appropriate way of accounting for rural low-volume providers' higher cost per trip than adjusting the mileage rate.
Third-party information reporting dramatically increases the accuracy of tax returns. Third parties—employers, banks, and others—report wages, interest, and other information to both taxpayers and IRS. An IRS study of individual tax compliance found that in tax year 2006, taxpayers accurately reported over 90 percent of income with substantial information reporting requirements, such as interest and dividend income. In contrast, the same study found taxpayers accurately reported only 44 percent of income subject to little or no information reporting, such as nonfarm sole proprietor income.

There are more than 40 different types of information returns, 25 of which are directly matched against income tax returns filed by individuals. According to IRS's analysis of tax year 2009 data, most taxpayers receive at least one of three information return types: Form W-2, Wage and Tax Statement; Form 1099-G, Certain Government Payments; or Form 1098, Mortgage Interest Statement. Of the information returns IRS receives, a few types accounted for the majority of returns in tax year 2011, as shown in figure 1.

Consistent with IRS's practice of issuing refunds promptly after a return is filed, IRS issued 50 percent of 2012 refunds for tax year 2011 to individual taxpayers by the end of February, at which point only 3 percent of all information returns had been received. By April 19, 2012, IRS had issued 82 percent of refunds to individual filers but had received only 30 percent of all information returns. By August 2, 2012, when IRS completed its first match of information return data to tax returns, IRS had issued 92 percent of refunds to individual taxpayers, as shown in figure 3.

Figure 4 expands upon figure 3 to show that IRS did not receive any type of information return in significant numbers until March 2012. Of the 25 types of information returns represented in figure 4, IRS had received more than 30 percent of the submissions by March 1, 2012, for one form, Form 1099-G. For all other types, IRS had received less than 15 percent of submissions by March 1, 2012.

As shown in figure 4, IRS receives many information returns after their original due dates. To begin matching earlier in the year, one option is to move up information return due dates. However, without an understanding of why information returns are filed after they are due, simply moving the dates may not be effective. As discussed below, IRS frequently grants extensions to information return providers. Data from IRS did not indicate what proportion of the information returns arriving after due dates was attributable to filing extensions, but an official told us that the agency approved 371,000 requests for filing extensions from information return providers for tax year 2011. According to IRS, information providers request extensions for reasons that include complex IRS regulations, changes in tax laws, and changes in information return forms. In focus groups held by IRS, some providers of more complex return types noted they automatically request a filing extension to allow time for taxpayers to review their copy of the information return and notify the provider of any needed corrections. Some providers said that they are hesitant to provide returns to IRS before hearing from taxpayers because IRS can levy penalties on providers that file forms with incorrect information.

Moving information return due dates could affect the volume of information return amendments.
Amendments are changed information returns resubmitted to IRS, and the amendment rate is the percentage of total information return volume attributed to amended, duplicate, or corrected returns received by IRS. In focus groups held by IRS, some information return providers noted that accelerating submission due dates for complex information returns would create challenges, as they often need to correct information in consultation with information return recipients before submitting data to IRS. We did not assess the volume or timing of corrections made by providers before sending information returns to IRS.

On average, information returns had an amendment rate of less than 0.4 percent. Only 3 of the 25 information returns we reviewed had amendment rates greater than 2 percent. While amendment rates for most information returns were low for tax year 2011, they still represented millions of returns. For example, the amendment rate for Form W-2 was only 0.94 percent, but this represented over 2 million of the 213 million W-2s for tax year 2011 that IRS received. One reason for low amendment rates may be providers' fear of penalties, as discussed above. Appendix IV provides additional information on amendment rates.

For 2010 income tax returns, which were filed starting in 2011, IRS took over 1 year (388 days), on average, to notify taxpayers about AUR discrepancies. This is the elapsed time between when the taxpayer filed his or her income tax return and when IRS issued the first notice. The longest elapsed time was 763 days, just over 2 years. As a consequence, taxpayers may not be notified about a potential error until after filing the following year's return. IRS officials acknowledge that such a delay may be a burden to taxpayers and said they are interested in pursuing Real Time Tax in order to improve the taxpayer experience. The officials noted that the longer the time between filing an income tax return and receiving a notice, the harder it is for some people to locate the records or other information needed to understand the discrepancy, as well as to respond to IRS and resolve the issue. In addition, if taxpayers spent their refund or tax savings, they may not have funds set aside for an unexpected tax debt. Finally, penalties and interest may have accumulated, which increases the amount due. One of the goals of Real Time Tax would be to reduce the number of taxpayers facing these burdens by detecting and resolving discrepancies before refunds are issued.

IRS identified nearly 24 million 2010 income tax returns with discrepancies. According to IRS data, IRS selected about 22 percent (or 5.3 million) of these discrepant returns for review. After the review, IRS sent at least one notice to over 4 million of these taxpayers informing them of the discrepancy. Figure 5 shows timelines for 2011 return processing, matching, and taxpayer notification based on the first match at the end of July. The left side of the graphic illustrates when IRS and taxpayers receive most information returns, and the right side illustrates when IRS begins notifying taxpayers of discrepancies.

IRS officials and the planning documents they have developed make it clear that Real Time Tax is an exploratory effort. IRS officials emphasized that they have made no final decisions regarding Real Time Tax and that they do not perceive it as a "project" at this time.
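Two of the figures above are simple derived measures, shown here for concreteness. The dates in the elapsed-time example are hypothetical and chosen only to reproduce the 388-day average.

```python
from datetime import date

# Amendment rate: amended, duplicate, or corrected returns as a share of
# total volume, using the W-2 figures cited above ("over 2 million" of
# 213 million).
print(f"W-2 amendment rate: {2_000_000 / 213_000_000:.2%}")  # about 0.94%

# Elapsed time from filing to first AUR notice. The dates below are
# hypothetical; 388 days was the tax year 2010 average.
filed = date(2011, 4, 15)
first_notice = date(2012, 5, 7)
print(f"days to first notice: {(first_notice - filed).days}")  # 388
```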
As part of the exploratory effort, IRS is collecting data to understand the potential disruptions to taxpayers and information return providers that could result should IRS implement a Real Time Tax system. Officials are also seeking to understand whether doing so would be an improvement over current processes, particularly AUR and accelerated matching for wage and withholding data. IRS is conducting tests to determine potential benefits, including whether IRS could match information returns at the time taxpayers file their tax returns in a Real Time Tax environment.

IRS officials are analyzing six key questions they see at the heart of the Real Time Tax concept. For each question, IRS has begun developing a list of options and is assessing them. (See table 1.) Officials stressed that they have made no final decisions regarding which options, if any, they may pursue. Each option presents challenges to IRS as it considers how a Real Time Tax system might operate. For example, as noted previously, moving information return due dates to earlier in the filing season may affect amendment rates for some types of information returns.

IRS is exploring options using a three-phased approach that began in 2011. IRS focused its Phase 1 efforts on envisioning the taxpayer experience, incorporating input from internal and external stakeholders. During this phase, IRS held focus group meetings with stakeholders and developed a conceptual model for Real Time Tax. Phase 2 is exploring options for how business processes might change and includes a gap analysis that considers how business operations work now versus how they might work if Real Time Tax were implemented. IRS officials told us they anticipate completing Phase 2 by the end of April 2013. If IRS proceeds to Phase 3, it will focus on determining what operational and information technology infrastructure changes will be needed. As documented in the draft Conceptual Future Operating Model, Phase 3 would include developing a roadmap for the proof of concept and implementation; comparing Real Time Tax work streams with IRS's long-term enterprise roadmap; and evaluating competing demands for resources and facilitating decision making around key investments supporting Real Time Tax.

Figure 6 shows that IRS is following four of the six leading practices we used as criteria for assessing IRS's exploratory efforts, and plans to implement the other two practices. Appendix III explains the practices in more detail and the sources we used to develop the leading practices. These criteria are drawn from our previous reports, IRS policy guidance, and other sources.

The Commissioner's Office created a team to lead IRS's Real Time Tax exploratory efforts. The team is composed of Real Time Tax executives from the Wage and Investment Division and Information Technology organization; a core team of senior staff who work under the direction of the Real Time Tax executives to lead the exploratory effort; a support network of subject matter experts throughout IRS; and contractors. While some core team members and subject matter experts have transitioned on and off the effort to meet other organizational needs, IRS officials and core team members said that they have developed strategies to ensure the effort's leadership is consistent and that it has the necessary knowledge and skills.
Core team members and other IRS officials confirmed that IRS provides overlapping assignments so that team members transitioning off the core team can brief incoming members on the history and progress of the effort. As discussed in more detail below, the core team has also identified external and internal stakeholders. (See fig. 7.)

Key IRS stakeholders identified a vision and program goals for Real Time Tax that link to IRS's mission and consider taxpayer burden. For example, the draft Conceptual Future Operating Model documents IRS's vision to reduce taxpayer burden, improve compliance, and increase efficiency. IRS documents specify the following guiding principles and overall program goals for Real Time Tax:

Minimize taxpayer burden by including measures to simplify the tax filing experience and not introducing changes that increase the burden for the majority of taxpayers.

Build for the future by developing Real Time Tax with a long-term focus and implementing it with a phased approach that provides incremental benefits in the near term.

Leverage tax industry partners to help drive a successful Real Time Tax model.

Mitigate vulnerabilities to identity theft and fraud to reduce the risk of IRS issuing credits and refunds to identity thieves and other fraudsters.

IRS has particularly emphasized the importance of minimizing taxpayer burden. In addition to identifying it as a guiding principle, officials said that, if they proceed, they will use statistical modeling to better understand the Real Time Tax system's potential effects on taxpayers. The guiding principles outlined in the draft Conceptual Future Operating Model link closely with IRS's mission to provide taxpayers top-quality service while enforcing the law and meeting the agency's performance goals and objectives. Officials told us the core team has provided input for IRS's next strategic plan.

IRS officials have begun planning for performance measurement and have taken steps such as conducting a baseline analysis, hypothesis testing, and considering performance measures. For example, IRS conducted a baseline analysis to understand the current volume, timing, filing patterns, and concentrations of tax filing activity for tax year 2009. During Phase 2, officials said that the core team is using statistical modeling to test possible Real Time Tax models to better understand their potential effects. These tests will help them understand whether they can match information return data at the time individuals file their tax returns and the extent to which IRS can notify taxpayers of a discrepancy at the time of filing. Officials noted that they are still determining what data will be needed to assess Real Time Tax, and whether such a system can provide benefits greater than those IRS and taxpayers receive under the current system. Officials consider it too early in the exploratory effort to define performance outcomes, but anticipate doing this as Real Time Tax exploration and planning progress.

IRS documented time frames for its communication strategy and for evaluating matching capabilities, but has not developed a timeline for the exploratory effort's critical phases and essential activities. The draft Communications Strategy and Plan documents a high-level timeline for implementing IRS's stakeholder communications strategy. IRS also documented time frames for live tests that began in March 2013 that will help officials understand whether IRS can match information return data to simple tax returns at the time of filing.
However, while officials noted that the core team has a general idea about how long each planning phase should take, IRS has not developed a timeline or planned interim milestones for its overall exploratory effort. Officials noted that IRS management views Real Time Tax as a broad goal, and the core team did not set milestone dates to avoid causing concern among the stakeholder community that IRS has already decided on a path for Real Time Tax. Also, officials said that they want to conduct additional tests before planning next steps and then use the results of those tests to help determine if IRS should proceed with Phase 3.

As we have stated in prior reports, the demand for transparency and accountability is a fact that needs to be accepted in an exploratory effort of this magnitude. Establishing a timeline that includes critical phases and essential activities that need to be completed by particular dates to achieve results is important for accountability and success in implementing a new exploratory effort. A full range of stakeholders and interested parties, including Congress, are concerned not only with what results are to be achieved, but also which processes are to be used to achieve those results. Also, an exploratory effort can build momentum internally and externally by demonstrating progress towards these goals.

We recognize that IRS's planned time frames and milestones may evolve as it learns from its exploratory efforts. Also, it may not be feasible for IRS to develop a detailed timeline for the Real Time Tax initiative, as planning efforts are still ongoing and officials have not decided whether to pursue Real Time Tax or how to structure it. At this early stage, it may make sense for IRS to identify contingency-based time frames rather than firm dates for the exploratory effort. However, having a documented timeline that identifies critical phases and milestones for essential activities—such as time frames for developing the "proof of concept" in Phase 3—could help IRS and Congress assess the progress of the exploratory efforts. Without a timeline for the overall exploratory effort, planning may go on endlessly, and IRS cannot know if its efforts are on track or will be completed in even the broad time frames IRS is considering. In addition, Congress may not be able to determine what legislative action may be needed.

IRS officials stated that managing risk is a high priority for IRS, but while they have documented potential risks to Phase 2 testing, they have not developed an overall risk management framework for Real Time Tax. A risk management framework helps ensure that managers systematically identify, analyze, and manage risks. A documented risk management framework articulates what managers have done to analyze the consequences of identified risks and the likelihood they will occur, as well as to assess alternatives to mitigate risk. Leading practices suggest that a risk management framework should be developed early so that relevant risks are identified and managed, and that the framework should evolve and be reviewed on an ongoing basis. Such a framework could help IRS identify and mitigate the risks stemming from options it is considering for Real Time Tax, including the risks of moving information return due dates and communicating electronically with taxpayers. Officials stated they have not yet developed a risk management framework because they are still in the early stages of their exploratory efforts.
Nevertheless, an IRS official stated that officials discuss potential risks for Real Time Tax regularly and that they are developing descriptions of “pros and cons” of the different approaches. IRS officials plan to further develop the risk strategy if IRS decides to pursue Real Time Tax. Without systematically identifying and evaluating the risks of Real Time Tax options, IRS officials may miss critical factors that could complicate the effort, including the potential costs to IRS, taxpayers, and other stakeholders. Furthermore, without a documented record of risk discussions, IRS may lose its knowledge of what risks and mitigations have been analyzed. For an effort that cuts across as many IRS functions as Real Time Tax and will likely take years to implement, a record of prior risk analyses could help prevent unnecessarily repeating the same analyses. IRS developed a communication strategy that identifies internal and external stakeholders, defines stakeholder communication needs, identifies communication media, and describes how IRS plans to communicate its Real Time Tax efforts to stakeholders. In its draft Communications Strategy and Plan, IRS developed talking points to describe its vision for Real Time Tax and to explain what a possible Real Time Tax system does not include. For example, IRS officials have stated that Real Time Tax will not involve a prefill option (where IRS prepopulates tax returns) or replace all compliance activity that occurs after the filing season, such as AUR. The draft Communications Strategy and Plan also states that IRS will work collaboratively with external stakeholders to outline the vision for Real Time Tax. To obtain the views of external stakeholders on potential frameworks for Real Time Tax, IRS held two public meetings and six focus groups involving individuals representing consumer groups, tax return preparers, the software industry, oversight agencies, payroll providers, and state revenue departments. IRS plans to continue activities aimed at increasing the public’s awareness and understanding of the Real Time Tax exploratory effort. These activities may include responding to media inquiries, posting information to the IRS website, and sending IRS officials to speaking engagements. Real Time Tax has the potential to provide substantial benefits, including reducing taxpayer burden and improving compliance by moving some information matching earlier in the tax season. However, it also may require significant and possibly costly changes to tax administration and impose new burdens on third parties. Careful consideration of risks and alternatives for mitigating those risks is crucial in weighing the potential benefits and costs of Real Time Tax options. While IRS has taken important steps in exploring the feasibility of Real Time Tax, much remains unknown because the exploratory effort is still underway. IRS has not yet developed time frames for the exploratory effort’s critical phases and essential activities, and we anticipate IRS may revise time frames as it obtains new information from its exploratory efforts. In addition, IRS has not created a risk management framework, which would provide valuable information about potential costs and benefits to IRS management. 
Given the potential scope of a Real Time Tax system, both agreed-upon time frames and a record of risk management considerations are likely to be important management tools that will help inform IRS management's decisions about the future of Real Time Tax and help Congress oversee IRS's efforts.

Recognizing IRS's exploratory efforts are in their early stages and the Real Time Tax concept will likely evolve over time, we recommend the Acting Commissioner of Internal Revenue take the following actions to help ensure managers are able to assess the progress of exploratory efforts and have the information needed to weigh the potential risks, costs, and benefits of options:

Identify time frames for the Real Time Tax exploratory effort's critical phases and essential activities.

Develop a risk management framework for Real Time Tax that includes a record of risk analyses.

We provided a draft of this report to the Acting Commissioner of Internal Revenue for comment. In written comments, reproduced in appendix V, IRS agreed with our recommendations. IRS said that as it continues to engage stakeholders and explore the Real Time Tax concept, it will identify time frames for critical phases and key activities and develop a risk management framework.

We are sending copies of this report to the Acting Commissioner of Internal Revenue. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact James R. White at (202) 512-9110 or whitej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI.

This appendix describes our methodology for addressing the following objectives: (1) describe when the Internal Revenue Service (IRS) receives and matches individual tax and information returns and (2) assess the extent to which IRS is following leading practices for managing an exploratory effort of this importance at IRS.

To describe when IRS receives and matches individual tax and information returns, we reviewed IRS documents and guidance, including the Internal Revenue Manual and IRS information return forms. We limited the scope of our review to the Form 1040 series and the 25 information returns IRS officials said would likely be most relevant to matching to individual income tax returns under a Real Time Tax system. We list these information returns in appendix II. Due to the manner in which IRS's Compliance Data Warehouse (CDW) consolidates data for Forms SSA-1099 and RRB-1099, we analyzed the combined data for these two returns. These two types of returns collectively accounted for 3.8 percent (59 million) of the 1.6 billion information returns received by IRS for tax year 2011.

We generated descriptive statistics by accessing selected data elements from the CDW database, which provides a variety of tax return, enforcement, compliance, and other data. To develop information related to return volume, timing of return receipts and amendments, and refund issuance, we analyzed data for tax year 2011 returns, as this is the most recent year for which relatively complete data are available. To determine when tax returns were received by IRS, we used the cycle posting date, when IRS posts tax return data to the master file, as it represents when the tax return data are available for matching. Officials noted that IRS must cleanse the data prior to posting to IRS systems.
This may include identifying and correcting incomplete or inaccurate data before posting the data to IRS systems. To develop information related to the elapsed time between matching information returns to income tax returns and when IRS issued the first notice of discrepancy to taxpayers, we analyzed data for tax year 2010, as this is the most recent year for which IRS has completed the three phases of its matching process.

We assessed the reliability of CDW data by (1) performing electronic or manual testing of required data elements to identify obvious errors, (2) reviewing existing information about the data and the system that produced them, and (3) interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of this report.

We identified leading practices from sources including GAO, Standards for Internal Control in the Federal Government, GAO/AIMD-00-21.3.1 (Washington, D.C.: Nov. 1, 1999), and discussed them with IRS officials during the course of our audit work; IRS agreed with our approach. Officials noted that they do not yet consider Real Time Tax a "project" and have not decided whether to pursue Real Time Tax. A description of the leading practices is detailed in appendix III.

We conducted this performance audit from August 2012 to June 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Form 1098: Filed by lenders to report mortgage interest of $600 or more. Certain points (prepaid interest on a mortgage loan) are also reported if the points, plus other interest on the mortgage, are $600 or more.

Form 1098-E: Filed by lenders to report student loan interest of $600 or more received.

Form 1098-T: Filed by eligible educational institutions and insurers (who make reimbursements or refunds) to report payments received or amounts billed for qualified tuition and related expenses.

Form 1099-A: Issued by lenders who acquire an interest in property that was security for a loan, or who know such property has been abandoned, to report income or loss.

Form 1099-B: Filed to report proceeds from broker and barter exchange transactions.

Form 1099-C: Filed by lenders to report cancelled debt of $600 or more.

Form 1099-DIV: Issued by banks and other financial institutions to report dividends and other distributions.

Form 1099-G: Filed by federal, state, and local government units to report payments of: unemployment compensation; state or local income tax refunds, credits or offsets; taxable grants; and/or agricultural payments.

Form 1099-INT: Filed to report interest income or U.S. Savings Bond and Treasury obligation interest of $10 or more, withholding for foreign taxes paid on interest, and backup withholding.

Form 1099-K: Issued by payment settlement entities to report merchant card payments and third-party network payments.

Form 1099-LTC: Filed by insurance companies, government units, and other providers to report long-term care benefits.
Form 1099-MISC: Used to report miscellaneous income, such as: royalties or broker payments in lieu of dividends or tax-exempt interest of $10 or more; $600 or more in rents, services (including parts and materials), prizes and awards, medical and health care payments, crop insurance proceeds, cash payments for fish (or other aquatic life); fishing boat proceeds; gross proceeds paid to an attorney; direct sales of at least $5,000 of consumer products for resale from other than a permanent retail establishment; payments to independent contractors; directors' fees; commissions paid to lottery ticket sales agents; and backup withholding.

Form 1099-OID: Filed by financial institutions, brokers, and other entities to report the original issue discount (the excess of an obligation's stated redemption price at maturity over its issue price) includible in gross income of at least $10.

Form 1099-PATR: Filed by cooperatives to report payments of $10 or more in patronage dividends and other distributions, as well as any federal backup withholding.

Form 1099-Q: Filed by states or eligible educational institutions to report earnings or distributions from qualified tuition programs and Coverdell Education Savings Accounts.

Form 1099-R: Used to report distributions of $10 or more from pensions, annuities, individual retirement arrangements (IRAs), survivor income benefit plans, charitable gift annuities, and profit-sharing and retirement plans.

Form 1099-S: Used to report the sale or exchange of real estate.

Form 1099-SA: Used to report distributions from a health savings account, Archer Medical Savings Account, or Medicare Advantage Medical Savings Account.

Form RRB-1099: Filed by the Railroad Retirement Board (RRB) to report Tier 1 railroad retirement benefits (the benefits railroad employees or beneficiaries would have been entitled to receive under the Social Security system) and special guaranty benefit payments.

Form SSA-1099: Filed by the Social Security Administration (SSA) to report Social Security benefits.

Form 5498: Filed by the trustee or issuer of IRAs to report contributions, including any catch-up contributions, and the fair market value of the account.

Form 5498-ESA: Used to report contributions, including rollover contributions, to any Coverdell Education Savings Account.

Form 5498-SA: Filed by a trustee or custodian of health savings accounts, Archer Medical Savings Accounts, and Medicare Advantage Medical Savings Accounts to report contributions, rollovers, and fair market value.

Form W-2G: Filed to report certain gambling winnings and any federal income tax withheld on those winnings.

Form W-2: Filed by employers to report wages paid to each employee from whom income, Social Security, or Medicare tax was withheld or from whom income tax would have been withheld if the employee had claimed no more than one withholding allowance or had not claimed exemption from withholding on Form W-4 (Employee's Withholding Allowance Certificate).

The types of payments reportable on a 1099-MISC and their reporting thresholds vary widely. These include payments to nonemployees for services of at least $600 (called nonemployee compensation), royalty payments of $10 or more, and medical and health care payments made to physicians or other suppliers (including payments by insurers) of $600 or more. If reporting substitute payments in lieu of dividends or interest or gross proceeds paid to an attorney, the due date was February 15, 2012. Provider must have furnished fair market value information and required minimum distribution, if applicable, to participants by January 31, 2012.
Paper returns were due to the Social Security Administration by February 29, 2012, and electronic returns were due by April 2, 2012.

We have identified a number of leading practices for planning new initiatives at the Internal Revenue Service. Table 2 lists these leading practices and the sources used to develop them. As the table makes clear, we have applied these leading practices for a decade or more. In addition, our own review found these leading practices still relevant today. We discussed with IRS officials the leading practices on which we based our descriptions and assessments during the course of our audit work, and they agreed they are relevant to the Real Time Tax exploratory effort.

The figure below provides additional information on the amendment rates for the 25 information returns we reviewed.

James R. White, (202) 512-9110 or whitej@gao.gov. In addition to the contact named above, David Lewis (Assistant Director), Shannon Finnegan (Analyst-in-Charge), Ellen Rominger, Andrew Ching, and Albert Sim made key contributions to this report. Also contributing to this report were Joanna Berry, Jehan Chase, Michele Fejfar, Robert Gebhart, Robert Robinson, Sabrina Streagle, and John Zombro.
For tax year 2011, IRS matched over 140 million individual income tax returns against the 1.6 billion information returns it received from third parties such as employers. Generally, this match does not occur until well after refunds are issued. In early 2011, the then-IRS Commissioner outlined a vision for a "Real Time Tax" system—a strategy to improve verification by matching third party information to income tax returns before refunds are issued—and IRS began exploring options for Real Time Tax later that year.

GAO was asked to review IRS's strategy for exploring Real Time Tax. This report (1) describes when IRS receives and matches individual tax and information returns and (2) assesses the extent to which IRS is following leading practices for managing an exploratory effort of this importance. GAO reviewed IRS documents and guidance, including the Internal Revenue Manual, information return forms, and drafts of Real Time Tax planning documents. GAO generated descriptive data on the timing and volume of 25 information returns using IRS's Compliance Data Warehouse database. GAO identified leading practices on planning new initiatives at IRS using past GAO reports, internal control standards, and IRS documents.

The Internal Revenue Service (IRS) receives few information returns before issuing most tax refunds. In 2012, IRS issued 50 percent of tax year 2011 refunds to individuals by the end of February, but had only received 3 percent of information returns. Most information returns are not received by IRS until after mid-April, and IRS conducts the first match of tax and information returns in July, with subsequent matches in February and May of the following year. For tax year 2010, over a year passed on average before IRS notified taxpayers of matching discrepancies, and IRS recognizes that this long time lag burdens taxpayers.

IRS is generally following leading practices in its Real Time Tax exploratory effort by, for example, dedicating a team and defining program goals. IRS did not develop an overall timeline because management views Real Time Tax as a broad goal, and officials wanted to avoid causing concern that IRS had already decided on a path. Without a timeline for the overall exploratory effort, IRS cannot know if its efforts will be completed in even the broad time frames IRS is considering, and Congress may not be able to determine what legislative action might be required. IRS officials stated that managing risk is a high priority, but they have not developed an overall risk management framework, as they are still in the early stages of the exploratory effort. They said they plan to further develop the strategy if IRS pursues Real Time Tax. Without systematically identifying and evaluating the risks of Real Time Tax options, IRS officials may miss critical factors that could complicate the effort. A record of prior risk analyses could help prevent unnecessarily repeating the same analyses.

GAO recommends IRS identify time frames for the exploratory effort's critical phases and activities and develop a risk management framework for Real Time Tax. IRS agreed with GAO's recommendations.
It is the federal government's responsibility to assure the physical protection of its facilities and the safety of employees and visitors to those federal buildings. GSA, through its Public Building Service (PBS), is the primary property manager for the federal government, owning or leasing 39 percent of the federal government's office space. Approximately one million federal employees, millions of visitors, and thousands of children and their day-care providers enter these facilities each day. Within PBS, the Federal Protective Service is responsible for the security of most GSA-managed buildings. More than 30 other executive branch agencies, including DoD and the departments of State, Veterans Affairs, and Transportation, have some level of authority to purchase, own, or lease office space or buildings. These agencies are responsible for the security of their own sites. The U.S. Secret Service is in charge of the security of the White House and other executive office buildings. The U.S. Capitol Police secures the Capitol complex, which includes the Capitol and House and Senate office buildings. The marshal of the Supreme Court and the Supreme Court Police are responsible for the security of the Supreme Court. Marshals from the Department of Justice's U.S. Marshals Service ensure the security of other federal courts. The 1995 domestic terrorist bombing of the Alfred P. Murrah Federal Building in Oklahoma City, Oklahoma, aroused governmentwide concern about the physical security of federal buildings. One day after the bombing, then President Clinton directed Justice to assess the vulnerability of all federal office buildings in the United States, particularly to acts of terrorism and other forms of violence. Justice led a working group in developing a report that established governmentwide minimum standards for security at all federal facilities. Also in 1995, the president directed executive departments and agencies to upgrade the security of their facilities to the extent feasible based on the report's recommendations, giving GSA this responsibility for the buildings it controls. Among the minimum standards for buildings of a higher risk level specified by the Justice report are security technologies, including closed-circuit television (CCTV) surveillance cameras, intrusion detection systems with central monitoring capability, and metal detectors and x-ray machines to screen people and their belongings at entrances to federal buildings. In June 1998, we testified on GSA's efforts to improve federal building security. We reported that although GSA had made progress implementing security upgrades in its buildings, it did not have the valid data needed to assess the extent to which completed upgrades had helped to increase security or reduce vulnerability to the greatest threats to federal office buildings. We also expressed concerns about whether all GSA buildings had been evaluated for security needs. We recommended that GSA correct the data in its tracking and accounting systems, ensure that all GSA buildings were evaluated, and develop program goals, measures, and evaluations to better manage its security enhancement program. In October 1999, we again testified on GSA's efforts. During this review, we found that the accuracy of GSA's security upgrade tracking system had improved and that almost all of its buildings had been evaluated for security needs.
However, a review we performed in April and May 2000 exposed a significant security vulnerability in the access controls at many government buildings. Posing as law enforcement officers, we gained access to 18 federal facilities, where we reached the offices of 15 cabinet secretaries or agency heads. Our briefcases were not searched for weapons or explosives. As mentioned previously, last September's terrorist attacks against the World Trade Center and the Pentagon have focused even greater security concerns on federal buildings. Such concerns have prompted agency officials to create a more stringent security environment at their facilities. For example, the Federal Emergency Management Agency recently informed GSA officials that it was canceling plans to move its national headquarters and 1,000 workers to the Potomac Center redevelopment near the waterfront in Washington, D.C. Citing security concerns about the new building, the agency backed out of a 10-year lease. Despite a show of increased security, it remains uncertain whether effective countermeasures have actually been implemented. For example, reporters who visited a number of government agencies in late October demonstrated that, without thorough screening, nonemployees could easily gain access to freely wander the buildings. Since the 1995 Oklahoma City bombing, the federal government has already spent more than $1.2 billion on increased security measures for the federal government's office space. Following the September 11th terrorist attacks, increased resources have been appropriated for this purpose. Specifically, on September 18, 2001, President Bush signed the Fiscal Year 2001 Emergency Supplemental Appropriations Act (P.L. 107-38), appropriating $40 billion to respond to the terrorist attacks. The act provides funding to cover the physical protection of government facilities and employee security. On September 21, 2001, the president allocated $8.6 million from this appropriation to GSA's Federal Buildings Fund to provide increased security for federal buildings. On October 17, 2001, the president requested that Congress increase the total to $200.5 million for the Federal Buildings Fund for additional security services at federal buildings. The president's fiscal year 2003 budget requests that $367 million be made available from the Federal Buildings Fund to fund costs associated with implementing security improvements to federal buildings. On March 21, 2002, the Bush administration asked Congress for an additional $27.1 billion in emergency funding for fiscal year 2002 for needs stemming from the September 11th terrorist attacks, $5.5 billion of which was for domestic security. Some of these requested funds will most likely be invested in technologies to enhance building security. It will be important to ensure that the technologies these funds are spent on are effective. The approach to good security is fundamentally similar regardless of the assets being protected. As GAO has previously reported for homeland security and information systems security, applying risk management principles can provide a sound foundation for effective security whether the assets are information, operations, people, or federal facilities. These principles, which have been followed by members of the intelligence and defense community for many years, can be reduced to five basic steps that help to determine responses to five essential questions.
Because of the vast differences in types of federal facilities and the variety of risks associated with each of them, there is obviously no single approach to security that will work ideally for all buildings. Therefore, following these basic risk management steps is fundamental to determining security priorities and implementing appropriate solutions. What Am I Protecting? The first step in risk management is to identify assets that must be protected and the impact of their potential loss. Included among the assets of federal facilities are the physical safety and peace of mind of the occupants, the value of the structure itself, and the importance of the mission of the organization housed in the facility. The symbolic value of certain landmark federal facilities and monuments must also be considered in the assessment. Who Are My Adversaries? The second step is to identify and characterize the threat to these assets. Is the threat, for example, that unauthorized individuals can gain access to the building to commit some crime, or that an authorized yet disgruntled employee intent on causing harm to fellow employees or the facility can get in, or, still more menacing, that a terrorist will introduce a chemical/biological agent or even a nuclear device into the building? The intent and capability of an adversary are the principal criteria for establishing the degree of threat to these assets. The terrorist bombing of the World Trade Center in 1993, the Oklahoma City bombing of the Alfred P. Murrah Federal Building in 1995, the U.S. embassy bombings in Tanzania and Kenya in 1998, and last year's September 11th terrorist attacks on the Pentagon and the World Trade Center leave no doubt as to the existence of adversaries intent on causing the maximum harm. And, as these events have tragically demonstrated, our adversaries certainly have the capability. Moreover, more recent information gathered by intelligence and law enforcement agencies has led government officials to believe that both foreign and domestic terrorist groups continue to pose threats to the security of our nation's infrastructure, including our public buildings. How Am I Vulnerable? Step three involves identifying and characterizing vulnerabilities that would allow identified threats to be realized. In other words, what weaknesses can allow a security breach? For a facility, weaknesses could include vulnerabilities in the physical layout of the building, its security systems, and its processes. For example, the lack of a standoff distance between vehicle access and the building itself, which would allow an adversary to detonate a car or truck bomb within a dangerous distance of the building, is a vulnerability in the perimeter security of a building. Or, it might be that an antiquated and labor-intensive access control system, in combination with an inadequate security staff, creates weaknesses in security systems and processes that allow entrance to a building. What Are My Priorities? In the fourth step, risk must be assessed and priorities determined for protecting assets. Risk assessment examines the potential for the loss of or damage to an asset. Risk levels are established by assessing the impact of the loss or damage, threats to the asset, and vulnerabilities. For example, the risk of loss of human life due to poor access controls is lower on weekends, when fewer people are working in the building, than on weekdays during standard working hours. What Can I Do? The final step is to identify countermeasures to reduce or eliminate risks. A simple sketch of how these assessments can be combined to rank risks appears below.
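For illustration only, the sketch below combines impact, threat, and vulnerability ratings into a single ranking, mirroring the five questions above. The numeric scales, the multiplicative scoring rule, and the example entries are invented assumptions, not an official GAO or agency methodology.

```python
# Minimal sketch, not an official methodology: ranks facility risks by
# combining assessed ratings (1 = low, 5 = high). All entries are invented.
from dataclasses import dataclass

@dataclass
class Risk:
    asset: str          # what am I protecting?
    threat: int         # who are my adversaries, and how capable are they?
    vulnerability: int  # how am I vulnerable?
    impact: int         # what is lost if the threat is realized?

    @property
    def score(self) -> int:
        # One simple convention: risk grows with all three factors.
        return self.threat * self.vulnerability * self.impact

risks = [
    Risk("occupants, weekday working hours", threat=4, vulnerability=3, impact=5),
    Risk("occupants, weekends", threat=4, vulnerability=3, impact=2),
    Risk("landmark structure itself", threat=3, vulnerability=2, impact=4),
]

# "What are my priorities?" The highest scores drive countermeasure choices.
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.score:3d}  {r.asset}")
```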
In identifying countermeasures, their advantages and benefits must also be weighed against their disadvantages and costs. Many security technologies were developed in a research environment. However, in a real-world environment, some degree of security must be traded off against operational and safety considerations. Extreme security countermeasures cannot be implemented if they could disrupt operations or adversely affect the safety of the occupants of a building. For example, an access control system that uses draconian methods to screen employees at public entrances would be inappropriate except in buildings at the highest risk level because it would cause maximum inconvenience to large numbers of building occupants at peak traffic hours. Moreover, an access control system cannot be so rigid that it impedes the safe exit of a building's occupants during emergencies, such as a fire. In all cases, an acceptable balance between security and these competing factors must be reached, and that balance can only be decided by the building's occupants. Countermeasures identified through the risk management process support the three integral concepts of a holistic security program: protection, detection, and reaction. Protection provides countermeasures such as policies, procedures, and technical controls to defend against attacks on the assets being protected. Detection monitors for potential breakdowns in protective mechanisms that could result in security breaches. Reaction, which requires human involvement, responds to detected breaches to thwart attacks before damage can be done. Because absolute protection is impossible to achieve, a security program that does not also incorporate detection and reaction is incomplete. To be effective, all three concepts must be elements of a cycle that works together continuously. To illustrate, suppose that the protection of a side door of a federal building is provided by a lock, which is wired to an intrusion detection sensor, which triggers an alarm to alert a guard to initiate a reaction. If someone picks the lock, thereby tripping an alarm, and a guard is monitoring the detection system in real time, he or she will detect the incident and can react to contain the intrusion and apprehend the intruder before damage is done. However, if no guard is monitoring the intrusion detection system to react to the intrusion, the process breaks down and the security of the building may be compromised. In other words, technologies that implement the concepts of protection and detection cannot alone safeguard a building. An effective human reaction is essential to the security process. Myriad security technologies, at various stages of commercial development, support the security concepts of protection, detection, and reaction. We have categorized these systems according to the particular concept that they support. Access control systems provide protection by establishing a checkpoint at entry points to a building through which only authorized persons may pass. Detection systems look for dangerous objects and agents on persons, their belongings, and their vehicles at a building's entry points. Intrusion detection systems monitor for security incursions throughout a building and alert security staff to investigate and contain the intrusion. The first line of security within a federal building is to channel all access through entry control points where identity verification devices can be used for screening.
These devices "authenticate" individuals seeking entry, i.e., they verify that the individuals are indeed authorized by electronically examining credentials or proofs of identity. Identity verification devices use three basic technological approaches to security based on something you have, something you know, and something you are. Accordingly, they range from automatic readers of special identification cards (something you have), to keypad entry devices that generally require a personal identification number (PIN) or password (something you know), to more sophisticated systems that use biometrics (something you are) to verify the identity of persons seeking to enter a facility. More secure access control systems use a combination of several of these approaches at the same time for additional security. Technologies used by identity verification devices include the basic bar code or magnetic strip for card-swipe readers, similar to those used for credit cards; cards that use radio frequency signals and need only be passed within close proximity to a reader to identify the cardholder; and smart cards that can contain biometric identifiers. Keypad entry devices are often used in combination with cards and card readers. Newer access control systems that use biometric technologies to verify the identity of individuals can significantly increase building security. The term biometrics covers a wide range of technologies used to verify identity by measuring and analyzing human characteristics. Identifiable physiological characteristics include fingerprints, retinas and irises, and hand and facial geometry. Identifiable behavioral characteristics are speech and signature. Biometrics theoretically represent a very effective security approach because biometric characteristics are distinct to each individual and, unlike identification cards and PINs or passwords, they cannot be easily lost, stolen, or guessed. Biometric systems first capture samples of an individual's unique characteristic, which are then averaged to create a digital representation of the characteristic, known as a template. This template is stored and used to determine if the characteristic of the individual captured by the identity verification device at the entry control point matches the stored template of that individual's characteristic. Templates can be stored within the device itself, in a centralized database, or on an access card. Until recently, most biometric technologies were very expensive and their accuracy was unreliable. However, prices have significantly decreased and, after years of research, the technology has recently improved considerably. Today, biometric devices that read fingerprints and hand geometry have been operationally deployed and proven to be affordable and reliable. Nevertheless, other biometric technologies are not as mature and still tend to falsely reject authorized persons or falsely accept unauthorized persons. These reliability weaknesses will have to be overcome before their use can be widespread. User acceptance is also an issue with biometric technologies in that some individuals find them difficult, if not impossible, to use. Still other individuals resist biometrics in general because they perceive them as intrusive and infringing on their right to privacy. Once a person is authenticated, access control systems are designed to electronically allow passage through some barrier. A minimal sketch of how these layered checks can work together follows.
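The sketch below illustrates the combined have/know/are check described above. The record layout, threshold value, and function name are assumptions for illustration, not any vendor's interface.

```python
# Illustrative sketch only; all names and values are hypothetical.
def verify_entry(card_id, pin, biometric_score, directory, threshold=0.80):
    """Grant entry only if all three factors check out."""
    record = directory.get(card_id)
    if record is None:                   # something you have: card not enrolled
        return False
    if pin != record["pin"]:             # something you know: wrong PIN
        return False
    return biometric_score >= threshold  # something you are: match strength

directory = {"C-1021": {"pin": "4417"}}
print(verify_entry("C-1021", "4417", biometric_score=0.91, directory=directory))  # True
print(verify_entry("C-1021", "9999", biometric_score=0.91, directory=directory))  # False
print(verify_entry("C-1021", "4417", biometric_score=0.55, directory=directory))  # False
```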
Building access barriers can range from such conspicuous physical structures as revolving doors to all but transparent optical turnstiles that generate an alarm when an unauthorized individual attempts to pass. Table 1 provides a high-level description of access control technologies that can be deployed to protect federal facilities. Attachment I describes the technologies in greater detail. Detection systems provide a second layer of security. Portal (walk-through) metal detectors can be strategically deployed at entry control points to screen individuals for hidden firearms and other potentially injurious objects, such as knives and explosive devices, as they clear the access control system. Unlike more traditional detectors, which simply generate an alarm when a metal target is detected anywhere on an individual's body, more technologically advanced portal scanners now come equipped with light bars to highlight the locations where the highest metal concentrations are detected. More sensitive and ergonomic handheld detector wands are also now commercially available to perform thorough and rapid follow-up screens. As individuals proceed through the metal detector, their carried items can be passed through an x-ray system, which scans the items to obtain an image of the contents. Low-energy x-ray systems are also currently being tested to screen individuals for hidden weapons and explosives. However, performance, privacy, and health issues associated with this technology will have to be overcome before it can be widely deployed. Though not yet commercially available, holographic scanning, which can screen for metallic as well as nonmetallic weapons concealed under clothing, is another new technology currently being tested by the Federal Aviation Administration. Explosive trace detectors provide an additional layer of building security. Security personnel swab the surface of a person's belongings at entry control points to check for concealed explosives. The swab is then placed into the detection device, which tests for the presence of explosive traces. Portal explosive detection systems and systems that detect large vehicles carrying bombs are now commercially available, but the technology has not yet been widely deployed. Finally, more research and development efforts will be required before technologies for detecting chemical/biological agents become more effective and affordable. Table 2 provides a high-level description of detection technologies that can be deployed to protect federal facilities. Attachment II describes the detection technologies in greater detail. Intrusion detection systems alert security staff to react to potential security incidents. CCTV cameras play an integral part in intrusion detection systems. Security personnel can use this technology to monitor activity throughout a building, in particular at entryways, exits, stairwells, and other areas that are susceptible to intrusion. CCTV technology is mature, practical, and reasonably priced. Moreover, it is highly cost efficient because one person can monitor several areas on different screens at the same time from one central location. However, experiments have shown that relying on security staff to detect incidents by constantly monitoring scenes from the camera in real time is ineffective. Because watching camera screens is both boring and mesmerizing, the attention of most individuals degenerates to well below acceptable levels after only 20 minutes of viewing.
This is particularly true if staff are watching multiple monitors simultaneously. A more practical application of CCTV is to interface the CCTV system with electronic intrusion detection technologies, which alert security staff to potential incidents requiring monitoring. Electronic intrusion detectors are designed to identify penetrations into buildings through vulnerable perimeter barriers such as doors, windows, roofs, and walls. These systems use highly sensitive sensors that can detect an unauthorized entry or attempted entry through the phenomena of motion, vibration, heat, or sound. Examples are technologies that detect motion through breaks in a transmitted infrared light beam and heat emitted from a warm object, such as a human body. When an intrusion is sensed, a control panel to which the sensors are connected transmits a signal to a central response area, which is continually monitored by security personnel. The alert tells security personnel what is occurring and where, so that they know which monitor to watch. By interfacing these technologies, security personnel can initially assess sensor-detected security events before determining how to react appropriately. Alarm-triggered video recorders can also be installed to provide immediate playback of a detected event for analysis. Table 3 provides a high-level description of intrusion detection technologies that can be deployed to secure federal facilities. Attachment III describes the technologies in greater detail. Although the newer technologies can contribute significantly to enhancing building security, it is important to realize that deploying them will not automatically eliminate all risks. Effective security also entails having a well-trained staff to follow and enforce policies and procedures. Moreover, the technical capabilities of security systems must not be overestimated. Finally, a broad framework of supporting functions must be in place at the federal, state, and local levels. Effective security requires technology and people to work together to implement policies, processes, and procedures that serve as countermeasures to identified risks. To illustrate this point, let us examine the following scenario: an organization has policies in place to mitigate the risk of an outsider committing a harmful act against its employees. One policy states that entry to the building is restricted to authorized personnel and another that no weapons may be brought into the building. An access control system implements the first policy by requiring that people wishing to enter present a smart card with a biometric that matches the stored biometric of the authorized person. A detection system implements the second policy by requiring people to pass through a metal detection portal and their belongings to be scanned by an x-ray machine. These procedures ensure compliance with the policies. However, to be effective, security personnel must enforce the policies by following the prescribed procedures. If security personnel allow exceptions to these procedures, they are failing to enforce compliance with the policies. Just as damaging is the lack of effective security processes. For example, if there are no processes in place to handle the entry of employees who have forgotten their identity access cards, a vulnerability may be created that could be exploited by adversaries. A simple sketch of this enforcement point follows.
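The sketch below models the two-policy scenario just described: an entry checkpoint enforces authorized-personnel-only and no-weapons policies. The functions and data are invented; the point it illustrates is that allowing exceptions defeats the policies, regardless of the technology in place.

```python
# Illustrative sketch only; all names and values are hypothetical.
def screen_visitor(visitor, allow_exceptions=False):
    """Apply both policies in order; an unenforced policy protects nothing."""
    checks = [
        ("authorized entry", visitor["credential_ok"]),   # access control policy
        ("no weapons",       not visitor["metal_alarm"]), # detection policy
    ]
    for policy, passed in checks:
        if not passed and not allow_exceptions:
            return f"denied ({policy} policy not satisfied)"
    return "admitted"

visitor = {"credential_ok": True, "metal_alarm": True}
print(screen_visitor(visitor))                         # denied (no weapons ...)
print(screen_visitor(visitor, allow_exceptions=True))  # admitted: policy defeated
```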
Breaches in security resulting from human error are more likely to occur if personnel do not understand the risks and the policies that are put in place to mitigate them. Training is essential to successfully implementing policies by ensuring that personnel exercise good judgment in following security procedures. In addition, having the best available security technology cannot ensure protection if people have not been trained in how to use it properly. Training is particularly essential if the technology requires personnel to master certain knowledge and skills to operate it. For example, x-ray inspection systems rely heavily on the operator to detect concealed objects in the generated x-ray images. If security personnel have not received adequate training in understanding how the technology works and in detecting threat images, such as a knife, the security system will be much less effective. It is also important to determine how effective technologies really are. Are they actually as accurate as vendors state? In overestimating their capabilities, security officials risk falling into a false sense of security and relaxing their vigilance. During our review, we found instances in which the performance estimates vendors provided for some of their biometric technologies were far more impressive than those obtained through independent testing. As always, it is important to keep in mind the adage of "buyer beware" when making procurement decisions. There are publicly available resources that provide assessment guidance regarding security products. For example, the National Institute of Justice has evaluated a number of security products over the past few years and can serve as a valuable resource to federal agencies for making purchasing decisions. Also bear in mind that lesser technological solutions sometimes may be more effective and less costly than more advanced technologies. Dogs, for example, are an effective and time-proven tool for detecting concealed explosives. The dogs currently used by DoD can detect nine different types of explosive materials. And because dogs are mobile and able to follow a scent to its source, they have significant advantages over mechanical explosive detection systems in any application that involves a search. The use of technologies as countermeasures is identified in the final step of the risk management process. As such, technologies are only capable of defending against recognized threats. Threats that go unrecognized cannot be factored into the risk management process, will not be mitigated, and may render the implemented technologies ineffectual. Security managers of federal buildings rely on federal, state, and local government entities to prevent, detect, and respond to acts of terrorism against their facilities. Federal security managers typically have no independent means of identifying potential threats posed by foreign and domestic terrorist groups. They therefore depend on intelligence and law enforcement agencies such as the Central Intelligence Agency, the Defense Intelligence Agency, and the State Department's Bureau of Intelligence and Research to gather information about and assess such threats against their facilities. Security managers of federal buildings also do not have access to the range of emergency resources required to respond to terrorist attacks. They rely on state and local governments to provide fire-fighting, medical, and other emergency services.
They also rely on the police and the judicial systems to enforce and prosecute violators of the laws and regulations governing the protection of federal buildings. Despite significant advances in performance and capability, the newer security technologies still face considerable technical challenges and user acceptance issues before they can be effectively integrated and widely deployed in federal facilities. First, because there are no industrywide common standards for data exchange and application programming interfaces for technologies that provide physical security, most of the equipment used by the technologies in our review is not interoperable. For example, deploying an access control system that uses a smart card containing a fingerprint biometric would require at least three pieces of equipment: the card reader device, the fingerprint scan device, and the hardware device used to house and operate the biometric software. If these devices are made by different manufacturers, they cannot function as an integrated environment without software to connect the disparate components. Not only does developing the initial customized software represent a substantial expenditure, but new software will have to be developed whenever old equipment is replaced by equipment from a different manufacturer. Moreover, standardizing on one manufacturer's equipment is not the most advantageous option, since doing so leaves no range of equipment from which to choose and requires replacing all existing hardware not made by that manufacturer. Although efforts are underway to address the lack of standards, it will be some time before this problem is resolved. Second, Americans expect and cherish the value of privacy. Recent concern in Congress and among public interest groups about the intended use of CCTV by D.C. law enforcement agencies has highlighted the privacy consequences of applying the newer security technologies. In general, apprehensions are based on a fear of misuse, i.e., that these security technologies will be used for purposes other than those for which they were intended. For example, there is a fear that the government may use the newer surveillance technologies to track people. In addition, employees fear that management will be tempted to monitor their performance. Also at issue is whether people will be arbitrarily monitored based on their race or ethnic origin or whether operators may be tempted to indulge in video voyeurism by, for example, focusing on young, attractive females. Another concern is that biometric technologies may reveal confidential medical information. Because diseases such as AIDS, diabetes, and high blood pressure cause changes to the retina, some people fear that retinal scans could compromise the privacy of this information. Civil liberties advocates also find the newer detection system technologies too intrusive. The tremendous potential for embarrassment was recently pointed out by newspapers reporting on low-dose x-ray systems installed at Orlando International Airport that essentially perform "virtual strip searches." These systems, now in a test phase, can see a person's body through clothing. Newspapers published pictures revealing images of a person's body—every inch of it—graphically captured by the scanner. Third, several of the security technologies we reviewed have the disadvantage of being both complex and inconvenient to use, requiring considerable user cooperation.
Most biometric technologies, in particular, have some negative features. Retina scanning, for example, feels physically intrusive to some users because it requires close proximity with the retinal reading device. Moreover, fingerprinting feels socially intrusive to some users because of its association with the processing of criminals. There is also an assortment of health concerns among a segment of the population regarding certain security technologies. There is evidence that pacemakers and hearing aids can be adversely affected by some detection technologies. However, no evidence has been produced to substantiate fears of radiation exposure from x-ray systems or apprehensions that certain detection systems could cause depression or even brain tumors. Certain groups of individuals also resist using biometric devices because of hygiene concerns.

In conclusion, our review has identified myriad commercially available technologies that implement the three essential concepts of effective security: protection, detection, and reaction. Many of these technologies are mature and have already been deployed in various federal facilities, where their capabilities and effectiveness have been demonstrated. Other newer technologies appear to offer great potential in helping federal agencies to ensure the security of their facilities; these technologies could be adopted in the near future. Still other technologies are in a nascent stage of development but are maturing and appear promising. Many biometric technologies still face barriers of intrusiveness and complexity that must be addressed before they can be most effectively deployed and widely accepted by users. However, they offer greater security, and the challenges to their implementation are not insurmountable. Of foremost importance is to continue to bear in mind that effective security can never be achieved by relying on technology alone. People will always play a fundamental role in all phases, from planning to implementation and enforcement. Accordingly, technology and people must work together as part of an overall security process, beginning with a risk management approach and incorporating, implementing, and reinforcing those three essential concepts.

Mr. Chairman and members of the subcommittee, this concludes my statement. I would be pleased to answer any questions you or the members of the subcommittee may have. For further information, please contact me at (202) 512-6412 or via e-mail at rhodesk@gao.gov. Individuals making key contributions to this testimony included Sophia Harrison, Ashfaq Huda, Richard Hung, Elizabeth Johnston, and Tracy Pierson.

Attachment I: Access Control Technologies

The first line of security within a federal building is to channel all access through entry control points where identity verification devices can be used for screening. These devices "authenticate" individuals seeking entry, i.e., they verify that the individuals are indeed authorized to be there by electronically examining credentials or proofs of identity. Identity verification devices use three basic technological approaches to security based on something you have, something you know, and something you are. Accordingly, they range from automatic readers of special identification cards (something you have), to keypad entry devices that generally require a PIN or password (something you know), to more sophisticated systems that use biometrics (something you are) to verify the identity of persons seeking to enter a facility.
More secure access control systems use a combination of several of these approaches at the same time for additional security. The term "biometrics" covers a wide range of technologies used to measure and analyze human characteristics to verify a person's identity. Identifiable physiological characteristics include fingerprints, eye retinas and irises, and hand and facial geometry. Identifiable behavioral characteristics are speech and signature. Biometrics represents a theoretically very effective security approach because these characteristics are distinct to each individual and, unlike identification cards and PINs or passwords, they cannot be easily lost, stolen, or guessed. Although biometric technologies measure different characteristics, all biometric access control technologies involve a similar process that includes the following components:

Enrollment: multiple samples of an individual's biometric are captured (as an image or a recording) via an acquisition device (e.g., a scanner or a camera).

Reference template: the captured samples are averaged and processed to generate a unique digital representation of the characteristic, which is stored for future comparisons. Templates are essentially binary number sequences. The size of the template depends on the technology but generally ranges from 10 bytes to 20,000 bytes. It is impossible to recreate the sample, such as a fingerprint, from the template. Templates can be stored centrally in a computer database, within the device itself, or on a smart card.

Verification: a sample of the biometric of the person seeking access to a building is captured at the entry control point, processed into a trial template, and compared with the stored reference template to determine if they match.

Because the reference template is generated from multiple samples at enrollment, the match is never perfect. Therefore, systems are configured to verify the identity of users if the match exceeds an acceptable threshold. The effectiveness of biometric systems is characterized by two error statistics: the false rejection rate (FRR) and the false acceptance rate (FAR). For each FRR there is a corresponding FAR. A false reject occurs when a system rejects a valid identity; a false accept occurs when a system incorrectly accepts an identity. If biometric systems were perfect, both error rates would be zero. However, all biometric technologies suffer FRRs and FARs that vary according to the individual technology and its stage of development. Because biometric access control systems are not capable of verifying identities with 100 percent accuracy, trade-offs must be considered during the final step of the risk management process when deciding on the appropriate level of security to establish. These trade-offs have to balance acceptable risk levels against the disadvantages of user inconvenience. For example, perfect security would require denying access to everyone, while granting access to everyone would provide no security at all. Obviously, neither of these extremes is reasonable, and access control systems must operate somewhere between the two. How much risk one is willing to accommodate is the overriding factor in adjusting the threshold, which translates into determining the acceptable FAR. The tighter the security required, the lower the tolerable FAR. Vendors of biometric systems currently claim that false accepts occur once out of every 100,000 attempted entries and that the FRR is about 2 to 3 percent. A simple numerical sketch of how the decision threshold drives both error rates follows.
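In the sketch below, the match scores are invented; real systems derive scores from template comparisons, but the trade-off behaves the same way: moving the threshold lowers one error rate while raising the other.

```python
# Toy numbers only, for illustration of the FAR/FRR trade-off.
genuine  = [0.91, 0.84, 0.95, 0.58, 0.76, 0.90, 0.62, 0.88]  # authorized users
impostor = [0.20, 0.45, 0.66, 0.55, 0.12, 0.71, 0.30, 0.25]  # unauthorized users

def rates(threshold):
    frr = sum(s < threshold for s in genuine) / len(genuine)     # false rejects
    far = sum(s >= threshold for s in impostor) / len(impostor)  # false accepts
    return far, frr

for t in (0.50, 0.65, 0.80):
    far, frr = rates(t)
    print(f"threshold {t:.2f}: FAR {far:.2f}, FRR {frr:.2f}")
# Raising the threshold lowers the FAR and raises the FRR; the point where
# the two are equal (here near 0.65) is the equal error (crossover) rate.
```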
However, because system thresholds are adjusted to accommodate different FARs, it is often difficult to measure and compare the effectiveness of different systems. Vendors also describe the accuracy of their systems in terms of an equal error rate, also referred to as the crossover accuracy rate, or the point where the FAR equals the FRR. In general, selecting a lower FAR increases the FRR—the chance that an authorized person will be denied access to a facility. Perfect security would require denying access to everyone; in this extreme case, the FAR would be "0" and the FRR "1." Conversely, granting access to everyone would result in an FRR of "0" and a FAR of "1." Fingerprint scan technology (also known as fingerprint recognition) uses the impressions made by the unique, minute ridge formations or patterns found on the fingertips. Although fingerprint patterns may be similar, no two fingerprints have ever been found to contain identical individual ridge characteristics. These characteristics develop on normal hands and feet some months before birth and remain constant, except for accidental damage, until decomposition after death. The image of the fingerprint is captured either optically or electrically. A template is then created from the image. There are two primary methods for creating templates. Most fingerprint scan technologies base the template on minutiae, or the breaks in the ridges of the finger (such as ridge endings or points where a single ridge divides into two). The second method is based on pattern matching of the ridge patterns. In neither method is the template a full fingerprint image, and a real fingerprint cannot be recovered from the digitized template. The generated template ranges from 250 bytes for minutiae-based templates to about 1,000 bytes for ridge-pattern-based templates. Vendors commonly claim a FAR of 0.01 percent. Despite the low claimed FAR, independent testing has shown that some scanning devices can have an FRR of nearly 50 percent. In a small percentage of the population, fingerprints cannot be captured because a person's fingerprints are dirty or have become dry or worn due to age, extensive manual labor, or exposure to corrosive chemicals. In addition, the optical method of fingerprint scanning can be prone to errors if there is a buildup of dirt, grime, or oil on the surface of the device where the image is captured. Because fingerprints have historically been used by law enforcement agencies to identify criminals, there is some user resistance to this technology. Also, people may have hygienic concerns about touching a scanner plate that many others have touched. According to a 2001 report published by Gartner Group, Inc., the leading vendors are American Biometric Company, Digital Persona Inc., Identix Inc., and Bioscrypt, Inc. (formerly Mytec Technologies Inc.). The GSA schedule lists fingerprint readers designed for physical access control at prices ranging from about $1,000 to about $3,000 per unit. Software licenses for the fingerprint technology are listed at about $4.00 per user enrolled. Hand (or finger) geometry is based on the premise that each individual's hands, although changing over time, remain characteristically the same. The technology collects over 90 automated measurements of many dimensions of the hand and fingers, using such metrics as the height of the fingers, the distance between joints, and the shape of the knuckles.
The user's hand is placed on the sensor's surface, typically guided into proper position by pegs between the fingers. Only the spatial geometry is examined; prints of the palm or fingers are not taken. A 10- to 20-byte template is created from the hand geometry. Independent testing of the leading hand geometry readers (manufactured by Recognition Systems, Inc.) at Sandia National Laboratories in 1991 produced a FAR of less than 0.1 percent and an FRR of less than 0.1 percent. Hand geometry is not considered as robust as other biometric access control technologies because of similarities between individual hand templates. Not as much distinguishing information can be found in a hand as in an iris or a fingerprint. Hand geometry is a well-developed technology, which disregards fingernails and surface details such as fingerprints, lines, scars, and dirt. However, hand injuries and jewelry can impede accurate readings and comparisons. Whether used for verification or identification purposes, the stored templates must be kept updated, as hands are naturally altered by age. Hand geometry is considered easy to use, although a minimal amount of training is required for users to align their hands in the reader. The hand geometry market is dominated by Recognition Systems, Inc. The finger geometry market is led by BioMet Partners. Hand geometry reader devices generally cost between $2,000 and $4,000. Retina scan technology is based on the patterns of blood vessels on the retina, a thin nerve about 1/50th of an inch thick located on the back of the eye. These patterns are unique from person to person. No two retinas are alike, not even in identical twins. Retinal patterns remain constant throughout a person's lifetime except in cases of certain diseases. Retina scan devices project a low-intensity infrared light through the pupil and onto the retina. The patterns of the retina's blood vessels are measured at over 400 points to generate a 96-byte template. Retinal scanning, along with iris scanning technology, is the most accurate and reliable of the biometric technologies. It is virtually impossible to replicate the image produced by a human retina, and the technology has been used as a mainstay for controlling access to highly secure government facilities. Depending upon system threshold settings, FRRs can be as low as 0.1 percent and FARs as low as 0.0001 percent (1 in 1,000,000). Retina scan biometrics are the hardest to use. The older technology requires users to repeatedly focus on a rotating green light through a small opening in the scanning device, located within 1/2 inch of the eye, and to hold very still for 10 to 12 seconds at a time. However, a newly developed technology is capable of capturing a retinal image at distances as great as a meter from the user's eye in 1.5 seconds. Also, whereas glasses, contact lenses, and existing medical conditions, such as cataracts, interfere with the older scanning technology, the newer technology is more accommodating. Though stable over time, the retina can be affected by diseases such as glaucoma, diabetes, high blood pressure, and AIDS. Even though the technology itself is completely safe, users tend to be resistant to its use because the eye is a very delicate area. Users perceive the technology as intrusive because it requires the use of infrared rays to obtain an accurate reading.
Additionally, some users are very hesitant to use the device because the older technology requires close proximity to, or even contact with, the scanner. The newer technology is less intrusive. Some people fear that retinal scans could compromise the privacy of confidential medical information because certain patterns of blood vessels in the retina can be associated with certain diseases. Until recently, EyeDentify Inc. was the sole vendor of retina systems. Retinal Technologies, Inc. has recently entered the market with a new retinal scan technology. Retina scan devices cost approximately $2,000 to $2,500, placing them toward the high end of the physical security spectrum. Iris scan technology is based on the unique visible characteristics of the eye's iris, the colored ring that surrounds the pupil. The iris of each eye is different; even identical twins have different iris patterns. The iris remains constant over a person's lifetime. Even medical procedures such as refractive surgery, cataract surgery, and cornea transplants do not change the iris's characteristics. Built from elastic connective tissue, the iris is a very rich source of biometric data. Its complex patterns include striations, rings, furrows, a corona, and freckles. Whereas traditional biometrics have only 13 to 60 unique characteristics, an iris has about 266. A high-resolution black-and-white digital image of the iris is taken to collect data. The system then defines the boundaries of the iris, establishes a coordinate system over the iris, and defines the zones for analysis within the coordinate system. The visible characteristics within the zones are then converted into a 512-byte template. Iris scanning is considered one of the more secure identity verification methods available. Because of the massive quantity of biometric data that can be derived from the iris, the template that is created is unique; the odds of two different irises returning identical templates have been estimated at 1 in 10 to the 78th power. The technology cannot be foiled by wearing contact lenses or presenting an artificial eye to the reading device because algorithms check for the presence of a pattern on the sphere of the eye instead of on an internal plane and use measurements at different wavelengths to detect whether the eye is living. The Army Research Laboratory recently tested an identification system using iris scan technology from Iridian Technologies. The results indicated an FRR of 6 percent and a FAR of 1 to 2 percent. Few other independent tests of iris scan technology have been published. Both the enrollment and verification steps are easy. Contact lenses, even colored ones, normally do not interfere with the process. Wearers of exceptionally strong glasses could have problems, but glasses can always be removed. Iris recognition can even be used to verify the identity of blind people as long as one of their sightless eyes has an iris. Unusual lighting situations may affect the ability of the camera to capture the subject, and glare, reflections, and user movement or distraction can cause interference. Unlike other biometric identification verification technologies, such as fingerprinting or hand geometry, iris scan technology requires no body contact. Although some users resist technologies that scan the eye, the iris scan is more user friendly than the retinal scan because no light source is shone into the subject's eye and close proximity to the scanner is not required.
Users can simply glance into a standard video camera from a distance of about 10 inches and have their identity verified in approximately 2 seconds. According to a 2001 report published by Gartner Group, Inc., Iridian Technologies is the sole owner and developer of iris recognition technology. Vendors licensing iris technology include EyeTicket Corporation, LG Electronics, and Panasonic. Iris recognition was traditionally among the most expensive biometric technologies, costing tens of thousands of dollars. The significant drop in the price of computer hardware and cameras has brought the price down. However, an iris recognition system still costs approximately $4,000 to $5,000. Facial recognition is a biometric technology that identifies people based on their facial features. Systems using this technology capture facial images from video cameras and generate templates for comparing a live facial scan of an individual to a stored template. These comparisons are used to either verify or identify an individual. Verification systems (also known as one-to-one matching systems) compare a person's facial scan to a stored template for that person and can be used for access control. In an identification system (or a one-to-many matching system), a person's facial scan is compared to a database of multiple stored templates. This makes an identification system better suited for use in surveillance in conjunction with CCTV to, for example, spot suspected terrorists whose facial characteristics have already been captured, converted into a template, and stored in a database. There are two primary types of facial recognition technology used to create templates: 1. Local feature analysis—Dozens of images from regions of the face are captured, resulting in feature-specific fields such as the eyes, nose, mouth, and cheeks. These feature-specific fields are used as blocks of a topographical grid. The types of blocks and their positions are used to identify the face. Small shifts in a feature are anticipated to cause a related shift in an adjacent feature. 2. Eigenface method—Unlike local feature analysis, the eigenface method always looks at the face as a whole. A collection of face images is used to generate a set of two-dimensional, grayscale images to produce the biometric template. When a live image of a person's face is introduced, the system represents the image as a combination of templates. This combination is compared to a set of stored templates in the system's database, and the degree of variance determines whether or not a face is recognized. Modifications of the algorithms used in the local feature analysis and eigenface methods have led to variants that incorporate the following: Neural network mapping—Comparisons of a live facial image to a stored template are based on unique global features rather than individual features. Upon a false match, the comparison algorithm modifies the weight given to certain features (such as shadows). Automatic face processing—Facial images are captured and analyzed using the distances and distance ratios between features (such as between the eyes). Testing of an identification system was performed using the Face Recognition Technology (FERET) database. According to the results of recent testing, the typical recognition performance for frontal images taken on the same day is 95 percent accuracy. For images taken with different cameras and lighting, typical performance drops to 80 percent accuracy.
For images taken 1 year later, the typical accuracy is approximately 50 percent. The Army Research Laboratory recently tested an identification system using facial recognition technology. Despite vendor claims of 75 percent correct identification, the testing showed that only 51 percent were correctly identified. Further, the correct identification was in the system's top 10 possible matches only 81 percent of the time instead of the vendor-claimed 99.3 percent. Facial recognition technology cannot effectively distinguish between identical twins. The effectiveness of facial recognition technology is heavily influenced by environmental factors, especially lighting conditions. Variations in camera performance, facial position, facial expression, and facial features (e.g., hairstyle, eyeglasses, and beards) further affect performance. As a result, current facial recognition technology is most effective when used in consistent lighting conditions with cooperative subjects in a mug-shot-like position (where hats and sunglasses are removed and individuals look directly at the camera one at a time). Whether used for verification or identification purposes, the stored image templates must be kept updated, since appearances are naturally altered by age. When used in a verification system for access control, facial recognition is typically considered by users to be less intrusive than other biometric technologies, such as iris scanners and fingerprint readers. However, when used in an identification system, there are concerns that this technology can be used to facilitate the tracking of individuals without their consent. According to a 2001 report published by Gartner Group, Inc., the leading vendors are eTrue Inc., Viisage Technology Inc., and Visionics. For an installation with up to 30,000 persons, a facial-recognition server costs about $15,000. Depending on the number of entry points using facial-recognition technology, software licenses range from about $650 to $4,500. Speaker verification works by creating a voice template based on the unique characteristics of an individual's vocal tract, which result in differences in the cadence, pitch, and tone of an individual's voice. During enrollment, samples of a person's speech are captured by having the person speak some predetermined information into a microphone or a telephone handset (e.g., name, birth month, birth city, favorite color, or mother's first name). A template is then generated from these "passphrases" and stored for future comparison. When attempting to gain access, the person is asked by the system to speak one or more of the randomly selected enrolled passphrases for comparison. Some speaker recognition systems do not rely on a fixed set of enrolled passphrases to verify a speaker's identity. Instead, these systems are trained to recognize similarities between the voice patterns of individuals speaking unfamiliar phrases and the voice patterns of their previously enrolled phrases. This is similar to the way in which the human brain instinctively attempts to match an unfamiliar word that it hears with one that it already knows. The typical biometric voice template is between 10,000 and 20,000 bytes. Although speaker verification can be used for physical access control, it is more often used in environments in which voice is the only available biometric identifier, such as telephony and call centers.
Equal error rates for systems that use a fixed set of enrolled passphrases range between 1 and 6 percent, depending on the number of words in the passphrase. Systems that do not rely on a fixed set of enrolled passphrases are not as accurate: the more unfamiliar phrases the system is required to compare, the more likely it is that a false accept will occur. Performance increases with higher-quality input devices. Some speaker verification systems provide safeguards against the use of a recorded voice to spoof the system. For these systems, the electronic properties of a recording device, particularly the playback speaker, will change the acoustics to such a degree that the recorded voice sample will not match a stored voiceprint of a "live" voice. The enrollment procedure takes less than 30 seconds. The user must be positioned near the acquisition device and must speak clearly and in the same manner during enrollment and verification. The typical verification time is 4 to 6 seconds. Changes in the voice due to factors such as a severe cold might make verifying the voice more difficult. Environmental factors such as background noise also affect system performance. Other factors that can affect performance include different enrollment and verification capture devices, different enrollment and verification environments, speaking softly, poor placement of the capture device, and the quality of the capture device. Speaker verification systems have a high user acceptance rate because they are perceived as less intrusive than other biometric devices and are also the easiest to use. According to a 2001 report published by Gartner Group, Inc., the leading vendors are Buytel, T-NETIX Inc., Veritel Corporation, and VeriVoice Inc. The list price for a 16-door system is $21,000. Overall, speaker verification can cost between $70 and $250 per user. Signature recognition authenticates the identity of individuals by measuring their handwritten signatures. The signature is treated as a series of movements that contain unique biometric data, such as personal rhythm, acceleration, and pressure flow. Unlike electronic signature capture, which treats the signature as a graphic image, signature recognition technology measures how the signature is signed. In a signature recognition system, the user signs his or her signature on a digitized graphics tablet or personal digital assistant. The system analyzes signature dynamics such as speed, relative speed, stroke order, stroke count, and pressure. The system compares not merely what the signature looks like but also how it is signed. The technology can also track each person's natural signature fluctuations over time. The signature dynamics information is encrypted and compressed and can then be stored in a database system, on a smart card, or in a token device. The stored template size is 1,500 bytes. The use of signature recognition for access control appears fairly limited because a proficient forger is quite capable of selectively provoking false accepts for individual users. The typical verification time is from 4 to 6 seconds. Several performance factors may impede signature verification, including a user signing too quickly, a user having an erratic signature, a signature that is particularly susceptible to emotional and health changes, and different signing positions. Enrollment usually requires several consistent captures.
The system is easy to use and nonintrusive, and it requires no staff or customer training or any alteration in signing modes or habits. Because dynamic signature verification closely resembles the traditional signature process, it has minimal user acceptance issues. The graphics tablet can be inconvenient as an input device. The principal criticism is that the signer cannot see what he or she is writing; in addition, the rather soft base on which the person signs takes some getting used to. According to a 2001 report published by Gartner Group, Inc., the leading vendors are Communication Intelligence Corporation and Cyber-SIGN Inc. Additional vendors include Hesy, WonderNet, and ScanSoft. A signature recognition tablet costs about $375. Systems based on magnetic swipe cards allow users to access buildings by inserting or swiping a uniquely coded access card through a reader. Magnetic swipe cards have a narrow strip (magstripe) of magnetic material fused to the back of a plastic card, which is very similar to a piece of cassette tape. The size of the card and the position of the magnetic strip are set by International Organization for Standardization (ISO) standards. A typical bank or credit card is an example of a magnetic swipe card. The principle of an access control system that uses magnetic swipe technology is that a unique number is encoded onto the user card. The card reader reads the number, which the access control unit interprets and, in conjunction with a database, uses to determine whether the user is authorized (a simple sketch of this check follows below). Most magnetic swipe card readers use one of two methods for reading the card: Swipe reader—A card is swiped through a long, narrow slot that is open at each end. Insert reader—A card is inserted into a small receptacle that is just large enough to accommodate the card. The security swipe card may be for general access, meaning that the card does not provide data about the person using it, or it may be individually encoded, containing specific information about the cardholder. Typically, the data encoded on a security swipe card can include an ID number (social security number or other unique number) and an access level for cases in which different offices within a facility require different levels of access. Magnetic swipe card systems perform effectively. However, a magnetic swipe card system still does not necessarily verify a person; it only confirms that the person has a card. For this reason, these systems are generally not considered acceptable as stand-alone systems for high-security areas and require additional controls, such as PINs or biometric identification. Coded credentials are also vulnerable to counterfeiting and decoding. A card that is lost or stolen can be used by unauthorized persons. Additionally, if the authorized access lists are not frequently updated, the potential exists for persons who no longer have authorization to gain access to a secure area. As a result, a magnetic swipe card system is considered more effective when combined with other methods of authentication, such as a keypad entry system or biometrics. The most common problem with magnetic swipe cards is failure to be read by the card reader. Because they have to be durable enough to withstand repeated use, magnetic swipe cards are wrapped in a single piece of protective laminate that protects them from demagnetization, a common cause of card failure in reader systems. The wrapper also protects them from cracking or chipping.
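To make the database check described above concrete, here is a minimal Python sketch of the control unit’s decision; the table layout, card numbers, and access levels are hypothetical:

AUTHORIZED = {
    "4417-0021": {"name": "J. Doe", "access_level": 2},
    "4417-0456": {"name": "A. Smith", "access_level": 1},
}
DOOR_REQUIREMENTS = {"lobby": 1, "server_room": 2}

def check_access(card_number, door):
    # The reader supplies only the number encoded on the magstripe;
    # possession of the card, not the holder's identity, is verified.
    holder = AUTHORIZED.get(card_number)
    granted = (holder is not None and
               holder["access_level"] >= DOOR_REQUIREMENTS[door])
    # Every attempt, granted or denied, would typically be logged.
    return granted

Nothing in this check establishes who actually swiped the card, which is why swipe systems are usually paired with a PIN or a biometric in high-security areas.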
Even then, wear and tear will affect the card itself; dirty or scratched cards are also unreadable. The Defense Protective Service has complained that the problem with its current access control magnetic swipe cards is that the magnetic strip wears down within a year of use. Overall, there are no user acceptance issues with the magnetic swipe card. According to the Security Industry Association, the leading vendors are Mercury, Apollo, and Doavo. The magnetic swipe cards themselves are very inexpensive at around $1 each. Card readers cost between $150 and $300 each. Proximity cards are passive, read-only devices. They can be of various sizes ranging from a token (about the size of a watch battery) to the size of a credit card. Proximity cards contain an embedded radio frequency (RF) antenna. The proximity card reader constantly transmits a low-level fixed RF signal that provides energy to the card. When the card is held at a certain distance from the reader, the reader’s RF signal is picked up by the card’s antenna and absorbed by a small coil inside the card that powers the card’s microchip. Once powered, the card transmits to the reader a unique identification code contained in the card’s microchip. The whole process is completed in microseconds. Cards can usually be read through a purse or wallet and through most other nonmetallic materials. The reader can be surface-mounted or concealed inside walls or special enclosures. It can even function behind glass, plaster, cement, or brick, depending on the range. It has no openings that can jam or be tampered with. Card and reader orientation is not critical, and keys or coins held in contact with the card will not alter its code or prevent accurate readings. Reading ranges primarily depend on the reader: the larger the reading range, the larger the size of the reader. Proximity card systems perform effectively. However, a proximity card system still does not necessarily verify a person; it only confirms that the person has a card that was issued to the person he or she claims to be. For this reason, these systems are generally not considered acceptable as stand-alone systems for high-security areas, and require additional controls, such as PINs or biometric identification. Additionally, authorized access lists must be frequently updated to ensure that access authorization remains current. As a result, a proximity card system is considered more effective when combined with other methods of authentication, such as a keypad entry system or biometrics. The user has to make sure to hold the card facing the reader. The card can typically be verified in less than 1 second. The contactless nature of the cards reduces the wear and tear associated with cards requiring contact, such as magnetic swipe cards. Proximity cards are nonintrusive and very easy to use. If a reader has a range of 1 meter, then a proximity card can be worn on a clip or chain and users can gain access by simply passing by the reader. According to the Security Industry Association, the leading vendors are Hughes Identification Devices (HID), Indala, and Applied Wireless Identifications. Proximity cards cost about $5 to $6; readers can cost up to $750. Smart cards, about the size and shape of a credit card, are used in access control systems to verify that the cardholder is the person he or she claims to be.
They are increasingly used in one-to-one verification applications that compare a user’s biometric (commonly a fingerprint or hand geometry) to the biometric template stored on the smart card. Smart cards contain a memory chip to store identification data and often have a microprocessor to run and update applications. Most smart cards in use today have the capacity to store 8 or 16 kilobytes of information, and cards with 32-kilobyte and 64-kilobyte capacities are also becoming available. There are two types of smart cards: contact cards, which work by being inserted in a smart card reader, and contactless cards, which use radio frequency (RF) signals and need only be passed within close proximity to a card terminal to transmit information. Card readers and terminals are generally very compact and can be mounted on turnstiles and doors. An advantage of smart cards is that they can support more than one application. For example, they can be used to authenticate physical access to multiple facilities or to specific rooms within a facility, and even to authenticate access to computers or networks. Although the smart card industry has made use of experiences from traditional magnetic swipe cards, card reliability is not easy to predict. Physical interfaces for smart cards have been standardized through the ISO, and manufacturers claim that their products pass the ISO reliability tests meant to simulate “real life” conditions. However, each implementation of smart cards varies due to differences in usage patterns, environmental conditions, software, and readers/terminals. A smart card system still does not necessarily verify a person; it only confirms that the person has a card. For this reason, these systems are generally not considered acceptable as stand-alone systems for high-security areas and require additional controls, such as PINs or biometric identification. As a result, a smart card system is considered more effective when combined with other methods of authentication, such as a keypad entry system or biometrics. One government use of smart cards encountered problems because of network performance issues. Specifically, the response time for passing information between the card readers or terminals and the central database was slow, and officials could not readily verify the identification of users trying to access these facilities, causing congestion problems. Further testing revealed that the plastic cards, interfaces or workstation connections, card readers, and terminals worked effectively—though some interface devices worked more slowly than others. Consistent performance of smart cards relies heavily on cardholder education about proper card care. Inappropriate user actions (such as punching a hole in the card or using it to scrape ice off a car windshield) are common and should be planned for. Glitches in card reader/terminal software and hardware can also damage smart cards, and it is important to implement mechanisms that identify faulty software and hardware. Public policy organizations continue to be concerned about the data that will be stored and transferred to databases from smart cards and how government organizations will use the information. As such, some individuals may be reluctant to carry one card for multiple purposes.
There is no requirement for smart card technologies to meet a minimum set of security standards, and smart cards may be vulnerable to various types of cyber attacks because the devices often support multiple applications that interface with other computerized products. The National Institute of Standards and Technology (NIST) and the National Security Agency (NSA) are currently working on an evaluation program to certify the security of smart card technologies. The dominant vendors of smart cards are Gemplus and SchlumbergerSema, although many vendors offer security systems based on smart cards. Major smart card system vendors include ActivCard S.A., RSA Security, and Spyrus. At the federal level, the General Services Administration awarded a $1.5 billion contract in 2000 to five vendors—PRC/Litton, EDS, 3-G International, Logicon, and KPMG—to provide federal agencies with a range of smart card services. Under the contract, more than 140 additional vendors have been used to supply federal agencies with software, cards, card readers, terminals, and other peripheral smart card devices—including Nokia, Microsoft, Rainbow Technologies, and others. The unit price for smart card technology varies and largely depends on the applications and security features supported by the device. The price for the smart card itself can range from about $3 to $30 each. The more applications supported by the smart card, the higher the unit price. Card readers or terminals also range in unit price, starting from about $16 per unit. In addition to these costs, organizations incur expenses for managing the associated databases and software as well as issuing the cards to users and administering their use. When used with doors fitted with electric or magnetic locks, keypad entry systems selectively allow users to enter buildings or other secured areas by requiring them to first enter a passcode (a PIN or special code). A standard passcode can be set to allow access to a specific group of individuals, or multiple passcodes can be adopted so that each individual is assigned a unique code. When an authorized passcode is entered using the keypad (which is similar to the numeric keypads of bank ATMs), the system activates the electric or magnetic lock, unlocking the door for only a brief period of time. A database may be automatically updated each time a passcode is entered to document both successful and unsuccessful access attempts. Keypad devices typically include a duress function, where a person being threatened can activate a silent alarm to summon assistance. In some systems, the threatened user would enter a specific duress code, whereas in others the threatened users would enter their usual passcode followed by additional digits. In either case, access would be granted in a seemingly normal manner, but a silent duress code would be sent to a designated monitoring station (a capability illustrated in the sketch below). A variety of keypads are available, from very simple entry devices to unique keypads that scramble the numbers differently for each use. Although they can be used on their own in an access control system, keypads are typically used in conjunction with an ID card and card reader. In a card-reader-only system, an individual must present something he or she has (an authorized card) to gain entry. However, users of a keypad-only system need only know an authorized passcode. As such, once a user shares a legitimate passcode, further use cannot be prevented unless the code is changed.
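The duress function described above can be illustrated with a short Python sketch; the passcode, the duress suffix, and the alarm hook are hypothetical values chosen for the example:

PASSCODES = {"482913": "J. Doe"}
DURESS_SUFFIX = "99"

def evaluate_entry(entered):
    # Returns (door action, whether a silent alarm should be raised).
    if entered in PASSCODES:
        return "open", False            # normal entry
    base = entered[: len(entered) - len(DURESS_SUFFIX)]
    if entered.endswith(DURESS_SUFFIX) and base in PASSCODES:
        return "open", True             # door opens normally; silent alarm
    return "deny", False

action, duress = evaluate_entry("48291399")
# When duress is True, the monitoring station is notified while the
# door behaves exactly as it would for a normal entry.

The essential design point is that the door’s visible behavior is identical in the normal and duress cases, so a threatening party cannot tell that help has been summoned.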
Also, as users enter their passcodes, they are susceptible to their codes being “stolen” by a person looking over their shoulder. A keypad entry system is considered more effective when combined with a card system, providing a higher level of security than the keypad alone. Keypad entry systems provide a flexible solution for controlling the movement of groups of people or individuals, as the passcodes can be disabled when they are no longer appropriate. However, keypad entry systems, in a manner similar to passwords on computer systems, can be prone to users forgetting their passcodes, thus requiring other procedures for passing through the door. Keypads are vulnerable to mechanical malfunction as well as vandalism. User acceptance is high for keypad systems. A selection of vendors taken from the GSA Schedule includes Radionics, Securitron Magnalock Corp., Ideskco Corp., Ultrak, Inc., and Vikonics, Inc. Simple stand-alone keypads, hooked directly to an electric door lock, may cost less than $200 for all the necessary hardware. More sophisticated keypad systems that may be part of a network of keypads can cost from $1,200 to several thousand dollars. Turnstiles and revolving doors are access barriers that can be installed to continuously control and monitor every individual entering or exiting a building. Whereas revolving doors are most often deployed to control the entry to a building from the street, turnstiles are usually set within the lobby of a building. There are a variety of different models of turnstiles that use different technologies. The traditional physical barrier turnstile is the type used in many large business facilities, amusement parks, stadiums, and subway systems. A metal bar is locked into a blocking position to prevent anyone who has not been authorized via some form of identity verification or form of payment, such as a token, from walking through the passageway. When authorization is granted, the bar is released and then relocked until the next person is granted access. An optical turnstile can enable complete control of access to a facility without using a physical barrier. It uses a smart card, proximity card, or magnetic swipe card system, infrared sensors, and an intelligent control unit to detect and count persons walking through a lane or passageway. Access is granted to only one person per card, thus discouraging tailgating. If a person walks through the passageway without authorization, an alarm is generated. Optical turnstiles are easy to use and are almost transparent to users. Visual or audio indications are given to the user to indicate various functions such as the open/closed status of the lane, whether the user is authorized to pass through the lane, or whether an unauthorized access has been attempted. All activity—including card presentations, resets, unauthorized card presentations, alarms, and access attempts—can be monitored and logged by the system controlling the turnstiles. Because these turnstiles function automatically, they only need monitoring by a guard for illegal access attempts or to change lane directions at, for example, different times of the working day. Like turnstiles, security revolving doors are used to control access to buildings by a card reader verification system, but this technology is usually installed at points of entry from the street.
Security revolving doors use either ultrasonic or weight sensors to detect unauthorized access such as piggybacking, where two people try to go through the door at the same time in the same door section, and tailgating, where a person tries to go through the door at the same time as an authorized person in a different section. In the event of an unauthorized access, the door will be reversed so that the unauthorized person remains on the proper side of the door. Security revolving doors can come equipped with voice annunciators that warn unauthorized individuals to exit the revolving door and can cause the door direction to reverse and force the intruder out. Turnstiles can detect and accurately report two people walking one behind the other, very close to each other, as long as they are ¼ inch apart. They can also detect people trying to defeat the turnstile by crawling through or rolling through on a cart. Turnstiles cannot normally detect two people walking side-by-side in lockstep, but turnstile lanes are made narrow enough that this is impractical. Security revolving doors can increase security by detecting and stopping two or more people trying to pass through the door simultaneously. When the scanning system detects unauthorized passage, the doors come to a controlled stop and then slowly reverse, thus keeping the violator from passing through. Violations can be logged and reported. Optical turnstiles can have a traffic flow rate as high as 30 people per minute, or 1,800 people per hour, per walkway. Most revolving door systems are capable of processing almost 1,000 passages per hour in either direction. Turnstiles with barrier arms are equipped with safety sensors on either side of the barrier arm, so that if someone tries to run through the turnstile as the barriers are closing, the barriers will react quickly and retract. Revolving doors have a number of built-in safeties that prevent people from being locked in or stuck in the door. They can be operated manually in case of a power failure. When, for whatever reason, one of the doors jams, the other door will turn to an open position. They are also equipped with an emergency button to stop the door at any desired moment. In addition, the door wings are collapsible, creating a wide and safe escape route in an emergency. Only when the collapsed door wing has been manually returned to the proper position will the door again revolve automatically. Turnstiles and revolving doors are both very user friendly. They are unobtrusive and aesthetically pleasing and are effective traffic lanes through which employees can pass with safety and security. Turnstile vendors include Smarter Security Systems Inc., Magnetic Autocontrol Corp., Designed Security Inc., and Gunnebo Omega, Inc. Revolving door vendors include SafeSec Corporation, Horton Automatics, and Boon Edam. Optical turnstiles can be purchased for about $43,000 per portal with a card reader. Individual optical-free barrier turnstiles without readers can cost from about $1,000 to $5,000. Revolving doors can cost anywhere from $20,000 to $30,000. Detection systems provide a second layer of security. X-ray machines, metal detectors, and explosive detectors can be strategically deployed at entry control points to screen individuals and their belongings for hidden firearms, explosives, and other potentially injurious objects as they clear the access control system.
X-ray scanners use technology that exposes a person or object to electromagnetic waves (x-rays), allowing distinct structures to be viewed within the person or object. Due to their differing material compositions, items such as metal knives, plastic weapons, and explosive substances will be displayed differently on a monitor. (This is similar to a medical diagnostic x-ray system that differentiates between bone and organs.) Based on the images displayed on the monitor, a human operator can then determine whether an item of interest warrants further investigation. There are four primary technologies currently used in x-ray scanning systems for weapons and chemical detection: 1. Transmission: An x-ray scanner uses only a single x-ray beam, in which the portion of the beam that penetrates the object under investigation is detected and used to produce the x-ray image. Because materials have different densities and compositions, the x-rays allow distinct structures, particularly metal items, to be viewed within an object. 2. Backscatter: Objects are detected based on the images produced from reflected x-rays. As a result, plastic weapons, explosives, and drugs appear bright white on a display monitor. 3. Multi-view (or dual-view): The object under investigation is examined by two x-ray beams coming in at different angles. 4. Computed Tomography (CT): Known to most people as CAT scanning, this is the same technology used in hospitals to look deep inside the human body. CT has been adapted for security applications and is used in airports to scan checked baggage. Transmission x-ray images are taken at many different angles through an object and are put together to produce a three-dimensional image of the object. This allows explosives to be specifically identified and discriminated from other similar, yet harmless, materials. Different x-ray scanning systems have been developed to examine baggage, mail, vehicles, and individuals. Large amounts of mail or cargo can be examined by a fixed system that can scan an entire pallet of cargo for suspicious items. Larger x-ray systems the size of a truck or an entire building allow vehicles to be examined. Body scanning devices detect contraband hidden on a person by utilizing low-power x-rays to see through clothing, penetrating only a few millimeters below the skin. The four x-ray technologies have different levels of effectiveness in detecting various items. Persons familiar with the exact construction of a particular x-ray system could pack a bag to make a threat item difficult to recognize. Accordingly, it has been proposed that a combination of technologies working in unison could significantly improve the detection ability of screeners. Transmission technology reveals fine details, such as bomb components, and exposes situations where an attempt to camouflage or shield an object has been made. Its strength lies in detecting metallic objects such as conventional knives and firearms, but it may be difficult to separate the image of one object from another. Although backscatter technology is not as effective as transmission technology in identifying metals, it is more effective in detecting explosives, composite weapons, and organic materials such as plastics and drugs. A dual-view system provides two different views of each item, allowing an even clearer view of camouflaged or cluttered items. The CT technique provides maximum sensitivity and accuracy for detecting and identifying materials. 
Unlike some metal detectors that can be rendered ineffective by demagnetization, x-ray scanners are not sensitive to their surroundings. Virtually no clearance is needed around the equipment except for space for an operator to sit or stand at the controls. However, the size of the actual equipment may be a factor in effective performance (for example, a truck-sized scanner may present a space limitation for an average-sized federal building). The throughput of x-ray scanning equipment depends on two things: the amount of clutter in a bag or on a person and the efficiency of the operator. Clutter occurs where several dark items are grouped together in an x-ray image, so that the actual size and shape of each item cannot be reasonably detected. The performance of x-ray screening systems is closely linked with the performance of their operators. Operators assist with the placement of items to be scanned, work the controls, view the monitor, make judgments regarding each scanned item, and perform any needed manual searches. X-ray scanning equipment only provides an operator the tools to examine persons, baggage, or vehicles; it does not identify weapons or explosives for the operator. It is up to the operator to identify the items of interest from the x-ray image. Hence, adequate training of the operators to properly identify weapons and explosives is paramount to the performance of an x-ray screening system. Initial training is typically provided by the vendor, but the practice and experience of the operator is an important factor. Personal safety issues have been raised, particularly concerns about the exposure to radiation from x-rays. In the unlikely event that a person is exposed to radiation from x-ray equipment used for baggage inspection, studies have shown that this small amount is comparable to that received during an extended air flight. Additionally, research has found that body scanning systems use a very low energy level that is considered safe. Nonetheless, many people find any exposure to x-rays objectionable. Concerns about the safety of exposing food to x-ray scanners continue to surface, although in 1989 the World Health Organization released a report that supports the safety of food that has passed through an x-ray device used for cargo. Additionally, with the advancement of x-ray technology to search baggage for explosives, some individuals continue to be wary of allowing camera film to pass through scanners that use higher-power x-rays that could damage film. New body-scanning equipment used to detect contraband is capable of projecting an image of a passenger’s naked body. The use of this equipment may be considered intrusive and raises concerns that a person’s privacy would be violated. Vendors include American Science and Engineering (AS&E), PerkinElmer, Heimann Systems, and Rapiscan. X-ray scanning devices sized for the detection of materials in baggage range from about $14,000 to $90,000. Equipment used to scan large volumes of cargo can range from around $35,000 to $120,000. Devices for the inspection of trucks and vehicles range from about $1.7 million to $3.7 million. Body scanners cost about $100,000. Regardless of the function, scanning devices using multiple x-ray technologies (typically a combination of transmission and backscatter) are generally found in the upper end of the price range. Single-technology devices tend to fall in the lower end, with the exception of CT scanning equipment, which costs about $1 million per unit.
Metal detectors are typically used as a physical security mechanism to locate concealed metallic weapons on a person seeking access to secure areas. When the detector senses a questionable item or material, an alarm signal (either a noise, a light, or both) is produced. Because metal detectors cannot distinguish between, for example, a large metal belt buckle and a metal gun, trained operators are essential to the deployment of metal detectors. A metal detector senses changes to an electromagnetic field generated by the detector itself. The generated field causes metallic (or other electrically conductive) objects in the proximity to produce their own distinct magnetic fields. The size, shape, electrical conductivity, and magnetic properties of an object are the significant factors used by metal detection technologies to distinguish metal from other detected objects and materials. Two types of metal detection equipment are commonly used for access control: portal (walk-through) and handheld detectors. Portal detectors are stand-alone structures resembling a deep door frame. Conventional portal detectors alert an operator when metal objects have passed through the portal but do not indicate the location of the metal objects. However, some of the newer portal systems use a light bar that is located along the side of the portal to pinpoint zones of the body where the metal objects are detected. After a person who has passed through a portal system has set off an alarm signal, an operator will typically use a handheld metal detector to more accurately locate the object that caused the alarm. These devices are battery-operated and lightweight, allowing the operator to move the wand end of the device around (and within a few inches of) the person’s body. When an irregularity in the magnetic field is identified, the handheld device typically emits a loud noise. The operator is then responsible for judging whether the intensity of the signal warrants further investigation. Metal detectors are considered a mature technology that can accurately detect the presence of most types of firearms and knives. However, they are typically not accurate when used on objects that contain a large number of different materials (such as purses, briefcases, and suitcases). Government security officials have also reported frequent false alarms and incomplete follow-up scans by security personnel. Both the portal and handheld metal detectors are designed for use in close proximity situations. Portal metal detectors are extremely sensitive to interference from conflicting signals of nearby objects. As such, their effectiveness can be easily degraded by a poor location (directly under fluorescent lights or metal air ducts), the nearby use of electromagnetic equipment (such as an elevator), movement from one location to another, and even the placement of a nearby metal trash can. The initial calibrations are generally made by the vendor when the detector is installed. However, facilities often must make adjustments based on results gained through use and their particular security requirements, which determine levels of equipment sensitivities. Unlike portal metal detectors, handheld metal detectors are not nearly as sensitive to surrounding metal objects. However, the performance of portal metal detectors tends to vary on a daily basis and requires frequent adjustment. A successful metal detection system depends on well-trained and motivated operators.
Typically, an effective operator should be able to process between 15 and 25 people per minute through a portal detector. (This does not include investigation of alarms or other delays.) Traffic flow is generally driven by three factors: the number of devices, the rate at which individuals arrive, and the motivation of individuals to cooperate with the established procedures. Cooperative individuals can typically be scanned with a handheld detector in about 30 seconds. Some people, particularly those with certain medical devices such as pacemakers and implantable cardioverter/defibrillators, fear the possible side effects of being subjected to the magnetic field of metal detectors. Because metal detectors emit an extremely weak magnetic field, interactions with walk-through and handheld devices are unlikely to cause clinically significant symptoms. Nevertheless, in 1998 the U.S. Food and Drug Administration began working to address these concerns with both the manufacturers of medical devices and the manufacturers of metal detectors. Additional issues have been raised regarding the use of handheld metal detectors. Because these devices are passed very closely over the body of individuals who have been selected for further screening, they can be perceived as potential tools for harassment and intimidation. Men wearing turbans and women in undergarments with metal components are examples of two cases that have caused concerns related to discrimination and privacy. There are a number of vendors, including CEIA, Control Screening, LLC, Garrett Metal Detectors, Heimann Systems, Ranger, and Rapiscan. Portal metal detectors vary widely in price, ranging from about $1,000 to about $30,000. Models in the higher price ranges offer enhanced capabilities, while the lower-range devices may have limited sensitivity and detection capabilities. Most handheld metal detectors on the market range from about $20 to about $350. As with the portal detectors, capabilities increase along with the price. Several different technologies are currently used to detect explosives: trace detection, quadrupole resonance analysis, and x-ray scanning machines. The most widely used technology is trace detection, which uses ion mobility spectrometry (IMS) to detect and identify both trace particles and vapors of explosives, narcotics, chemical warfare agents, and toxic industrial chemicals. Trace explosive detection systems can detect a trace of chemicals used in explosives as small as a millionth of a gram. Trace explosive detection equipment comes in a variety of sizes, depending on whether it is to be used to detect chemicals concealed on individuals, in containers, packages, or in or under vehicles. The handheld explosive detection unit can be used almost anywhere. The device, which is small and lightweight, is capable of detecting over 30 substances in seconds. Tabletop units are becoming common for the detection of explosives concealed in baggage. For these units, which also use IMS technology, security personnel rub the outside of a bag, such as a lock or handle or zipper, with a cotton swab and then insert the swab into a machine that heats the swab, turning the sample into vapors. The unit alerts the operator to the presence of any explosive traces that warrant further examination. Some systems create different sounds to indicate the relative density of the contraband detected and indicate probable drug or gun type materials. 
Portal explosive detection units take in the air from around the subject as he or she walks through to check for explosive residue. When explosives are detected, the system sets off a visual and audible alarm and lists the material identified. It can detect organic and inorganic contraband on the body and clothing. Quadrupole resonance analysis is another type of technology used to detect explosives. Similar to magnetic resonance imaging (MRI) used in hospitals, this technology is typically used to scan belongings and baggage. These units resemble x-ray machines used for the same purpose. X-ray machines can also be used to detect explosives and are available to scan belongings, people, or moving and stationary vehicles. While the technology is capable of detecting most military and commercially available explosives—including TNT, plastic explosives, high-vapor explosives, and chemical warfare agents—most devices are designed to detect only a subset. Others have slow processing rates for larger items. As with other technologies, explosive detection equipment also has a small percentage of false alarms. All explosive detection systems have specific sampling guidelines for specific applications. This is important because some systems rely almost entirely on the skills of the operators. Handheld detection devices are lightweight and ready to operate within 1 minute from the time they are turned on. They are easy to use and provide readings within seconds. The use of these devices near idling cars has been shown to cause interference and require frequent recalibrations. Tabletop trace detection units are self-calibrating and also provide readings within seconds. Baggage x-ray machines also provide rapid readings and can process an average of about 550 to 800 bags per hour. Portals are capable of processing seven passengers per minute. Vehicle screening detectors take approximately 1 minute. Explosive detection units are noninvasive and carry no health concerns. The following vendors appear on the GSA schedule: Ion Track, Barringer Instruments Inc., SAIC, Raytheon, InVision Technologies Inc., L-3 Communications, Scintrex Trace Corporation, and Rapiscan. A handheld device can cost between $20,000 and $45,000. A tabletop detection device can cost from $20,000 to $65,000. A portal system can cost from $80,000 to $400,000. The largest baggage x-ray units are priced from $110,000 to $1.3 million. The medium-size x-ray units for smaller packages range from $100,000 to $235,000. Stand-alone units for personal belongings are priced from $30,000 to $50,000. Intrusion detection systems alert security staff to potential security incidents that require a response. These systems are designed to identify penetrations into buildings through vulnerable perimeter barriers such as doors, windows, roofs, and walls. These systems use highly sensitive sensors that can detect an unauthorized entry or attempted entry through the phenomena of motion, vibrations, heat, or sound. Closed circuit television (CCTV) is an integral part of intrusion detection systems. These systems enable security personnel to monitor activity throughout a building. Intrusion detection technologies can also be interfaced with the CCTV system to alert security staff to potential incidents requiring monitoring. When an intrusion is sensed, a control panel to which the sensors are connected transmits a signal to a central response area, which is continually monitored by security personnel.
The sensor-detected incident will alert security personnel to the incident and where it is occurring. By interfacing these technologies, security personnel can initially assess sensor-detected security events before determining how to react appropriately. CCTV is a visual surveillance technology designed for monitoring a variety of environments and activities. CCTV systems typically involve a dedicated communications link between cameras and monitors. Digital camera and storage technologies are rapidly replacing traditional analog systems. CCTV provides real-time or recorded surveillance information to help in detecting and reacting to security incidents. A CCTV system can also be used to prevent security breaches by allowing remotely stationed security personnel to monitor access control systems at entry points to secure areas. Other advantages to using CCTV include deterring criminal activity, promoting a safe and secure work environment, enhancing the effectiveness of security personnel, discouraging trespassing, providing video evidence of activities occurring within the area, and reducing civil liability. A CCTV system involves a linked system of cameras able to be viewed and operated from a control room. Cameras come in two configurations: fixed mode or pan-tilt-zoom mode. In pan-tilt-zoom mode they can either automatically scan back and forth or be controlled by an operator to focus on particular parts of a scene. Some systems may involve more sophisticated technologies such as night vision, computer-assisted operation, and motion detection systems. A camera that is integrated with a motion detection system would, for example, enable alerted security staff to remotely investigate potential security incidents from a central control center (a simple sketch of such motion detection follows below). Other sophisticated CCTV systems incorporate technologies that make possible features such as the multiple recording of many cameras, almost real-time pictures over telephone lines, low-light cameras, 360-degree-view cameras, the switching of hundreds of cameras from many separate control positions to monitors, immediate full-color prints in seconds from a camera or recording, and the replacement of manual controls by simply touching a screen. CCTV is also sometimes used to capture images for a facial recognition biometric system. The clarity of the pictures and feed is often excellent, with many systems being able to recognize a cigarette packet at a hundred meters. The more expensive and advanced camera systems can often work in pitch darkness, bringing images up to daylight level. However, CCTV systems are not considered to be suitable for high-security areas that require security staff to be present at entry control points. Also, inattention to monitors by security personnel, as discussed below, is a common problem. The biggest problem concerning CCTV is proper installation. Since cameras vary in size, light sensitivity, resolution, type, and power, it is essential to understand the target area before procuring a camera. Important aspects to be considered are lighting, environment, and mounting options. Because insufficient attention is often paid to all of these aspects before products are selected and installed, many CCTV systems do not work properly. The importance of proper lighting is reflected in the Defense Protective Service’s installation of 98 percent of its CCTV cameras in well-lit areas.
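The motion detection referred to above is often implemented by comparing successive video frames. A minimal Python sketch follows; the thresholds are illustrative assumptions, and real video motion detectors add filtering for object size and speed:

import numpy as np

def motion_detected(prev_frame, frame, pixel_thresh=25, area_thresh=0.01):
    # Fraction of pixels whose grayscale value changed noticeably
    # between two frames; motion is declared when enough pixels move.
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))
    return float(np.mean(diff > pixel_thresh)) > area_thresh

# In a capture loop, recording and an operator alert would be
# triggered only while motion_detected() returns True, which greatly
# reduces the volume of video that must later be reviewed.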
While CCTV can be used to supplement and reinforce security staff, using CCTV as an active surveillance tool is often not effective. Studies have shown that because monitoring video screens is both boring and mesmerizing, the attention span of a person watching and assessing a CCTV monitor degrades below acceptable levels after 20 minutes. CCTV is more effective when used, for example, at control points to actively allow or disallow individuals through a particular door on the basis of the security staff’s recognition of the CCTV image of the individual. Most CCTV systems have all their connected cameras record continuously. The result is an abundance of video material that must be manually reviewed if an incident that cannot be narrowed down to a particular time is being investigated. However, by using cameras that are triggered to turn on by the occurrence of motion within their field of view, the amount of video that is recorded is greatly reduced, facilitating faster searches. Whereas analog storage is space consuming and labor intensive, digital technology allows large amounts of data to be captured, compressed, recorded, and automatically stored and managed so that recorded events can be tracked and located by date and time. CCTV has raised much concern over privacy issues. Apprehensions are generally based on a fear that CCTV will be used for purposes other than those for which it was intended. Examples of these concerns are that CCTV systems may be used to monitor an individual’s actions in real time or over a period of time; may be used by employers to monitor employees’ performance, including when they arrive and leave work; may enable security personnel to indulge in voyeurism by especially focusing on attractive individuals; and may be used to arbitrarily monitor individuals of a particular race or ethnic background. Apprehensions such as these have hindered organizations from exploiting the full potential of CCTV toward enhancing security. The Capitol Police, for example, does not plan to install many more cameras in its internal spaces because of the sensitivity of its members to internal surveillance. The GSA schedule lists the following CCTV vendors: Panasonic Security Systems Group, Extreme CCTV Inc., Ultrak Inc., and Silent Witness Enterprises Ltd. A fully integrated CCTV system for physical access surveillance can cost from $10,000 to about $200,000, depending on the size of the entrance and the degree of surveillance required for monitoring the area. For additional CCTV equipment, cameras can cost about $125 to $500. Cameras with advanced technological features can cost up to $2,300. Monitors can cost between $125 and about $1,000. Recorders can cost between $400 and $2,700, and a video control system (remote controller and accessories) between $3,000 and $12,000. Electronic intrusion detection systems are designed to detect penetrations into secured areas through vulnerable perimeter barriers such as walls, roofs, doors, and windows. Detection is usually reported by an intrusion sensor and announced by an alarm (typically to a central response area). The intrusion alarm must then be followed by an assessment to determine the proper response. CCTV is typically used in internal assessments to determine the validity of the alarm. A variety of technologies have been developed for the detection of intrusions: Line sensors use cables that are either placed above ground or buried in the ground.
When positioned just outside a building wall, they can detect both prowlers and tunneling activity. Some lines are sensitive to magnetic or electric disturbances that are transmitted through the ground to the sensing elements, while others respond to changes in pressure from an intruder’s footstep or vehicle. Video motion detectors transform the viewing-only ability of CCTV cameras into a tracking and alarm system. By monitoring the video signals, the sensors detect changes caused by the movement of an object within the video’s field of view. Sometimes only a portion of the total field of view is monitored for motion. The size of the moving object or its speed (for example, blowing debris or a flying bird) can sometimes be used to distinguish a person from other objects in motion. Balanced magnetic switches are an extension of the conventional magnetic switch used on doors and windows in a home security system and are widely used to indicate whether a door is open or closed. Conventional magnetic switches can be defeated by placing a steel plate or magnet over the switch, allowing the door to be opened while keeping the switch closed. Balanced magnetic switches activate an alarm if this defeat tactic is used. Sonic and vibration sensors detect intrusion indicators such as the sound and movements of breaking glass or wood at windows and walls. Because they are typically used in rooms during timeframes when legitimate access is not expected, these sensors can also be used to detect the motion of a person walking into or within a designated area. While changes in sound waves are typically detected by sonic sensors, vibrations are typically detected by the use of microwave radiation or infrared (IR) light (both of which are invisible to the naked eye). Microwave sensors generate a detection zone by sending out a continuous field of microwave energy. Intruders entering the detection zone cause a change in this field, triggering an alarm. IR technology operates in one of two ways: 1. Active IR sensors inject infrared rays into the environment to detect changes. They generate an alarm when the IR light beam (similar to that used in a TV remote controller) is broken. Multiple active IR beams are often used at gates and doors to create a web of rays that make the system more difficult to penetrate. 2. Passive IR sensors, also known as pyroelectric sensors, operate on the fact that all humans (and animals) generate IR radiation according to their body temperatures. Humans, having a skin temperature of around 93°F, generate IR energy with a wavelength between 9 and 10 micrometers (a worked calculation follows below). Passive IR sensors are therefore typically set to detect a range of 7 to 14 micrometers. Sensor technology has been relied on for many years as an effective countermeasure to security breaches. However, this technology is susceptible to nuisance alarms or false alarms not caused by intruders. Depending on the technology used, disturbances that contribute to nuisance alarms can be generated by animals, blowing debris, lightning, water, and nearby train or truck traffic. Nuisance alarms can be mitigated by adjusting a sensor’s sensitivity level and by careful routing of signal cables. Because these intrusion detection systems operate on electricity, any disturbance in the electrical power will affect their performance. Special design considerations must be given to the routing and protection of power and signal cables to prevent exposure to tampering and environmental wear and tear.
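The 9-to-10-micrometer figure cited for passive IR sensing follows from Wien’s displacement law, which gives the peak emission wavelength of a body at absolute temperature T (the constant b is approximately 2898 μm·K). Converting the 93°F skin temperature and applying the law:

T = (93 - 32) \times \tfrac{5}{9} + 273.15 \approx 307\ \text{K}, \qquad
\lambda_{\max} = \frac{b}{T} \approx \frac{2898\ \mu\text{m}\cdot\text{K}}{307\ \text{K}} \approx 9.4\ \mu\text{m}

This peak falls squarely within the 7-to-14-micrometer band these sensors are typically set to detect.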
Careful placement of sensors is also critical to their success. Some vibration sensors should not be mounted directly on window glass, as the mounting adhesive may not be designed to withstand long exposures to heat, cold, and condensation. Because passive IR sensors detect changes in temperature, their sensitivity would decrease if placed in rooms that would approach the same temperature as the human body. Manufacturers’ specifications for each sensor technology should be heeded to ensure maximum performance. Doors and windows that have been equipped with intrusion detection devices cannot be propped open for circulation of fresh air. A building with a large number of windows cannot be fully secured with an intrusion detection sensor unless all windows are equipped with the devices. For the technologies discussed above, the National Institute of Justice’s Perimeter Security Sensor Technologies Handbook lists the following vendors: ADT Security Systems, Advantor, DAQ Electronics, Detection Systems, Inc., GYYR, Microwave Sensors, Millennium Sensors, Presearch, Safeguards Technologies, Scantronic, Senstar, South West Microwave, Stellar Security Products, Vindicator, and Visonic Ltd. Line sensor cables range from about $300 to $750 for 100 meters. Line sensor detection systems are available for about $1,000. Video motion detector cameras range from about $150 to $1,500. Balanced magnetic switches range from about $100 to $289. Simple microwave sensors are available for about $30, while comprehensive microwave detection systems range from about $400 to $1,000.
The terrorist attacks of September 11 have heightened concerns about the physical security of federal buildings and the need to protect those who work in and visit these facilities. These concerns have been underscored by reports of long-standing vulnerabilities, including weak controls over building access. There are several commercially available security technologies that can be deployed, ranging from turnstiles to smart cards to biometric systems. Although many of these technologies can provide highly effective technical controls, the overall security of a federal building will depend on robust risk management processes and on implementing the three integral concepts of a holistic security process: protection, detection, and reaction.
OWCP is responsible for adjudicating and administering claims of work-related injuries and illnesses as authorized by the Federal Employees’ Compensation Act (FECA) (5 U.S.C. 8101 et seq., as amended). The FECA program covers nearly 3 million active duty civilian federal employees, providing benefits worldwide to those it determines have sustained an injury or illness in the performance of duty. During fiscal year 1999, FECA’s costs totaled about $1.9 billion in compensation, medical, and death benefits, and federal employees filed about 167,000 injury notices. At the end of fiscal year 1999, OWCP was administering about 243,000 ongoing injury cases, including cases from previous years, for partial or total disability. According to OWCP officials, they receive an estimated 2.6 million phone calls and 5.5 million pieces of mail each year from customers—claimants, medical providers, agencies, and others. Some mail requires a response—for example, congressional inquiries on behalf of constituents. However, district office officials said they believed that most of the mail does not require a response. For example, medical reports are used to assist claims examiners in adjudicating cases but do not usually require a response. Although OWCP did not know what proportion of its mail requires a response, district office officials’ estimates ranged from 1 percent to 7 percent. The telephone calls and written correspondence are handled primarily by 12 OWCP district offices nationwide, which had a total of about 900 employees as of December 1, 1999. These district offices operate under the authority and guidance of OWCP headquarters and are responsible for adjudicating claims from injured workers, approving wage loss claims, paying medical bills, and responding to inquiries from customers. Each district office is responsible for providing services to claimants living in several states. OWCP has taken a number of actions intended to improve its communications with customers, including: expanding automated voice response systems to allow pharmacy staff to verify claimants’ eligibility and the amounts of drug payments authorized; beginning the process of converting incoming medical bills and other correspondence to a computerized format to make the information available to district office representatives via their computer terminals, enabling them to answer more queries during initial calls; giving federal agencies, unions, and congressional staff direct computer access to information they need to deal with their employees’ or constituents’ cases; and initiating a communications redesign project last year—which included establishing a redesign team composed of union members and management to propose standards, reengineer practices, and make other improvements in OWCP’s communications—and investigating best practices in public and private organizations. OWCP has also taken actions when its monitoring systems have indicated that district offices have failed to meet goals for responsiveness to telephone inquiries. For example, when district offices failed to meet goals for responding to telephone inquiries for one or more quarters of a fiscal year, OWCP’s national office counseled the district directors and required plans for improvement. In addition, OWCP’s budget request for fiscal year 2001 requested funding for a toll-free 800 telephone number for medical authorizations, for telephone system hardware upgrades, for additional communication specialists, and for expanded access to automated information for injured workers.
As of September 22, 2000, the House and Senate appropriations committees for OWCP had decided not to fund this request. Although NPR has stated that the level of service a customer receives should not vary significantly across an organization, we found that service levels varied widely for those attempting to reach OWCP representatives by phone. As figure 1 shows, the extent to which we were unable to access district offices’ telephone systems on our 2,400 calls—that is, where there was a busy signal, no answer after 1 minute, or a message erroneously stating that the phone number was invalid—ranged from 0 percent in Boston to 54 percent in Jacksonville. An official told us that the Washington, D.C., office had purchased an additional eight telephone lines in late July 2000 and that he believes this will increase the system’s accessibility. We also found that our ability to speak to an OWCP employee varied significantly across districts. Of the 2,400 calls we made, 1,200 calls were to either an office phone number designated for contacting an employee or a central phone number that gives callers an option for contacting a representative. As figure 2 shows, the rates at which we were unable to reach any employee within 5 minutes ranged from 13 percent to 97 percent of the calls. In three offices—Jacksonville, Dallas, and New York—we were unable to access an employee on 97, 86, and 80 percent of the calls, respectively. OWCP officials said that they may assist offices with the lowest telephone access rates by giving them part of another office’s staff allocation. When we made our 2,400 telephone calls, we also attempted on 1,200 of those calls to compare the information on actual injured workers’ claims provided to us by OWCP headquarters officials with the same information available on that claimant through district offices’ telephone systems. For example, if OWCP headquarters told us that claimant Mary Smith was mailed a compensation check of $550, would the district office where Mary’s claim was handled provide us with this same information? We did not include 604 of the 1,200 calls in our analysis: 138 calls where we could not access the phone system for various reasons (e.g., busy signal); 43 calls where we could access the phone system but not the interactive voice response system for various reasons (e.g., claim number was different from that provided by headquarters); and 423 calls where we could not compare the information because it had been updated after OWCP headquarters provided it to us. For the remaining 596 calls, the extent to which district offices provided us with consistent claims information ranged from 88 percent to 100 percent. (See appendix II for information on the accuracy of each district office’s interactive voice system.) Most of the inconsistent information involved the dates and amounts of claimants’ compensation checks. Other communication practices also varied significantly across district offices: The Dallas office, unlike most others, used e-mail for medical authorizations, congressional contacts, and general inquiries. The four other district offices we visited did not use e-mail because of Privacy Act concerns. The national office and four district offices have taken steps to provide customers information through the Internet, while others have not. The Internet-linked offices have established a World Wide Web page to provide information about the workers’ compensation program and the district offices’ procedures and practices.
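The exclusions account for the analysis base as follows:

138 + 43 + 423 = 604 \quad\text{excluded calls}, \qquad 1{,}200 - 604 = 596 \quad\text{calls analyzed}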
Most district offices had representatives available to answer the phone 7 or more hours per day, but three offices—New York, Philadelphia, and Boston—were available by phone 6 hours or less per day, and one of these offices—Boston—had representatives available for only 4.5 hours. OWCP had not set any goals in some important areas of telephone communications, and the goals it did set for telephone and written communications allowed OWCP more time to provide responses to customers than NPR suggests for telephone communications or than the other organizations we surveyed allowed. For example, NPR suggests the following telephone service goals: 99 percent of callers access the telephone system; 98 percent of callers reach a customer service representative, and the time waiting on line be no more than 30 seconds; and 85 percent of callers' inquiries should be resolved during the first call. These three basic goals focus on meeting callers' needs for timely and accurate information. The other agencies we contacted--SSA, VBA, and Ohio's BWC--varied in whether they established goals for these measures. Three had goals for telephone access, two had goals for the portion of callers reaching representatives and the time they have to wait on line, and one had a goal for resolving inquiries on the first call. OWCP had not set goals that conform to any of these three. OWCP did have a goal to return 90 percent of phone calls not related to medical authorizations within 3 days to persons who leave messages. That goal could be met by OWCP's calling the people within 3 days, giving them the status of their claims, and saying that the answer to their questions would follow at a later time. However, OWCP's Acting Director noted that this is the only response possible for many calls when OWCP lacks information, such as doctors' reports, needed to resolve the caller's inquiry. OWCP also had a separate goal established in fiscal year 1999 to return 95 percent of calls related to medical authorizations within 3 days. The Acting Director said that OWCP has not set a goal for the portion of callers reaching a representative because its district offices are not call centers whose employees' only responsibility is answering telephone calls. He explained that because district offices have many other responsibilities in addition to answering calls, some district offices prefer to direct most calls to voice mail and respond at a later time. Several district directors told us that, if there were such a goal, assigning additional employees to answer calls would take time away from their adjudication of claims. The scope of NPR's study did not include identifying what goals the private sector has for responding to written inquiries, as it did for telephone communications. Thus, we could not compare the goals that OWCP has established for the timeliness of written communications with an NPR-suggested standard. Nevertheless, for nonpriority mail requiring a response, OWCP had a goal of responding to 85 percent within 30 days. OWCP also had goals for responding to priority mail from Congress: 90 percent within 14 days and 98 percent within 30 days. These goals do not compare favorably to VBA's goal of responding to all written benefit inquiries within 10 workdays and Ohio's BWC's goal of responding to written requests the same day, or within 24 hours of the request's being referred to another section of the Bureau. OWCP also did not have a national goal for responding to requests for medical authorizations received in the mail, through e-mail, or by fax.
Nonetheless, the five district offices we visited gave written medical authorization requests received by mail the same priority that OWCP gave congressional correspondence. Several of these five offices have also established their own goals for medical authorizations received by e-mail or fax. For example, the Dallas district office encourages claimants to use e-mail for medical authorizations. Dallas had a goal of responding to 90 percent of e-mails within 24 hours. The Chicago district office received 95 percent of its medical authorization requests by telephone. The district office chose to use the goal for medical authorizations received by phone—95 percent within 3 days—for authorization requests received by fax. Often, OWCP did not collect credible performance data to gauge progress in attaining its goals. Credible performance information is essential for accurately assessing agencies' progress toward meeting existing goals and for setting new goals. Decisionmakers must have assurance that the program and financial data being used will produce complete, credible, useful, and consistent data in a timely manner if these data are to inform decisionmaking. OWCP's system for measuring its goal of 3 days for returning phone calls to those who left a message that required a response did not yield valid timeliness measurements. Valid measurement would require creating a record of either all such calls or a statistically valid sample of them. While all calls may not require a response, district office officials have stated that most callers have inquiries that require a response. The national office suggested--but did not require--that district employees use a standardized computer program (CA-110) to make a recording of all calls requiring a response, as well as of those in which the content is relevant to adjudicating decisions. OWCP did not require the use of this program for all calls because some district office employees have complained about taking time away from their other tasks to record the information, such as the date and nature of the call. We found that all 12 district offices used the CA-110 system to some extent. However, two of the five offices we visited—Dallas and Seattle—told us that they entered only about 15 percent or fewer of all calls requiring a response and did not enter calls in the systematic manner that would be necessary to yield valid results. The other three offices—Chicago, San Francisco, and Washington, D.C.—estimated that they entered about 75, 75, and 95 percent of the calls requiring a response, respectively. However, our analysis of the number of calls received and the number of calls recorded in the CA-110 system suggests that these estimates are high. For example, San Francisco estimated that it entered 75 percent, but for a 3-month period in fiscal year 2000, San Francisco received 76,238 calls and entered 14,502, or 19 percent, in the CA-110 system. Four other district offices were to follow a sampling plan approved by OWCP headquarters when recording information. However, the OWCP Acting Director said that each of these four offices had developed a modified version of the sampling plan and that their plans--while approved by OWCP--were probably not statistically valid. We reviewed the national office's sampling plan for the data to be entered into these systems and also believe that this plan would not yield statistically valid results even if implemented as designed.
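For contrast, a statistically valid design would give every call requiring a response a known chance of selection and would record enough calls to estimate the timeliness rate with a stated precision. The following Python sketch illustrates one such design; the call volume is San Francisco's reported 3-month figure, and the confidence level and margin of error are illustrative assumptions.

import math
import random

calls_received = 76238   # San Francisco's reported 3-month call volume
z = 1.96                 # 95 percent confidence level (assumed)
margin_of_error = 0.05   # estimate the rate within +/- 5 points (assumed)
p = 0.5                  # worst-case variability for a proportion

# Sample size for a proportion, with a finite population correction.
n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2
n = math.ceil(n0 / (1 + (n0 - 1) / calls_received))
print(f"Randomly record {n} of {calls_received:,} calls.")  # about 383 calls

# Every call has an equal chance of selection, unlike asking employees
# to log whichever calls they happen to choose.
sampled_call_ids = random.sample(range(1, calls_received + 1), n)

Under these assumptions, recording roughly 383 randomly selected calls would support a projectable estimate, whereas the ad hoc logging described above would not, regardless of how many calls were entered.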
The telephone response rates developed using the CA-110 and other systems indicated that the district offices were generally meeting their timeliness goals. However, because the performance data were not statistically valid or collected in enough cases, OWCP could not determine whether these goals were being met. We also found problems in the methods that OWCP used to measure timeliness goals for written correspondence that undermine the usefulness of the data. Each district office was required to take a statistical sample of incoming general, nonpriority correspondence, and then record and track the correspondence to determine whether it was responded to within 30 days. The results of these samples were to be reported to the national office on a quarterly basis. Four of the five district offices we visited each had a different approach for sampling such correspondence, and each stated that its approach was not scientifically developed or developed in a manner that would produce valid results if projected to the universe of all responses. For example, the Dallas district office required each claims examiner to provide his or her supervisor four pieces of written correspondence requiring a response per month to determine whether the response was provided within 30 days. We are concerned about the validity of this approach because, among other things, the potential exists for the claims examiner to give the supervisor only those letters to which the response was timely. The Seattle district office required claims examiners to log in all general correspondence received every Wednesday that required a response. Supervisors were to review the claimants' files for these letters after 30 days to determine whether a response had been sent within the 30-day goal. Although the performance data showed that this timeliness goal for fiscal year 1999 was generally met by all district offices, the results are probably not statistically reliable. Conversely, OWCP did seem to have a valid and reliable system for tracking responses to priority mail involving congressional requests. For example, the date of receipt of all congressional correspondence was to be recorded in the Priority Correspondence Tracking System. This system provides reports that track each piece of correspondence until a response is provided. As I said earlier, the third approach that NPR found model organizations followed was to continuously improve customer service by using performance data—including surveys of important customers and stakeholders—to identify how and where improvements are needed. We found that OWCP's efforts in this area fell well short of the best practices that NPR found in the private sector and in the three agencies we surveyed. Let me first state the obvious--OWCP did not measure progress toward goals that it did not establish in the first place. OWCP did not have goals—nor did it have related measures—for three basic areas of telephone communications: the percentage of callers able to access the telephone system, the percentage of callers who can reach a customer service representative and their time waiting on line, and the percentage of callers' inquiries that are resolved during the first call. Officials from SSA and VBA told us that setting goals for those areas (e.g., telephone access) for their agencies and measuring the results has proven useful in identifying areas where customer satisfaction levels needed improving.
For example, data from SSA's customer satisfaction surveys in 1993 showed that access was the single biggest factor affecting customer satisfaction. According to an SSA official, SSA began collecting access data and found that callers attempting to reach SSA at the busiest times were getting busy signals 50 percent of the time. SSA established a telephone access goal and continued to collect access data, explore new technologies, and acquire additional telephone capacity. By 1996, SSA said, it was able to set and achieve a goal of 95 percent of the callers reaching the system within 5 minutes, and customer satisfaction scores improved accordingly. My point in using this example is that without goals or measures in these three basic high-priority areas, OWCP was not in a position to know what levels of customer service it was providing and where and how telephone services needed to be improved. For the goals that OWCP did establish, we found that it often did not collect performance data from surveys of (1) injured workers, (2) medical providers, or (3) employees that were sufficiently reliable or timely to measure progress or to set goals for improving customer service. NPR found that model telephone service organizations in the private sector survey (1) their customers frequently to determine how satisfied they are with the services provided and (2) the employees who are answering the phones, both for job satisfaction levels and for ideas to improve services. In addition, Executive Order 12862, issued September 11, 1993, directs departments and agencies to survey customers to determine the kind and quality of services they want and their level of satisfaction with existing services. Every organization must decide how frequently it can survey its customers in a cost-effective manner. However, we found that OWCP (1) did not obtain information on injured workers' satisfaction with services as frequently and by utilizing as many techniques as other model organizations do; (2) did not survey other important stakeholders, such as medical providers and federal agencies; and (3) did not survey the OWCP employees who are answering the phones. OWCP did conduct customer satisfaction surveys of injured workers. Since 1996, OWCP has hired a contractor to conduct customer satisfaction surveys about once each year to determine claimants' perceptions of several aspects of the FECA program, including overall service, the timeliness of responses to telephone inquiries, and the timeliness, thoroughness, and accuracy of written responses to claimants' inquiries. The claimants are selected on a random sample basis. During these surveys, injured workers are asked to recall situations that occurred up to 1 year in the past. In contrast, other organizations survey their customers more frequently, both to measure satisfaction while experiences are still fresh and to gather requests for new services. Ohio's BWC, for example, surveys approximately 600 injured workers weekly by mail. The OWCP Acting Director said that OWCP does not conduct a customer satisfaction survey of medical providers because it had difficulty doing so in the past. That is, OWCP officials said they had previously attempted to survey medical providers; however, when they called, the representative in the medical provider's office often could not identify the individual who had previously contacted OWCP. Some district office directors and other officials cited outreach programs that, while not necessarily systematic, were an attempt to gain more input from a broader selection of customers.
For example, the Seattle district office, in May 2000, initiated a program to begin calling a sample of all telephone callers within the same week they called. The district director said that the office asked the callers whether the response was appropriate and whether the caller was satisfied with the representative's response. OWCP national office officials said that these efforts did not ensure that consistent questions were used or that a random sample of all district office customers was surveyed. OWCP also has not surveyed its own employees regarding customer service or employee satisfaction within the last 5 years. According to NPR, employee satisfaction is measured as routinely as customer satisfaction in model customer service organizations. NPR cited as benefits of such employee surveys obtaining information on how to improve the work processes that lead to improved customer service, as well as identifying employee morale issues that could lead to customer service problems. In addition, Executive Order 12862 directed agencies to survey front-line employees on barriers to, and ideas for, matching the best in business. Officials from Ohio's BWC told us that they have systems in place, such as an annual employee survey that captures information about employee morale. The NPR best practices we used for comparison also included related activities, such as establishing a dedicated customer relations team to receive complex customer issues and complaints and recording caller complaints for use in identifying the root causes of problems. While no organization would be expected to apply all of these best practices, we wanted to determine the extent to which OWCP was applying these practices in comparison to SSA, VBA, and Ohio's BWC. Consequently, we asked these organizations to characterize to what extent—"all," "some," or "none"—they followed the activities within each of these practices. Of the 20 best practices related to measuring performance, OWCP stated that two were not applicable because they applied to call center operations and OWCP does not have any call centers. Of the 18 that OWCP said were applicable, it reported that it performed "all" or "some" of 9, or 50 percent. This compares to SSA, VBA, and Ohio's BWC, which reported that they performed 20, 17, and 20 of all 20 NPR best practices (100, 85, and 100 percent), respectively. Of the 9 practices OWCP indicated it did not perform, SSA, VBA, and Ohio's BWC responded that they applied 8 of them to all or some extent. Examples of these 8 performance-measuring practices follow. Call monitoring: Senior managers regularly listen in on live calls in order to stay in touch with the customer. Team leaders participate in group monitoring sessions to ensure consistency of measurement. Accessible to customers: A customer feedback loop is built into every phase of the customer service delivery process. It is convenient and easy for customers to contact world-class organizations. Of the 68 best practices for improving customer service, OWCP said that 6 were not applicable because they applied to call centers. Of the 62 practices OWCP said were applicable, it reported that it generally followed 31, or 50 percent, to all or some extent. This compares to SSA, VBA, and Ohio's BWC, which reported that they followed 65, 51, and 66 of the 68 NPR best practices (96, 75, and 97 percent) to all or some extent, respectively.
Of the 31 practices OWCP indicated it did not perform, SSA, VBA, and Ohio's BWC each responded that they applied 18 of them to all or some extent. Examples of these 18 practices follow. Information queuing: Callers waiting in the queue are provided with information as to the expected length of delay, allowing them to choose whether to stay in queue or hang up. Resource allocation strategies: Continuous evaluations of key performance indicators help to ensure the appropriate alignment of resource allocations with planning objectives. Well-established benchmarking programs help identify improvement opportunities for cross-functional teams. OWCP has been concerned about the level of services that it provides to its customers—injured workers, medical service providers, and agencies. The annual surveys of injured workers that OWCP has contracted for have surfaced issues, like access to service representatives, that OWCP has taken actions to address. For instance, the installation of automated response systems in district offices has given customers an alternative means of getting answers to common questions. OWCP has also begun to set goals and use data about goal achievement to manage the customer service aspect of district office operations. By setting goals in the important areas that now lack them, considering whether existing goals for telephone and written communications can be made more exacting, and beginning to reliably measure both customer satisfaction and goal achievement, OWCP can lay the foundation for better serving its customers' needs for timely and accurate information. We recommend that the Secretary of Labor require the Director of OWCP to establish goals for all important areas of OWCP's telephone and written communications with injured workers and other customers and revise as appropriate existing goals to better ensure that customers' needs for accurate and timely information are met; collect credible performance data on progress toward these goals, including timely periodic surveys of injured workers', medical providers', and agencies' satisfaction with OWCP's services and surveys of OWCP employees to gauge their job satisfaction and to gather ideas on how to improve services; and use these performance data and survey results to identify areas needing improvement and to develop strategies for achieving those improvements, including new and revised goals where appropriate. OWCP's Acting Director said that he agreed with our recommendations and would continue to explore ways in which to improve customer communications. That concludes my statement, Mr. Chairman. I would be happy to answer any questions that you or other Members of the Subcommittee may have. For further contacts regarding this testimony, please contact Michael Brostek at (202) 512-9039 or Alan Stapleton at (202) 512-3418. Individuals making key contributions to this testimony included Jeanne Barger, Thomas Davies Jr., James Turkett, Michael Valle, and Cleofas Zapata Jr. To determine how OWCP communicates with injured federal workers, the agencies that employed these workers, and the medical and other service providers involved in their treatment, we performed the following audit steps. We placed 2,400 telephone calls to OWCP's 12 district offices (200 per office) to assess the accessibility of telephone representatives at each office and whether each office's automated interactive voice response (IVR) system data were consistent with the national office's claim data.
We generated listings for our callers that provided the dates, times, and phone numbers to be called and, when applicable, the claim information to be accessed. OWCP provided information about the breakdown of IVR call types for 4 of the 12 OWCP district offices (Chicago, Philadelphia, Kansas City, and Seattle) during a 2-month period. We used this information to build a nongeneralizable profile of the distribution of IVR calls across the following six types of claim information for each district office (number of calls per district office in parentheses): claimant calling about a bill payment (47), medical provider calling about a bill payment (8), medical provider calling about a periodic roll payment (24), claimant calling about a compensation check (10), claimant calling about physical therapy authorization (2), and medical provider calling about physical therapy authorization (9). For the IVR calls, we were provided with identifiers, such as claim numbers and employer identification numbers, that enabled us to enter the IVR and identify specific transactions on each office's automated database. We worked with OWCP to identify a time period of earlier transactions from the national database that would still exist in each district office's IVR system during our test period in June and July 2000. OWCP sampled records from its national database for this time period until it obtained the required number of records of each type for each district office. OWCP's selection of IVR cases did not strictly constitute a random sample of cases for each office from the specified time period, since the cases were selected in case number order, which generally reflected a chronological order, until the required number of cases was obtained. However, since we had no reason to believe that the time ordering of the cases was associated with whether a district office's IVR data would match the information in the national database, we accepted OWCP's selections for use in our test. All of our test calls attempted to reach an OWCP representative, either directly or after accessing the IVR system. We therefore randomly scheduled all our test calls only during the hours that each office told us a "live" representative should be available to respond to customer inquiries. Saturdays, Sundays, and Tuesday, July 4, 2000, were excluded from our test days. For district offices that had the same telephone number for the automated voice response system and for a representative, we called that number 200 times; we attempted to reach a representative immediately upon accessing the system on 100 of those calls and attempted to access claim information on the automated system and then reach a representative on the other 100 calls. For offices that had separate numbers for the automated voice response system and the representative, we called each number 100 times. As before, upon completion of the automated voice response system, we attempted to reach a representative. All representatives contacted were informed that they had participated in a GAO telephone survey. When attempting to access an OWCP office's telephone system, we let the telephone ring 15 times (over 1 minute) before making the determination that the telephone system did not answer. In waiting for a representative to answer the line, we waited at least 5 minutes after making the selection to speak to a representative before determining that the call was not answered.
We used the 5-minute time period because we wanted to be more conservative than VBA's goal of a caller's accessing a representative within 3 minutes and NPR's guidance and Ohio's BWC's goal of 30 seconds. For each call, we recorded how successful the district office was in providing services an injured worker or medical provider might desire. The information recorded included busy signals, no answer after 1 minute, whether we reached an office representative, whether we reached the automated voice response system, and whether consistent information was provided about the claimant by that system, such as the amount of a medical payment or the status of medical authorization for physical therapy. We conducted this telephone survey over a 6-week period in June and July 2000 and attempted the calls throughout the business hours listed for each district office. If the test calls we made were considered random samples of customers' telephone experiences during the test period, the following statements could be made about the precision of the estimates: Estimates of the proportion of calls in each district in which customers were unable to access the telephone systems, and the proportion of calls in each district in which customers were unable to reach an employee within 5 minutes, have sampling errors of no more than 10 percentage points. Estimates of the proportion of IVR calls with consistent data (among IVR calls for which transactions could be tested) have sampling errors of no more than 10 percentage points unless otherwise noted in table II.1. The data we collected, however, were from test calls rather than "actual" customer calls. Characteristics of the test calls that might affect the outcomes we measured, such as time of day, day of week, or subject matter, might not have mirrored the profile of these characteristics among "actual" customers' phone calls in any district during the test period. Therefore, the results displayed for individual districts might differ from the ones we might have obtained by sampling "actual" customer calls, by amounts larger than the stated sampling errors. At OWCP's headquarters in Washington, D.C., we interviewed knowledgeable officials, reviewed strategic and operational plans for fiscal years 1995 through 2000 to identify goals and measures related to responding to customer inquiries, and obtained communication reports showing data that would indicate OWCP's performance for the same period. We also discussed with these officials methods for testing the communications at OWCP district offices, and obtained claimant and medical provider information so that we could perform a telephone survey of the 12 district offices. We visited 5 of the 12 OWCP district offices—Chicago, Dallas, San Francisco, Seattle, and Washington, D.C. At these five offices, we conducted an in-depth review; we interviewed regional directors, district directors, claims managers, claims examiners, and workers' compensation assistants to gain an understanding of the communications at OWCP from various perspectives. We obtained available communication reports, accountability review reports, reports measuring written and telephone communication performance, and other applicable communication data. We selected the five offices based on several factors to obtain a mixture of offices of differing sizes and levels of performance.
The primary factors were (1) the number of employees and the number of cases managed at these offices, which ranged from among the lowest to among the highest of all the district offices; (2) OWCP's telephone and written responsiveness measures, which indicated that some offices had and some had not met national performance goals; and (3) the proximity of two of the offices to our Washington, D.C., and Dallas Regional Office staff. We surveyed the other seven district offices--Boston, Cleveland, Denver, Jacksonville, Kansas City, New York, and Philadelphia--via a questionnaire and obtained general information about their communication practices, such as the number of employees each assigned to respond to telephone calls. To compare OWCP's goals and practices for telephone communication with those of leading organizations, we also surveyed three agencies that have won awards for their telephone communication practices: the Social Security Administration, the Department of Veterans Affairs' Veterans Benefits Administration, and the state of Ohio's Bureau of Workers' Compensation (BWC). We asked OWCP and the three agencies to identify which of the 95 telephone "best practices" that NPR identified in a 1995 study they used. We did not attempt to verify or validate their responses. In addition, we interviewed and obtained documents on communication performance goals and practices from officials at the three organizations and compared OWCP's performance goals and measures with those of private sector organizations identified in the NPR study and those of the three organizations we surveyed. We did our work between January and September 2000 in accordance with generally accepted government auditing standards. [Table II.1, which reported the number (percentage) of IVR transactions with consistent data for each district office, is not reproduced here; office-level consistency ranged from 88 percent to 100 percent, and 561 of the 596 testable transactions (94 percent) were consistent overall. A footnote to the table indicated that some district offices had separate telephone numbers for accessing the automated interactive voice response system and accessing an OWCP representative, while the other district offices had only one main number.] An average for all OWCP district offices cannot be provided, because OWCP could not provide the total number of telephone calls each office received, and thus we could not weight the sample to accurately reflect the impact of each district office's performance on a national average.
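The sampling errors cited in this appendix follow from the standard formula for the precision of an estimated proportion. The following Python sketch shows the arithmetic, assuming a 95 percent confidence level and a 50 percent rate as the worst case; the call counts reflect the test design of 200 calls per office, half of which attempted to reach a representative.

import math

def sampling_error(n_calls, rate=0.5, z=1.96):
    """Half-width of a 95 percent confidence interval, in percentage points."""
    return 100 * z * math.sqrt(rate * (1 - rate) / n_calls)

for n in (200, 100):
    print(f"{n} calls -> sampling error of about "
          f"{sampling_error(n):.1f} percentage points")
# 200 calls -> about 6.9 points; 100 calls -> about 9.8 points,
# both within the 10-percentage-point bound stated above.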
This testimony discusses the Department of Labor's Office of Workers' Compensation Programs (OWCP). GAO reviewed how OWCP communicates with injured federal workers, the agencies that employ them, and the medical and other service providers who treat them. To evaluate OWCP's system, GAO used criteria suggested by the National Partnership for Reinventing Government (NPR). This testimony summarizes GAO's findings against the practices identified in NPR's study of private sector telephone customer service, which included: (1) setting challenging goals for meeting callers' needs for timely and accurate information; (2) collecting credible performance data to measure progress in attaining those goals; and (3) improving telephone service by using the performance data and the results of periodic surveys of customers and stakeholders to determine levels of satisfaction. GAO found that the level of customer service OWCP provided varied widely depending on where injured workers live. GAO made 2,400 telephone calls to OWCP's 12 district offices. To compare OWCP's goals and practices for telephone communication with those of model organizations, GAO surveyed three agencies that have won awards for their telephone communication practices: the Social Security Administration, the Department of Veterans Affairs' Veterans Benefits Administration, and Ohio's Bureau of Workers' Compensation.
VA's mission is to promote the health, welfare, and dignity of all veterans in recognition of their service to the nation by ensuring that they receive medical care, benefits, social support, and lasting memorials. According to information from the department, its employees maintain the largest integrated health care system in the nation for more than 5 million patients at more than 1,500 sites of care, provide compensation and pension benefits for nearly 4 million veterans and beneficiaries, and maintain nearly 3 million gravesites at 163 properties. Over time, the use of IT has become increasingly important to the department's efforts to provide these benefits and services to veterans; VA relies on its IT systems for medical information and records and for processing benefits claims, including compensation and pension and education benefits. Further, VA is increasingly expected to improve its service to veterans by sharing information with other departments, especially DOD. VA's fiscal year 2012 request for almost $3.2 billion in IT budget authority indicates the range of the department's IT activities. For example, the request includes: about $1.4 billion to operate and maintain existing infrastructure; approximately $650 million to develop new system capabilities to support, for example, faster compensation and pension claims processing, elimination of veteran homelessness, and improvement of veteran mental health; $68 million for information security activities; and $915 million to fund about 7,000 IT personnel. Our prior work has shown that success in managing IT depends, among other things, on having and using effective system development capabilities and having effective controls over information and systems. We have issued several products on VA in important management areas where the department faces challenges. My testimony today will briefly summarize these products. Historically, VA has experienced significant IT development and delivery difficulties. We recently reported on two important VA systems development projects. The first project expended an estimated $127 million without delivering any of the planned capabilities. VA has begun implementing capabilities from the second project, although we identified opportunities for improvement. To carry out VA's daily operations in providing care to veterans and their families, the department relies on an outpatient appointment scheduling system. However, according to the department, this scheduling system has had long-standing limitations that have impeded its effectiveness. Consequently, VA began work on a replacement system in 2000. However, after spending an estimated $127 million over 9 years, VA had not implemented any of the planned capabilities. VA's efforts to successfully complete the Scheduling Replacement Project were hindered by weaknesses in several key project management disciplines and a lack of effective oversight. Specifically:
● VA did not adequately plan its acquisition of the scheduling application and did not obtain the benefits of competition. The Federal Acquisition Regulation (FAR) required preparation of acquisition plans that must address how competition will be sought, promoted, and sustained. VA did not develop an acquisition plan until May 2005, about 4 years after the department first contracted for a new scheduling system. Further, VA did not promote competition in contracting for its scheduling system.
Instead, VA issued task orders against an existing contract that the department had in place for acquiring services such as printing, computer maintenance, and data entry. These weaknesses in VA's acquisition management reflected the inexperience of the department's personnel in administering major IT contracts. To address identified shortcomings, we recommended that VA ensure that future acquisition plans document how competition will be sought, promoted, and sustained.
● VA did not ensure that requirements were complete and sufficiently detailed. Effective, disciplined practices for defining requirements include analyzing requirements to ensure that they are complete, verifiable, and sufficiently detailed. For example, maintaining bidirectional traceability from high-level operational requirements through detailed low-level requirements to test cases is a disciplined requirements management practice. However, VA did not adequately define requirements. For example, in November 2007, VA determined that performance requirements were missing and that some requirements were not testable. Further, according to project officials, some requirements were vague and open to interpretation. Also, requirements for processing information from other systems were missing. The incomplete and insufficiently detailed requirements resulted in a system that did not function as intended. In addition, VA did not ensure that requirements were fully traceable. As early as October 2006, an internal review noted that the requirements did not trace to business rules or to test cases. By not ensuring requirements traceability, the department increased the risk that the system could not be adequately tested and would not function as intended. We therefore recommended that VA ensure implementation of a requirements management plan that reflected leading practices.
● VA's concurrent approach to performing system tests increased risk. Best practices in system testing indicate that testing activities should be performed incrementally, so that problems and defects with software versions can be discovered and corrected early. VA's guidance on conducting tests is consistent with these practices and specifies four test stages and associated criteria for progressing through the stages. For example, defects categorized as critical, major, and average severity identified in testing stage one are to be resolved before testing in stage two is begun. Nonetheless, VA took a high-risk approach to testing by performing tests concurrently rather than incrementally. Scheduling project officials told us that they ignored their own testing guidance and performed concurrent testing at the direction of Office of Enterprise Development senior management in an effort to prevent project timelines from slipping. The first version to undergo stage two testing had 370 defects that should have been resolved before stage two testing was begun. Almost 2 years after beginning stage two testing, 87 defects that should have been resolved before stage two testing began had not been fixed. As a result of a large number of defects that VA and the contractor could not resolve, the contract was terminated. To prevent these types of problems with future system development efforts, we recommended that VA adhere to its own guidance for system testing.
● VA's reporting based on earned value management data was unreliable.
The Office of Management and Budget (OMB) and VA policies require major projects to use earned value management to measure and report progress. Earned value management is a tool for measuring a project's progress by comparing the value of work accomplished with the amount of work expected to be accomplished. Such a comparison permits actual performance to be evaluated and is based on variances from the cost and schedule baselines. (A simplified numerical sketch of these comparisons appears after this discussion.) In January 2006, the scheduling project began providing monthly reports to the department's Chief Information Officer based on earned value management data. However, the progress reports included contradictory information about project performance. Specifically, the reports featured stoplight indicators (green, yellow, or red) that frequently were inconsistent with the reports' narrative. For example, the June 2007 report identified project cost and schedule performance as green, despite the report noting that the project budget was being increased by $3 million to accommodate schedule delays. This inconsistent reporting continued until October 2008, when the report began to show cost and schedule performance as red, the actual state of the project. Further, the former program manager noted that the department performed earned value management for the scheduling project only to fulfill the OMB requirement, and that the data were not used as the basis for decision making because doing so was not a part of the department's culture. To address these weaknesses, we recommended that VA ensure effective implementation of earned value management.
● VA did not effectively identify, mitigate, and communicate project risks. Federal guidance and best practices advocate risk management. To be effective, risk management activities should include identifying and prioritizing risks as to their probability of occurrence and impact, documenting them in an inventory, and developing and implementing appropriate risk mitigation strategies. VA established a process for managing the scheduling system project's risks that was consistent with relevant best practices. Specifically, project officials developed a risk management plan that defined five phases—risk identification, risk analysis, risk response planning, risk monitoring and control, and risk review. However, the department did not take key project risks into account. Senior project officials indicated that staff members were often reluctant to raise risks or issues to leadership due to the emphasis on keeping the project on schedule. Accordingly, VA did not identify as risks (1) using a noncompetitive acquisition approach, (2) conducting concurrent testing and initiating stage two testing with significant defects, and (3) reporting unreliable project cost and schedule performance information. Any one of these risks alone had the potential to adversely impact the outcome of the project. The three of them together dramatically increased the likelihood that the project would not succeed. To improve management of the project moving forward, we recommended that VA identify risks related to the scheduling project and prepare plans and strategies to mitigate them.
● VA's oversight boards did not take corrective actions despite the department's becoming aware of significant issues. GAO and OMB guidance call for the use of institutional management processes to control and oversee IT investments.
Critical to these processes are milestone reviews that include mechanisms to identify underperforming projects, so that timely steps can be taken to address deficiencies. These reviews should be conducted by a department-level investment review board composed of senior executives. In this regard, VA's Enterprise Information Board was established to provide oversight of IT projects through in-process reviews when projects experience problems. Similarly, the Programming and Long-Term Issues Board is responsible for performing milestone reviews and program management reviews of projects. However, between June 2006 and May 2008, the department did not provide oversight of the Scheduling Replacement Project, even though the department had become aware that the project was having difficulty meeting its schedule and performance goals. According to the chairman of the Programming and Long-Term Issues Board, it did not conduct reviews of the scheduling project prior to June 2008 because it was focused on developing the department's IT budget strategy. To address these deficiencies, in June 2009, VA began establishing the Program Management Accountability System to promote visibility into troubled programs and allow the department to take corrective actions. We recommended that VA ensure the policies and procedures it was establishing were executed effectively. In response to our report, VA concurred with our recommendations and described its actions to address them. For example, the department stated that it would work closely with contracting officers to ensure future acquisition plans clearly identify an acquisition strategy that promotes full and open competition. In addition, the department stated that the Program Management Accountability System will provide near-term visibility into troubled programs, allowing the Principal Deputy Assistant Secretary for Information and Technology to provide help earlier and avoid long-term project failures. In May 2011, VA's program manager stated that the department's effort to develop a new outpatient scheduling system—now referred to as 21st Century Medical Scheduling—consists largely of planning activities, including the identification of requirements. However, according to the manager, the project is not included in the department's fiscal year 2012 budget request. As a result, the department's plans for addressing the limitations that it had identified in its current scheduling system are uncertain. In contrast to the scheduling system project failure, VA has begun implementing a new system for processing a recently established education benefit for veterans. The Post-9/11 GI Bill provides educational assistance for veterans and members of the armed forces who served on or after September 11, 2001. VA concluded that its existing system and manual processes were insufficient to support the new benefits. For instance, the system was not fully integrated with other information systems such as VA's payments system, requiring claims examiners to access as many as six different systems and manually input claims data. Consequently, claims examiners reportedly took up to six times longer to pay Post-9/11 GI Bill program claims than other VA education benefit claims. The challenges associated with its processing system contributed to a backlog of 51,000 claims in December 2009. In response to this situation, the department began an initiative to modernize its benefits processing capabilities.
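As background to the earned value findings above, the basic earned value comparisons reduce to a few arithmetic relationships among planned value (PV), earned value (EV), and actual cost (AC). The following Python sketch is a minimal illustration with hypothetical dollar figures; it is not drawn from the scheduling project's actual reports.

def earned_value_indicators(pv, ev, ac):
    """Compute the basic earned value management indicators."""
    return {
        "cost variance (EV - AC)": ev - ac,        # negative: over budget
        "schedule variance (EV - PV)": ev - pv,    # negative: behind schedule
        "cost performance index (EV / AC)": ev / ac,
        "schedule performance index (EV / PV)": ev / pv,
    }

# A project that planned $10 million of work, completed $8 million worth,
# and spent $11 million is over budget and behind schedule -- a condition
# a stoplight indicator should show as red, not green.
for name, value in earned_value_indicators(
        pv=10_000_000, ev=8_000_000, ac=11_000_000).items():
    print(f"{name}: {value:,.2f}")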
VA chose an incremental development approach, referred to as Agile software development, which is intended to deliver functionality in short increments before the system is fully deployed. In December 2010, we reported that VA had delivered key automated capabilities used to process the new education benefits. Specifically, it deployed the first two of four releases of its long-term system solution by its planned dates, thereby providing regional processing offices with key automated capabilities to prepare original and amended benefits claims. Further, VA established Agile practices, including a cross-functional team that involves senior management, governance boards, key stakeholders, and distinct Agile roles, and began using three other Agile practices—focusing on business priorities, delivering functionality in short increments, and inspecting and adapting the project. However, to help guide the full development and implementation of the new system, we reported that VA could make further improvements to these practices in five areas.
1. Business priorities. To ensure business priorities are a focus, a project starts with a vision that contains, among other things, a purpose, goals, metrics, and constraints. In addition, it should be traceable to requirements. VA established a vision that captured the project purpose and goals; however, it had not established metrics for the project's goals or prioritized project constraints. Department officials stated that project documentation was evolving and they intended to improve their processes based on lessons learned; however, until it identified metrics and constraints, the department did not have the means to compare the projected performance with the actual results. We recommended that VA establish performance measures for goals and identify constraints to provide better clarity in the vision and expectations of the project.
2. Traceability. VA had also established a plan that identified how to maintain requirements traceability within an Agile environment; however, traceability was not always maintained between legislation, policy, business rules, and test cases. We recommended that VA establish bidirectional traceability between requirements and legislation, policies, and business rules.
3. Definition of "done." To aid in delivering functionality in short increments, defining what constitutes completed work and testing functionality is critical. However, VA had not established criteria for work that was considered "done" at all levels of the project. Program officials stated that each development team had its own definition of "done" and agreed that they needed to provide a standard definition across all teams. Without a mutual agreement for what constitutes "done" at each level, the resulting confusion can lead to inconsistent quality. We therefore recommended that VA define the conditions that must be present to consider work "done" in adherence with agency policy and guidance.
4. Testing. While the department had established an incremental testing approach, the quality of unit and functional testing performed during Release 2 was inadequate in 10 of the 20 segments of system functionality we reviewed. Program officials stated that they placed higher priority on user acceptance testing at the end of a release and relied on users to identify defects that were not detected during unit and functional testing. Without improved testing quality, the department risks deploying future releases that contain defects that may require rework.
To reduce defects and the rework needed to fix them, we recommended that VA improve the adequacy of the unit and functional testing processes.
5. Oversight. In order for projects to be effectively inspected and adapted, management must have tools to provide effective oversight. For Agile development, progress and the amount of work remaining can be reflected in a burn-down chart, which depicts how factors such as the rate at which work is completed (velocity) and changes in overall product scope affect the project over time. (A simplified sketch of such burn-down data appears at the end of this section.) While VA had an oversight tool that showed the percentage of work completed to reflect project status at the end of each iteration, it did not depict the velocity of the work completed or the changes to scope over time. We therefore recommended that VA implement an oversight tool to clearly communicate velocity and the changes to project scope over time. VA concurred with three of our five recommendations. It did not concur with our recommendation that it implement an oversight tool to clearly communicate velocity. However, without this level of visibility in its reporting, management and the development teams may not have all the information they need to fully understand project status. VA also did not concur with our recommendation to improve the adequacy of the unit and functional testing processes to reduce the amount of system rework. However, without increased focus on the quality of testing early in the development process, VA risks delaying functionality and/or deploying functionality with unknown defects that could require future rework that may be costly and ultimately impede the claims examiners' ability to process claims efficiently. In early May 2011, we reported that the implementation of remaining capabilities is behind schedule and additional modifications are needed. According to VA officials, system enhancements such as automatic verification of the length of service were delayed because of complexities with systems integration and converting data from the interim system. Additionally, recent legislative changes to the program required VA to modify the system and its deployment schedule. For instance, VA will need to modify its system to reflect changes to the way tuition and fees are calculated—an enhancement that officials described as difficult to implement. Because of these delays, final deployment of the system is now scheduled for the end of 2011—a year later than planned. Effective information security controls are essential to securing the information systems and information on which VA depends to carry out its mission. Without proper safeguards, the department's systems are vulnerable to individuals and groups with malicious intent who can intrude and use their access to obtain sensitive information, commit fraud, disrupt operations, or launch attacks against other computer systems and networks. The consequence of weak information security controls was illustrated by VA's May 2006 announcement that computer equipment containing personal information on veterans and active duty military personnel had been stolen. Further, over the last few years, VA has reported an increasing number of security incidents and events. Specifically, in each year during fiscal years 2007 through 2009, the department reported a higher number of incidents than in the prior year and the highest number of incidents in comparison to 23 other major federal agencies.
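As an aside on the burn-down reporting recommended above, the following Python sketch shows the kind of per-iteration data such a chart summarizes: completed work (velocity), scope changes, and the remaining work they imply. The point totals are hypothetical.

# (story points completed this iteration, points added to scope)
iterations = [
    (30, 0),
    (25, 10),   # scope grew this iteration
    (35, 5),
]

remaining = 200   # hypothetical initial product scope, in story points
for i, (completed, scope_added) in enumerate(iterations, start=1):
    remaining = remaining - completed + scope_added
    print(f"Iteration {i}: velocity={completed}, "
          f"scope added={scope_added}, remaining={remaining}")

# A percent-complete figure alone, as in VA's tool, would hide the scope
# growth and velocity swings that this per-iteration view makes visible.
average_velocity = sum(done for done, _ in iterations) / len(iterations)
print(f"Average velocity: {average_velocity:.1f} points per iteration")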
To help protect against threats to federal systems, the Federal Information Security Management Act of 2002 (FISMA) sets forth a comprehensive framework for ensuring the effectiveness of information security controls over information resources that support federal operations and assets. The framework creates a cycle of risk management activities necessary for an effective security program. In order to ensure the implementation of this framework, FISMA assigns specific responsibilities to OMB, agency heads, chief information officers, inspectors general, and the National Institute of Standards and Technology (NIST), in particular requiring chief information officers and inspectors general to submit annual reports to OMB. In addition, Congress enacted the Veterans Benefits, Health Care, and Information Technology Act of 2006. Under the act, VA's Chief Information Officer is responsible for establishing, maintaining, and monitoring departmentwide information security policies, procedures, control techniques, training, and inspection requirements as elements of the department's information security program. It also reinforced the need for VA to establish and carry out the responsibilities outlined in FISMA, and included provisions to further protect veterans and service members from the misuse of their sensitive personal information and to inform Congress regarding security incidents involving the loss of that information. Information security has been a long-standing challenge for the department, as we have previously reported. In 2010, for the 14th year in a row, VA's independent auditor reported that inadequate information system controls over financial systems constituted a material weakness. Among 24 major federal agencies, VA was one of eight agencies in fiscal year 2010 to report such a material weakness. VA's independent auditor stated that, while the department continued to make steady progress, IT security and control weaknesses remained pervasive and placed VA's program and financial data at risk. The auditor noted the following weaknesses: Passwords for key VA network domains and financial applications were not consistently configured to comply with agency policy. Testing of contingency plans for financial management systems at selected facilities was not routinely performed and documented to meet the requirements of VA policy. Many IT security control deficiencies were not analyzed and remediated across the agency, and a large backlog of deficiencies remained in the VA plan of action and milestones system. In addition, previous plans of action and milestones were closed without sufficient and documented support for the closure. In addition, VA has consistently had weaknesses in major information security control areas. As shown in table 1, for fiscal years 2007 through 2010, deficiencies were reported in each of the five major categories of information security controls as defined in our Federal Information System Controls Audit Manual. In fiscal year 2010, for the 11th year in a row, VA's Office of Inspector General designated VA's information security program and system security controls as a major management challenge for the department. Of 24 major federal agencies, the department was 1 of 23 to have information security designated as a major management challenge.
The Office of Inspector General noted that the department had made progress in implementing components of an agencywide information security program, but nevertheless continued to identify major IT security deficiencies in the annual information security program audits. To assist the department in improving its information security, the Office of Inspector General made recommendations for strengthening access controls, configuration management, change management, and service continuity. Effective implementation of these recommendations could help VA to prevent, limit, and detect unauthorized access to computerized networks and systems and help ensure that only authorized individuals can read, alter, or delete data. In March 2010, we reported that federal agencies, including VA, had made limited progress in implementing the Federal Desktop Core Configuration (FDCC) initiative to standardize settings on workstations. We determined that VA had implemented certain requirements of the initiative, such as documenting deviations from the standardized set of configuration settings for Windows workstations and putting a policy in place to officially approve these deviations. However, VA had not fully implemented several key requirements. For example, the department had not included language in contracts to ensure that new acquisitions address the settings and that products of IT providers operate effectively using them. Additionally, VA had not obtained a NIST-validated tool to monitor implementation of standardized workstation configuration settings. To improve the department's implementation of the initiative, we made four recommendations: (1) complete implementation of VA's baseline set of configuration settings, (2) acquire and deploy a tool to monitor compliance with FDCC, (3) develop, document, and implement a policy to monitor compliance, and (4) ensure that FDCC settings are included in new acquisitions and that products operate effectively using these settings. VA concurred and has addressed the recommendation to ensure settings are included in new acquisitions. The department intends to implement the remaining recommendations in the future. FISMA requires each agency, including agencies with national security systems, to develop, document, and implement an agencywide information security program to provide security for the information and information systems that support the operations and assets of the agency, including those provided or managed by another agency, contractor, or other source. As part of its oversight responsibilities, OMB requires agencies to report on specific performance measures, including the percentage of employees and contractors who have received IT security awareness training; the percentage of employees with significant security responsibilities who have received specialized security training; and the percentages of systems whose controls have been tested and evaluated, that have tested contingency plans, and that are certified and accredited. Since fiscal year 2006, VA's progress in fully implementing the information security program required under FISMA and following the policies issued by OMB has been mixed. For example, from 2006 to 2009, the department reported a dramatic increase in the percentage of systems for which a contingency plan was tested in accordance with OMB policy.
However, during the same period, it reported decreases in both the percentage of employees who had received security awareness training and the percentage of employees with significant security responsibilities who had received specialized security training. These decreases in the percentage of individuals who had received information security training could limit VA’s ability to effectively implement security measures. For fiscal year 2009, in comparison to 23 other major federal agencies, VA’s implementation of these information security control activities was equal to or higher than other agencies’ in some areas and lower in others. For example, VA reported equal or higher percentages than other federal agencies for systems whose security controls had been tested and reviewed in the past year, systems whose contingency plans had been tested in accordance with OMB policy, and systems that had been certified and accredited. However, VA reported lower percentages of individuals who had received security awareness training and of individuals with significant security responsibilities who had received specialized security training. Cloud computing is an emerging form of computing that relies on Internet-based services and resources to provide computing services to customers, while freeing them from the burden and costs of maintaining the underlying infrastructure. Examples of cloud computing include Web-based e-mail applications and common business applications that are accessed online through a browser instead of through a local computer. The President’s budget has identified the adoption of cloud computing in the federal government as a way to more efficiently use the billions of dollars spent annually on IT. However, as we reported in May 2010, federal guidance and processes that specifically address information security for cloud computing had not yet been developed, and those cloud computing programs that had been implemented may not have had effective information security controls in place. As we reported, cloud computing can both increase and decrease the security of information systems in federal agencies. Potential information security benefits include those related to the use of virtualization, such as faster deployment of patches, and those from economies of scale, such as potentially reduced costs for disaster recovery. Risks include dependence on the security practices and assurances of the provider and concerns related to the sharing of computing resources. These risks may vary based on the cloud deployment model: private clouds may have a lower threat exposure than public clouds, but evaluating this risk requires an examination of the specific security controls in place for the cloud’s implementation. We made recommendations to OMB, the General Services Administration, and NIST to assist agencies in identifying uses of cloud computing and necessary security measures, selecting and acquiring cloud computing products and services, and implementing appropriate information security controls when using cloud computing. VA and DOD have two of the nation’s largest health care operations, providing health care to 6 million veterans and 9.6 million active duty service members and their beneficiaries at estimated annual costs of about $48 billion and $49 billion, respectively. 
Although a 2008 study found that more than 97 percent of functional requirements for an inpatient electronic health record system are common to both departments, the departments have spent large sums of money to separately develop and operate electronic health record systems. Furthermore, the departments have each begun multimillion-dollar modernizations of their electronic health record systems. Specifically, VA reported spending almost $600 million from 2001 to 2007 on eight projects as part of its Veterans Health Information Systems and Technology Architecture (VistA) modernization. In April 2008, VA estimated an $11 billion total cost to complete the modernization by 2018. For its part, DOD has obligated approximately $2 billion over the 13-year life of its Armed Forces Health Longitudinal Technology Application (AHLTA) and requested $302 million in fiscal year 2011 funds for a new system. Additionally, VA and DOD are working to establish the Virtual Lifetime Electronic Record (VLER), which is intended to facilitate the sharing of electronic medical, benefits, and administrative information between the departments. VLER is further intended to expand the departments’ health information sharing capabilities by enabling access to private sector health data. The departments are also developing joint IT capabilities for the James A. Lovell Federal Health Care Center (FHCC) in North Chicago, Illinois. The FHCC is to be the first VA/DOD medical facility operated under a single line of authority to manage and deliver medical and dental care for veterans, new Naval recruits, active duty military personnel, retirees, and dependents. In February 2011, we reported that VA and DOD lacked mechanisms for identifying and implementing efficient and effective IT solutions to jointly address their common health care system needs, as a result of barriers in three key IT management areas: strategic planning, enterprise architecture, and investment management.

Strategic planning: The departments were unable to articulate explicit plans, goals, and time frames for jointly addressing the health IT requirements common to both departments’ electronic health record systems. For example, VA’s and DOD’s joint strategic plan did not discuss how or when the departments propose to identify and develop joint health IT solutions, and department officials had not determined whether the IT capabilities developed for the FHCC could or would be implemented at other VA and DOD medical facilities.

Enterprise architecture: Although VA and DOD had taken steps toward developing and maintaining artifacts related to a joint health architecture (i.e., a description of business processes and supporting technologies), the architecture was not sufficiently mature to guide the departments’ joint health IT modernization efforts. For example, the departments did not define how they intended to transition from their current architecture to a planned future state.

Investment management: VA and DOD did not establish a joint process for selecting IT investments based on criteria that consider cost, benefit, schedule, and risk elements, which would help to ensure that a chosen solution both meets the departments’ common health IT needs and provides better value and benefits to the government as a whole. 
These barriers resulted in part from VA’s and DOD’s decision to focus on developing VLER, modernizing their separate electronic health record systems, and developing IT capabilities for FHCC, rather than determining the most efficient and effective approach to jointly addressing their common requirements. Because VA and DOD continued to pursue their existing health information sharing efforts without fully establishing the key IT management capabilities described, they may have missed opportunities to successfully deploy joint solutions to address their common health care business needs. VA’s and DOD’s experiences in developing VLER and IT capabilities for FHCC offered important lessons to improve the departments’ management of these ongoing efforts. Specifically, the departments can improve the likelihood of successfully meeting their goal to implement VLER nationwide by the end of 2012 by developing an approved plan that is consistent with effective IT project management principles. Also, VA and DOD can improve their continuing effort to develop and implement new IT system capabilities for FHCC by developing a plan that defines the project’s scope, estimated cost, and schedule in accordance with established best practices. Unless VA and DOD address these lessons, the departments will jeopardize their ability to deliver expected capabilities to support their joint health IT needs. We recommended several actions that the Secretaries of Veterans Affairs and Defense could take to overcome barriers that the departments face in modernizing their electronic health record systems to jointly address their common health care business needs, including the following:

Revise the departments’ joint strategic plan to include information discussing their electronic health record system modernization efforts and how those efforts will address the departments’ common health care business needs.

Further develop the departments’ joint health architecture to include their planned future state and a transition plan from their current state to the next generation of electronic health record capabilities.

Define and implement a process, including criteria that consider costs, benefits, schedule, and risks, for identifying and selecting joint IT investments to meet the departments’ common health care business needs.

We also recommended that the Secretaries of Veterans Affairs and Defense strengthen their ongoing efforts to establish VLER and the joint IT system capabilities for FHCC by developing plans that include scope definition, cost and schedule estimation, and project plan documentation and approval. Both departments concurred with our recommendations, and on March 17, 2011, the Secretaries of Veterans Affairs and Defense committed their respective departments to pursue joint development and acquisition of integrated electronic health record capabilities. In summary, effective IT management is critical to the performance of VA’s mission. However, the department faces challenges in key areas, including systems development, information security, and collaboration with DOD. 
Until VA fully addresses these challenges and implements key recommendations, the department will likely continue to (1) deliver system capabilities later than expected; (2) expose its computer systems and sensitive information (including personal information of veterans and their beneficiaries) to an unnecessary and increased risk of unauthorized use, disclosure, tampering, theft, and destruction; and (3) not provide efficient and effective joint DOD/VA solutions to meet the needs of our nation’s veterans. Mr. Chairman, this concludes my statement today. I would be pleased to answer any questions you or other members of the subcommittee may have. If you have questions concerning this statement, please contact Joel C. Willemssen, Managing Director, Information Technology Team, at (202) 512-6253 or willemssenj@gao.gov; or Valerie C. Melvin, Director, Information Management and Human Capital Issues, at (202) 512-6304 or melvinv@gao.gov. Other individuals who made key contributions include Mark Bird, Assistant Director; Mike Alexander; Nancy Glover; Paul Middleton; and Glenn Spiegel.
The use of information technology (IT) is crucial to helping the Department of Veterans Affairs (VA) effectively serve the nation's veterans, and the department has expended billions of dollars annually over the last several years to manage and secure its information systems and assets. VA has, however, experienced challenges in managing its IT. GAO has previously highlighted VA's weaknesses in managing and securing its information systems and assets. GAO was asked to testify on its past work on VA's weaknesses in managing its IT resources, specifically in the areas of systems development, information security, and collaboration with the Department of Defense (DOD) on efforts to meet common health system needs. Recently, GAO reported on two VA systems development projects that have yielded mixed results. For its outpatient appointment scheduling project, VA spent an estimated $127 million over 9 years and was unable to implement any of the planned capabilities. The application software project was hindered by weaknesses in several key management disciplines, including acquisition planning, requirements analysis, testing, progress reporting, risk management, and oversight. For its Post 9/11 GI Bill educational benefits system, VA used a new incremental software development approach and deployed the first two of four releases of its long-term system solution by its planned dates, thereby providing regional processing offices with key automated capabilities to prepare original and amended benefits claims. However, VA had areas for improvement, including establishing business priorities, testing the new systems, and providing oversight. Effective information security controls are essential to securing the information systems and information on which VA depends to carry out its mission. For over a decade, VA has faced long-standing information security weaknesses as identified by GAO, VA's Office of the Inspector General, VA's independent auditor, and the department itself. The department continues to face challenges in maintaining its information security controls over its systems and in fully implementing the information security program required under the Federal Information Security Management Act of 2002. These weaknesses have left VA vulnerable to disruptions in critical operations, theft, fraud, and inappropriate disclosure of sensitive information. VA and DOD operate two of the nation's largest health care systems, providing health care to 6 million veterans and 9.6 million active duty service members at estimated annual costs of about $48 billion and $49 billion, respectively. To provide this care, both departments rely on electronic health record systems to create, maintain, and manage patient health information. GAO reported earlier this year that VA faced barriers in establishing shared electronic health record capabilities with DOD in three key IT management areas--strategic planning, enterprise architecture (i.e., a description of business processes and supporting technologies), and IT investment management. Specifically, the departments were unable to articulate explicit plans, goals, and time frames for jointly addressing the health IT requirements common to both departments' electronic health record systems. Additionally, although VA and DOD took steps toward developing and maintaining artifacts related to a joint health architecture, the architecture was not sufficiently mature to guide the departments' joint health IT modernization efforts. 
Lastly, VA and DOD did not have a joint process for selecting IT investments based on criteria that consider cost, benefit, schedule, and risk elements, which would help to ensure that the chosen solution both meets the departments' common health IT needs and provides better value and benefits to the government as a whole. Subsequent to our report, the Secretaries of Veterans Affairs and Defense agreed to pursue integrated electronic health record capabilities. In reports issued in recent years, GAO has made numerous recommendations to VA aimed at improving the department's IT management capabilities. These recommendations focused on improving two projects to develop and implement new systems, strengthening information security practices and ensuring that security issues are adequately addressed, and overcoming the barriers VA faces in collaborating with DOD to jointly address the departments' common health care business needs.
Effective internal controls are essential to achieving the proper conduct of government business with full accountability for the resources made available. Internal controls serve as the first line of defense for preventing and detecting fraud and help ensure that an agency meets its missions, goals, and objectives; complies with laws and regulations; and is able to provide reliable financial and other information concerning its programs, operations, and activities. The Accounting and Auditing Act of 1950 requires agency heads to establish and maintain effective internal controls. Since then, other laws have required a renewed focus on internal controls. For example, the Federal Managers’ Financial Integrity Act (FMFIA) of 1982 was enacted by the Congress because of repeated reports of fraud, waste, and abuse caused by weak internal controls and control breakdowns. FMFIA requires agency heads to periodically evaluate their systems of internal control using the guidance issued by the Office of Management and Budget (OMB) and to report annually to the President and the Congress on whether their systems conform to internal control standards issued by GAO. Pursuant to FMFIA, OMB Circular A-123, Management Accountability and Control, provides the requirements for assessing controls, and GAO’s Standards for Internal Control in the Federal Government provide the measure of quality against which controls in operation are assessed. Most recently, the Federal Financial Management Improvement Act of 1996, in focusing on financial management systems, identified internal control as an integral part of those systems. Over the years, we and Defense auditors have issued a number of reports that have pointed to serious internal control weaknesses in the Department of Defense’s (DOD) payment processes and systems. In part because of the seriousness of these and other related problems, we identified DOD’s contract payment process as error prone and costly and designated DOD contract management as a high-risk area. In this regard, we have reported that serious internal control weaknesses have resulted in numerous erroneous and, in some cases, fraudulent payments. For example, $3 million in fraudulent payments were made to a former Navy supply officer on over 100 false invoices. Also, we have identified computer security as a governmentwide high-risk area. With respect to DOD, in May 1996, we reported that unknown and unauthorized individuals were increasingly attacking highly sensitive unclassified information on DOD’s computer systems, which we found were particularly susceptible to attack through Internet connections. During fiscal year 1997, the DFAS Denver Center and its accounting and disbursing offices processed a reported $17.2 billion in vendor payments for the Air Force. The DFAS Denver Center, which was activated in January 1991, is responsible for accounting, disbursing, collecting, and financial reporting for Air Force vendor contracts. As a result of DFAS consolidations between 1991 and 1998, Defense Accounting Offices were closed. Under the DFAS Denver Center, financial services for vendor contracts are now performed by the Directorate of Finance and Accounting Operations in Denver, Colorado, and by five DFAS operating locations at Dayton, Ohio; Limestone, Maine; Omaha, Nebraska; San Antonio, Texas; and San Bernardino, California. 
The vendor payment process includes the processing and approval of payments for operational support such as utilities, medical services, and administrative supplies and services. Payments must be supported by (1) a signed contractual document, such as a purchase order, (2) an obligation, (3) an invoice, and (4) a receiving report. If the process is operating as intended, vendor payment team members at the various operating locations are to review these documents for accuracy and completeness and enter information into the vendor payment system (the Integrated Accounts Payable System) to create a payment voucher, which is subsequently approved by a certifying officer. Certifying officers are to compare payment vouchers to invoices and receiving reports to ensure the accuracy of the payment information prior to disbursement. For the first and last payments on a contract, certifying officers are to verify contract information as well. Following certification, the payment information is loaded into the disbursing system (the Integrated Paying and Collecting system). Before funds are disbursed, an independent check of available obligations (prevalidation) is to be made by electronically comparing vendor payment system transactions to obligations recorded in the General Accounting and Finance System (general ledger). Once available obligations are confirmed, the disbursing system uses the payment transactions generated by the vendor payment system to make the disbursements and report the payment data to the Department of the Treasury. In addition, the vendor payment system generates payment transactions to update the accounting system. Finally, Merged Accountability and Fund Reporting reconciliations between the accounting, vendor payment, and disbursing systems are performed daily to help ensure that detail transactions, such as contract expenditures, are in agreement. 
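These steps amount to a chain of cross-checks: the invoice, the receiving report, and a recorded obligation must agree before money moves, and the systems are reconciled against one another afterward. The sketch below illustrates that logic in simplified form; the record layout and field names are illustrative assumptions, not the actual DFAS system interfaces.

```python
# Simplified illustration of the matching, prevalidation, and reconciliation
# controls described above. The record layout and field names are assumptions
# for illustration, not the actual Integrated Accounts Payable System schema.
from dataclasses import dataclass

@dataclass
class Doc:
    contract_no: str
    invoice_no: str
    amount: float

def build_voucher(invoice: Doc, receiving_report: Doc, obligations: dict) -> Doc:
    """Create a payment voucher only if the supporting documents agree
    (certification) and a recorded obligation covers the amount (prevalidation)."""
    if invoice.contract_no != receiving_report.contract_no:
        raise ValueError("invoice and receiving report cite different contracts")
    if invoice.amount > receiving_report.amount:
        raise ValueError("billed amount exceeds goods/services received and accepted")
    if invoice.amount > obligations.get(invoice.contract_no, 0.0):
        raise ValueError("prevalidation failed: no recorded obligation covers payment")
    return invoice  # the voucher carries the validated contract, invoice, and amount

def reconcile(vendor_pay: list, accounting: list) -> list:
    """Daily reconciliation: each invoice should cite the same contract number
    in the vendor payment and accounting systems; mismatches get investigated."""
    acct = {d.invoice_no: d.contract_no for d in accounting}
    return [d for d in vendor_pay if acct.get(d.invoice_no) != d.contract_no]
```

As the cases below show, it was exactly this kind of contract-number comparison during the daily reconciliation that eventually exposed a fraudulent invoice.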
To reduce the risk of error, waste, or wrongful acts and to reduce the risk of their going undetected, GAO internal control standards require segregation of key duties and responsibilities in authorizing, processing, recording, and reviewing transactions and in maintaining custody over assets. No one individual should control all key aspects of a transaction or event without appropriate compensating controls. Also, the individuals performing these duties are to receive qualified and continuous supervision to ensure that the agency’s internal control objectives are met. To ensure that financial reports provide timely, accurate information on the results of operations, internal control standards require that transactions and other significant events be promptly recorded and properly classified. In addition, periodic evaluations are required to assess risks, identify deficiencies, and effect corrective action. For the two fraud cases, the primary internal control weakness was the lack of segregation of duties. In each case, the individuals committing the fraud had authority or capability to perform functions that should have been segregated. For example, in the Bolling AFB case, the contracting officer’s technical representative (COTR) had authority to authorize, approve, verify, and process contract and payment documentation and to receive and accept goods and services. In the Dayton case, the Staff Sergeant, who at different times held positions in accounting and payment processing, was responsible for recording contract data, including obligations; invoice and receiving report information; and remittance addresses. After the Staff Sergeant’s access to the Dayton vendor payment system was removed, he was able to continue performing these functions by obtaining and using the computer password of another employee whose level of access to the vendor payment system was comparable to the level the Staff Sergeant had previously held. An Air Force civilian employee, who was the COTR on the two Bolling AFB contracts, had broad authority to request contract amendments, order goods and services, receive and accept the goods and services, and approve payment for the items received. In addition, this person was not adequately supervised. The COTR’s supervisor told investigators and us that she allowed the COTR to perform these duties independently without close supervision. The COTR was able to embezzle over $500,000 by creating fictitious invoices and receiving reports. In September 1992, the COTR requested that contractor employees submit five false invoices, totaling $342,832, for billings of goods and services that had not been ordered or received. According to contractor employees, the COTR told them that he was requesting advance billings to prevent the expiration of unused funding. While DOD has some authority to make advance payments, advance billings are not authorized for this purpose. Contractor employees submitted the five false invoices, as well as false receiving reports for each invoice, as instructed by the COTR. The COTR also gave the contractor a memo dated October 14, 1992, instructing the contractor to order $500,000 of legislative consulting services from a subcontractor, Applied Quantitative Systems, and include a 25 percent markup ($125,000) for overhead to be retained as the contractor’s fee when submitting the invoice to the Air Force. The COTR’s memo listed the five false invoices discussed above as partial documentation of these services. However, Applied Quantitative Systems was a fictitious company created by the COTR that had not provided any services under this contract, and the remittance address on the invoice was for a post office box opened by the COTR to receive the $500,000 payment. Had the contractor followed through on the COTR’s instructions, it would have eventually billed the government $625,000, sent $500,000 to the COTR’s fictitious company address, and kept $125,000 as overhead. However, according to contractor internal review files, management determined that legislative services were outside the scope of the contract and, as a result, did not submit this invoice to the government. In November 1992, contractor management became aware of the five false invoices that had been submitted at the COTR’s request and retrieved from the bank four checks received from the Air Force, totaling $322,032, for payment of three of the invoices. The contractor voided the checks and returned them to the Air Force. Reportedly at the request of the contractor, the COTR had the Air Force withdraw the remaining two invoices, totaling $20,800. Then, in December 1992, the COTR, without the contractor’s involvement, prepared 11 false invoices resulting in $504,941 in fraudulent payments. As with the Applied Quantitative Systems invoice, the COTR used his own post office box as the remittance address on the 11 false invoices. The COTR retrieved the payments from the post office box and deposited the funds in two newly established accounts at a bank where he maintained a personal account. 
The COTR was able to accomplish this scheme without detection by Air Force officials because he took advantage of his broad authority and the lack of adequate supervision. In addition, at the time of this incident, the address on the invoice was used as the remittance address, which was a control weakness. Therefore, directing the payments to himself was a simple matter of listing his post office box as the contractor address on the false invoices. Authorities were alerted to the COTR’s embezzlement only when he attempted to withdraw a large portion of the funds, and suspicious bank officials put a hold on the accounts and notified the U.S. Secret Service. After coming under suspicion, the COTR prepared a letter stating that overbilling errors had been made and returned the funds to the government. Following an investigation by the Air Force Office of Special Investigations, the COTR pleaded guilty and was sentenced to 3 years’ probation and ordered to pay $495. Further details on the COTR’s schemes can be found in GAO/OSI-98-15. Since the 1992-1993 Bolling AFB fraud, contractors have generally been required to send invoices to DFAS Denver’s Directorate of Finance and Accounting Operations for payment. As a result, COTRs generally do not review or approve invoices. In addition, the Single Agency Manager (SAM) was put in place in March 1995. The mission of SAM, in general, is to provide, manage, operate, and maintain designated information technology services for all applicable components and customers. As a part of that mission, SAM operates and maintains information technology systems. To procure information technology systems and services, SAM utilizes contracting offices at the Pentagon and at Bolling AFB. SAM is in the process of implementing a position for contracting officer representatives (COR), who are to be responsible for the direct supervision of COTRs’ performance of contract-related duties, such as the writing of technical specifications, inspection of contractors’ technical performance, and submission of receiving reports. A SAM official told us that this change, which is targeted for full implementation by the spring of 1999, is intended to address the lack of close supervision that contributed to the Bolling AFB fraud. An Air Force Staff Sergeant was convicted of fraudulent activities at two locations. The first known location where fraudulent payments were made was Castle AFB, California, between October 1994 and May 1995. The Staff Sergeant, who was Chief of Material in the Accounting Branch, had broad access to the automated vendor payment system, which allowed him to enter contract information, including contract numbers, delivery orders, modifications, and obligations, as well as invoice and receiving report information and remittance addresses. The Staff Sergeant used this broad access to process invoice and receiving report documentation that resulted in eight identified fraudulent payments totaling $50,770. The invoices prepared by the Staff Sergeant designated the name of a relative as the payee and his own mailing address as the remittance address, although any address, including a post office box, could have been used. Castle AFB closed in September 1995, and the Staff Sergeant was transferred to DFAS Dayton. At DFAS Dayton, the Staff Sergeant was assigned as the Vendor Pay Data Entry Branch Chief in the Vendor Pay Division. 
As Vendor Pay Chief, the Staff Sergeant was allowed a level of access to the vendor payment system similar to the access he previously held at Castle AFB. Between November 1995 and January 1997, the Staff Sergeant prepared false invoices and receiving reports that resulted in nine fraudulent payments totaling $385,916. By designating the remittance address on the false invoices, the Staff Sergeant directed fraudulent payments to an accomplice. In February 1997, the Staff Sergeant was reassigned to DFAS Dayton’s Accounting Branch, and his access to the vendor payment system was removed. However, while assigned to the Accounting Branch, the Staff Sergeant created two false invoices totaling $501,851 and submitted them for payment in June 1997, using the computer password of another DFAS employee who had a level of access comparable to that previously held by the Staff Sergeant. The Staff Sergeant’s fraudulent activities were detected when, for an invoice totaling $210,000, an employee performing the Merged Accountability and Fund Reporting reconciliation identified a discrepancy between the contract number associated with the invoice in the vendor payment system and the contract number associated with the invoice in the accounting system. These two numbers should always agree. For this invoice, the Staff Sergeant had failed to ensure that the contract cited was the same in both systems. Further research determined that the contract was not valid and the payment was fraudulent. A second fraudulent invoice for $291,851, the $50,770 in fraudulent payments at Castle AFB, and the $385,916 in fraudulent payments at DFAS Dayton were detected during the subsequent investigation of the DFAS Dayton fraud. The Staff Sergeant was convicted of embezzling over $435,000 and of attempted theft of over $500,000. He was also convicted of altering invoices and falsifying information in the vendor payment system, in violation of 18 U.S.C. 1001, to avoid interest on late payments and improve reported performance for on-time payments. In July 1998, the Staff Sergeant was sentenced to 12 years imprisonment. At DFAS Dayton and DFAS Denver’s Directorate of Finance and Accounting Operations, we observed internal control weaknesses in the vendor payment process that were similar or identical to those that contributed to the incidents of fraud discussed in this report. In addition, we identified weaknesses in computer security that would permit improper access to the vendor payment system. The lack of segregation of duties reflected in the broad vendor payment system access that allowed the Staff Sergeant to embezzle funds remains widespread. We identified three critical access control weaknesses in the vendor payment system: (1) access levels do not provide adequate functional segregation of duties, (2) the number of staff with such access is excessive and widespread throughout DFAS and the Air Force, and (3) computer security over the operating system and the vendor payment application for DFAS Denver is weak. With regard to the first issue, an August 1996 Air Force Audit Report disclosed that DFAS personnel did not properly control access to the vendor payment system and recommended that DFAS review and reduce vendor payment system access levels where appropriate. 
Our review of vendor payment system access levels as of mid-June 1998 showed that, across DFAS and Air Force installations, individual users could enter contract data (including obligations) and invoice and receiving report information and could change remittance addresses for vendor payments. Currently, there are four access levels to the vendor payment system: inquiry, clerk, subsupervisor, and supervisor. Inquiry is read-only access. Clerk access allows the user to enter data other than remittance addresses. Subsupervisor access allows the user to input or change contract data; information on obligations, invoices, and receiving reports; and remittance addresses. Supervisor access allows the user to perform all subsupervisor functions as well as assign or remove access. The Staff Sergeant who committed the DFAS Dayton fraud had supervisor access. Proper and effective internal controls would preclude any individual user from having the ability to record an obligation, create and change invoices and receiving reports, and enter remittance addresses. Once these activities are segregated organizationally by assigning them to different individuals, the authority to enter contract data and payment information must be functionally segregated within the vendor payment system application to maintain the integrity of the organizational segregation. Without segregation of these duties and controls over access to the system, appropriate compensating controls need to be in place, such as reviews of remittance address change activity and periodic verification of payment addresses with vendors. Our review of the vendor payment process at DFAS Dayton and DFAS Denver’s Directorate of Finance and Accounting Operations confirmed that employees with supervisor and subsupervisor access to the vendor payment system could make fraudulent payments without detection by entering contract information and obligations, entering invoice and receiving report data, and changing or creating a remittance address. If the data on a false invoice and receiving report match the information on the voucher, certifying officers are not likely to detect a fraudulent payment through their certification process, a key prevention control. Second, the lack of segregated access within the payment system application is compounded by the excessive and widespread access to the system throughout DFAS and the Air Force. Our review of vendor payment system access levels as of mid-June 1998 showed that 1,867 users across DFAS and Air Force installations had supervisor or subsupervisor access. Further, 94 of these users had not accessed the system since 1997, indicating that they may no longer be assigned to vendor payment operations. In addition, 171 users had not accessed the system at all, possibly indicating that access is not required as a regular part of their duties. DFAS officials told us they were unaware that such a large number of employees had broad access to the vendor payment system. The DFAS Denver Center has scheduled operational reviews of all DFAS operating locations for completion by January 1999. These reviews are intended to assess whether DFAS operations comply with DFAS policies and procedures as well as laws and regulations. However, we found that the review program did not address the implementation and effectiveness of internal controls, including the segregation of duties and systems access issues identified in this report. 
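These access levels make the segregation problem concrete: no single user's access should span recording obligations, entering invoice and receiving report data, and changing remittance addresses. The following is a minimal sketch of such a check; the mapping of levels to discrete permissions and the sample user list are illustrative assumptions, not the system's actual access tables.

```python
# Sketch of a segregation-of-duties check over vendor payment system access.
# The four access levels are those named in the report; the permission
# mapping and user list are illustrative assumptions.
PERMISSIONS = {
    "inquiry":       set(),                      # read-only
    "clerk":         {"enter_invoice_data"},     # cannot touch remittance addresses
    "subsupervisor": {"enter_contract_data", "enter_invoice_data",
                      "change_remit_address"},
    "supervisor":    {"enter_contract_data", "enter_invoice_data",
                      "change_remit_address", "assign_access"},
}

# Duties that, held together by one person, defeat segregation of duties.
INCOMPATIBLE = {"enter_contract_data", "enter_invoice_data", "change_remit_address"}

def sod_violations(users):
    """Flag users whose single access level spans all incompatible duties."""
    return [uid for uid, level in users.items()
            if INCOMPATIBLE <= PERMISSIONS[level]]

users = {"sgt01": "supervisor", "clerk07": "clerk", "acct12": "subsupervisor"}
print(sod_violations(users))  # ['sgt01', 'acct12']: each could control a payment end to end
```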
After we briefed the DFAS Denver Center Director about our concerns, he told us that the operational review program would be revised to place a greater focus on internal controls, including the review of vendor payment system access levels. DFAS officials told us that, for Air Force employees outside the operating locations who had supervisor or subsupervisor access but need only status reports, they had initiated action to reduce those employees’ access to inquiry only. They also told us that they would consider modifying the supervisor and subsupervisor access levels across DFAS locations to provide for greater segregation of duties within the vendor payment application for employees responsible for processing payments. Finally, with respect to access controls, there are significant weaknesses in the mainframe operating system security and the vendor payment system application that would allow unauthorized users to make fraudulent or improper payments. A recently completed review by the Defense Information Systems Agency (DISA), performed at our request, determined that the Defense Megacenter (DMC) in San Antonio, on which DFAS Denver’s Directorate of Finance and Accounting Operations vendor payment system runs, did not appropriately restrict access to powerful system utilities. These utilities enable a user to access and manipulate any data within the mainframe computer and vendor payment system. The DMC had granted this privileged access to an excessive number of users (161) and was not able to provide adequate documentation of management approval and review for most of them. In addition, the DMC had granted 673 users higher levels of access authority than necessary to perform their duties. These high-level security profiles enable a user to bypass the regular control features that the mainframe computer and vendor payment system are capable of providing to preclude unintentional or unauthorized manipulation of vendor payment files. The DISA review also determined that routine system monitoring and oversight were not performed to identify and follow up on user noncompliance with security standards. This allowed serious security weaknesses of the kind commonly exploited by hackers to persist. For example, the review team was able to access user IDs and passwords residing in unsecured files on the system and gain access to other systems. Also, default passwords, which are commonly known, were not disabled. Further, passwords and user IDs were not managed according to DISA policies. In general, all user IDs and passwords were allowed to remain inactive for 90 days, contrary to DISA policy requiring that user IDs and passwords be disabled after 35 days of inactivity. There were also 36 users whose passwords expired after 180 days, and 12 users, including a security administrator, whose passwords were set to never expire, which exceeds the 90-day DISA policy. These situations increase the risk that user IDs will be compromised to gain unauthorized access to DOD systems. In addition, our tests of the local network and communication links to the DFAS Denver Directorate of Finance and Accounting Operations and the DFAS Dayton vendor payment systems showed that these systems are vulnerable to penetration by unauthorized internal DFAS and Air Force users. 
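The user ID and password findings above are mechanical enough to check automatically. This sketch flags the conditions the DISA review reported, using the 35-day inactivity and 90-day expiration thresholds cited in the report; the account records themselves are hypothetical.

```python
# Sketch of the account hygiene checks described above, using the DISA policy
# thresholds cited in the report. The account records are hypothetical.
from dataclasses import dataclass
from datetime import date, timedelta

INACTIVITY_LIMIT_DAYS = 35   # DISA policy: disable user IDs inactive this long
MAX_PASSWORD_AGE_DAYS = 90   # DISA policy: passwords must expire within 90 days

@dataclass
class Account:
    user_id: str
    last_login: date
    password_max_age_days: int  # 0 models "set to never expire"

def audit_accounts(accounts, today):
    findings = []
    for a in accounts:
        if (today - a.last_login).days > INACTIVITY_LIMIT_DAYS:
            findings.append(f"{a.user_id}: inactive, should be disabled")
        if a.password_max_age_days == 0 or a.password_max_age_days > MAX_PASSWORD_AGE_DAYS:
            findings.append(f"{a.user_id}: password expiration exceeds 90-day policy")
    return findings

today = date(1998, 6, 15)
accounts = [
    Account("admin01", today - timedelta(days=120), 0),   # dormant, never expires
    Account("clerk07", today - timedelta(days=3), 60),    # compliant
    Account("user36", today - timedelta(days=10), 180),   # 180-day expiration
]
for finding in audit_accounts(accounts, today):
    print(finding)
```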
Our network tests showed, for example, that because vendor payment system passwords and user IDs are transmitted across the local network and communication links in clear text, readily available software would permit any user to read them. Thus, a clerk could obtain the passwords and user IDs of employees with higher access and use this information to enter the vendor payment system and perform all payment processing functions. DOD does not encrypt passwords and user IDs for unclassified financial data. However, other technological controls could be used to improve user authentication procedures, such as a smart card. Alternatively, other internal controls could be implemented, such as supervisory review and validation of user activity. As with the selection of any internal control, consideration of these alternatives would entail an assessment of the costs and benefits of each. The control over remittance addresses remains a weakness. DFAS changed its policy in April 1997 to require that the contractor address listed in the contract be used as the remittance address, but it still permits the use of the invoice address if the invoice states that payment must be made to a specified address. This continues to afford a mechanism to misdirect payments for fraudulent purposes. In addition, the widespread access to the vendor payment system that allows users to enter changes to the remittance address, as discussed earlier, remains a weakness. The Defense Logistics Agency has an initiative under way intended to validate remittance addresses. Under the Central Contractor Registry, contractors awarded a contract on or after June 1, 1998, are required to be registered in order to do business with the government. While DFAS Denver Center officials did not have a target date for full implementation of the Registry, they expect that 80 percent of the eligible contracts will be included in the Registry by mid-1999. The Registry, which is accessed through the Internet using a password or manually updated using a standard form, is intended to ensure that the contractor providing payment data, including the remittance address, is the only one authorized to change these data. However, this process, while an improvement, still has vulnerabilities related to control over remittance address changes. First, as previously discussed, DOD’s computer systems are particularly susceptible to attack through connections on the Internet. In addition, once the addresses are downloaded from the Registry to the vendor payment system, they will be vulnerable to fraudulent or improper changes due to the access control weaknesses previously discussed. Therefore, Registry controls over the remittance addresses will be effective only to the extent that the access to remittance addresses currently held by DFAS and Air Force employees is eliminated or compensating controls are implemented. Internal controls are put in place not only to help ensure accountability over resources, but also to help an agency achieve full compliance with laws and regulations, such as the Prompt Payment Act of 1982, as amended. This act provides governmentwide guidelines for establishing due dates on commercial invoices and provides for interest payments on invoices paid late. Except where otherwise specified within contracts, the act generally provides that agencies pay within 30 days after the designated office receives the vendor invoice or the government accepts the items ordered as satisfactory, whichever is later. According to Office of Management and Budget Circular A-125, Prompt Payment, which provides implementation guidance under the act, if the government does not reject items received within 7 days, acceptance is deemed to occur on the 7th day after receipt. Payments made after the required payment date must include interest. 
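Taken together, these provisions define a mechanical due-date and interest computation, sketched below. The dates are invented for the example, the interest rate is a placeholder (the actual rate is set periodically by the Treasury), and simple interest is used as a simplification of the statutory computation.

```python
# Worked sketch of the Prompt Payment Act due-date rule described above:
# payment is due 30 days after the later of invoice receipt or acceptance,
# and acceptance is deemed to occur 7 days after delivery unless the items
# are rejected. Rate and dates are illustrative.
from datetime import date, timedelta

def payment_due_date(invoice_received, items_delivered, accepted=None):
    deemed_acceptance = items_delivered + timedelta(days=7)  # OMB Circular A-125
    acceptance = accepted or deemed_acceptance
    return max(invoice_received, acceptance) + timedelta(days=30)

def late_interest(amount, due, paid, annual_rate=0.06):  # placeholder rate
    days_late = (paid - due).days
    return round(amount * annual_rate * days_late / 365, 2) if days_late > 0 else 0.0

due = payment_due_date(invoice_received=date(1997, 3, 1),
                       items_delivered=date(1997, 3, 5))
print(due)                                                # 1997-04-11
print(late_interest(10_000, due, paid=date(1997, 5, 1)))  # 32.88 owed, 20 days late
```

Under these rules, whoever controls the recorded invoice receipt date also controls whether a payment appears late, which is why the falsification described next mattered.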
One performance measure used by DFAS to assess operating location performance is the amount of interest paid. The falsification of payment documentation to improve reported performance for on-time payments was a violation of 18 U.S.C. 1001. In addition, it undermined DFAS Dayton’s internal controls over payments and impaired its ability to detect or prevent fraud. According to DFAS internal review and Air Force investigative reports, the Staff Sergeant convicted of embezzlement had also instructed his branch employees to falsify invoice dates in an effort to improve reported payment performance, thereby depriving government contractors of interest on late payments. This was done by (1) altering dates on invoices received from contractors, (2) replacing contractor invoices with invoices created using an invoice template that resided on DFAS Dayton personal computers used by vendor payment employees, and (3) throwing away numerous other invoices. According to DFAS internal review and Air Force investigative reports, during 1996, DFAS Dayton also used faxed invoices to alter invoice receipt dates to avoid late payment interest required by the Prompt Payment Act. According to documents presented at the Staff Sergeant’s trial, this was done by photocopying the fax, manually changing the dates, and photocopying the document again. DFAS Dayton staff then faxed the photocopied document to their own office to create a new receipt date. Not only did this practice undermine late payment controls, but an environment in which altered documents were commonplace also made it more difficult to detect other fraudulent activity, such as the false invoices generated for personal financial gain. In addition, we found that in June 1996, DFAS Dayton implemented an Air Force-wide initiative to improve payment timeliness, which generally permitted (1) payment of invoices under $2,500 without receiving reports and (2) acceptance of remittance addresses recorded on invoices without further verification. As of October 1997, the payment of invoices without receiving reports was to be terminated based on legal concerns about compliance with prompt payment and advance payment statutes. Our review of selected fiscal year 1997 DFAS Dayton and DFAS Denver Directorate of Finance and Accounting Operations vendor payment transactions identified a number of problems, including inadequate documentation, which affect not only Prompt Payment Act compliance but also the ability to determine whether payments were proper or whether the government received the goods and services paid for under Air Force contracts. Further, without adequate supporting documentation for disbursements, DFAS cannot ensure that fraud has not occurred. For DFAS Dayton, we tested 27 vendor payment disbursement transactions made during fiscal year 1997 as part of our audit of the governmentwide consolidated financial statements. Our tests disclosed that 9 of the 27 disbursement transactions were not supported by proper payment documentation, which includes a signed contract, approved voucher, invoice, and receiving report. 
Of the remaining 18 disbursement transactions, the receiving report documentation for 12 did not properly document the date that goods and services were received. Instead, the receiving report documentation showed the date that the document was signed. At your request, we reviewed 77 vouchers for Bolling AFB contracts paid by DFAS Denver’s Directorate of Finance and Accounting Operations in 1997 and 1998 that were obtained by your staff during their review of the DFAS Denver Directorate’s vendor payment operations in March 1998. All 77 of the payment vouchers had deficiencies, ranging from incomplete information identifying the individual who received the goods and services to missing receiving reports. For example, 13 of the 77 DFAS Denver Directorate payment vouchers were replacement invoices that were marked “duplicate original” or “reprint,” possibly indicating that the original invoices had been lost or misdirected before being entered in the vendor payment system. In addition, 31 of the 77 vouchers contained receiving report documentation that omitted the date that goods and services were received. On March 25, 1998, in response to concerns regarding these 31 vouchers, the DFAS Denver Directorate revised its receiving report requirements to help ensure proper documentation of this date. However, at the end of our review in mid-August 1998, we were told that this problem had not yet been corrected at DFAS Dayton or the other vendor payment operating locations. Our review also showed that 2 of the 77 vouchers had discrepancies similar to those identified as part of the DFAS Dayton investigation. Specifically, one voucher had been voided and resubmitted later without the appropriate interest calculation. The other voucher included an invoice that appeared to have been created by a DFAS Denver Directorate employee because, according to the contract, the contractor lacked invoicing capability. The practice of creating invoices for contractors provides an opportunity for DFAS and Air Force employees to create false invoices. In the absence of computerized invoicing, contractors can submit billing letters that identify quantities, items billed, and costs. Thus, there appears to be no valid reason for DFAS or Air Force employees to create invoices. In addition, we reviewed five examples of altered invoices identified by DFAS Dayton staff who had raised concerns about the payment process. We obtained copies of the invoices from the Air Force Audit Agency in June 1998. In one case, the invoice was duplicated and then altered so that interest due the vendor for a late payment was charged to the Defense Stock Fund rather than to the appropriate Operation and Maintenance appropriation interest account. DFAS performance measures for late payments do not include interest paid from the Stock Fund. The other four invoices were created to alter the invoice dates by using an invoice template that is a standard file on Air Force and DFAS personal computers. These invoices were substituted for the original invoices submitted by the vendors to avoid interest payments. We also found that neither DFAS Dayton nor DFAS Denver’s Directorate of Finance and Accounting Operations tracks invoices, whether mailed or faxed, from the time they are received until they are entered into the vendor payment system. One means of tracking both mailed and faxed invoices would be for mail room employees to enter invoice information into the vendor payment system at the time the invoices are received. 
This control would help ensure that the payment team personnel who are measured on timely performance are not also responsible for establishing the invoice receipt date for one of the key documents that determines when a payment is late. Due to missing and altered records, we were unable to reconstruct the history of the two contracts associated with the Bolling AFB embezzlement to determine whether the Air Force received the goods and services it paid for under the contracts. On July 30, 1986, a $49.6 million contract was awarded to provide office automation hardware, software, maintenance, training, and contractor support services for Air Staff offices at the Pentagon and several other locations. Responsibility for managing the contract was assigned to a contracting office at Bolling AFB. The contract ran from July 30, 1986, through December 31, 1991. Under this contract, almost 500 delivery orders were used to acquire goods and services. Under the Federal Acquisition Regulation (FAR), all records, documents, and other files pertaining to contracts such as this must be maintained for 6 years and 3 months after final payment. According to a Bolling AFB contracting official, the last payment on this contract was made in December 1992. Therefore, records pertaining to this contract should be maintained until at least March 1999. Nevertheless, despite an extensive search of both DFAS and Air Force records, we were unable to locate documentation showing the total amount paid under the contract. Further, neither the Air Force nor DFAS Denver officials were able to locate all the files pertaining to these contracts. As agreed with your office, due to the magnitude of missing records, we did not make further attempts to reconstruct the payment history for the 1986 contract. A Bolling AFB contracting official told us that a team has been formed to close out the contract. Under the FAR, the team would need to confirm that a final invoice has been approved or a final payment has been made for goods and services received and accepted before closing the contract. The contractor’s report on its 1993 internal review of the contract indicated that its records identified approximately $38 million of goods and services that were delivered over a 5-1/2 year period under the 1986 contract. We were unable to locate the contractor records needed to verify this amount. Given the extent of missing records, DFAS and Air Force efforts to confirm that payment was made for goods and services received will be difficult, if not impossible. On March 13, 1992, a follow-on contract was awarded, effective January 3, 1992, to the company responsible for the first contract. As with the 1986 contract, responsibility for managing this contract was assigned to Bolling AFB and the same COTR. The 1992 contract provided for hardware and software maintenance, technical support, parts, training, and a computer maintenance database. This contract also used delivery orders to acquire goods and services. During the life of the contract, contracting staff awarded 41 delivery orders and 81 modifications to these delivery orders. Based on available Air Force records, the total amount obligated under the 1992 contract appears to be about $8.2 million. We were able to locate payment vouchers totaling $6.7 million. However, we also found invoices in the contract files totaling over $279,000 for which payment vouchers could not be located. Further, the DFAS Denver Directorate was unable to locate check registers. 
Thus, we were unable to determine whether these invoices had been paid. As was the case for the 1986 contract, due to poor recordkeeping, neither Bolling AFB nor the contractor was able to accurately determine the status of payments and deliveries under the 1992 contract. The last delivery order for the contract was dated October 1, 1995. However, the contract extended through September 1996. On March 31, 1998, the contractor submitted a final bill totaling $194,000, which listed 16 invoices for which full or partial amounts may still be owed by the Air Force. DFAS officials told us that they did not plan to pay the final bill until they had finished reviewing and validating the items included in it because they believe that payment has already been made for some of these items. We were also unable to determine whether the Air Force received the goods and services paid for under the two contracts because, in addition to the missing records, a number of improper and questionable procedures were followed for the receipt and control of equipment and services paid for under the contracts. As discussed earlier, from June 1988 until February 1993, the COTR had broad authority to order, receive, and accept goods and services. In ordering equipment, the COTR designated the delivery location and later signed for the receipt and acceptance of the equipment. Also, the COTR directed equipment to be delivered to or from an Air Force storage facility. Beginning in 1990, at the contractor’s request, procedures were changed so that the Air Force would sign for equipment purchased under the 1986 contract but let the contractor store the equipment at its warehouse until the Air Force was ready to take delivery. However, because neither the Air Force nor the contractor maintained accurate, complete property records on this equipment, we could not determine whether the Air Force received this equipment. Because of its desirability and portability, computer equipment is highly susceptible to theft. Under DOD’s Financial Management Regulation, pilferable items, such as personal computers, are required to be recorded in the property records. We attempted to determine whether the government received 29 computer equipment items identified as being maintained under the contract at Air Force locations in the Washington, D.C., area. We located 10 items and obtained documentation on the disposal of 3 items. Of the 16 remaining items, all of which were computer servers, only 4 were recorded in the property records. However, we were unable to locate those 4 servers. In addition, we could not locate or identify documentation for the 12 remaining servers. Property officials told us that computer equipment delivered and paid for under the contract was not always recorded in property records. In several instances, the COTR directed the contractor to bill for equipment as maintenance in order to avoid contract limitations on the amount of equipment that could be procured. The contractor’s 1993 internal review report stated that equipment was misdescribed as maintenance on 116 of 142 invoices reviewed. Although the 1992 contract required the contractor to develop a database to track equipment maintenance, neither Air Force nor contractor files contained complete maintenance records for equipment purchased under the contract. 
According to a contractor official, the contractor's 1993 internal review team inadvertently destroyed the equipment maintenance database that the contractor was required to develop and maintain under the contract. Further, while the 1992 contract required the contractor to provide certificates for completed training, Bolling AFB contract records did not contain training certificates.

Internal control weaknesses that contributed to past fraud in the Air Force's vendor payment process continue. DFAS and the Air Force have not developed adequate segregation of duties to ensure that one individual cannot establish a contract obligation, enter invoice and receiving report information, and change a remittance address. Moreover, the Air Force's vendor payment system is vulnerable to unauthorized users due to weaknesses in operating computer system and local network security. Until DFAS and the Air Force address control weaknesses in systems and processes and maintain accountability over goods and services received, the Air Force vendor payment process will continue to be vulnerable to fraudulent and improper payments.

To address the continuing vulnerabilities in the vendor payment process, we recommend that the DFAS Director
- strengthen payment processing controls by establishing separate organizational responsibility for entering (1) obligations and contract information, (2) invoice and receiving report information, and (3) changes in remittance addresses;
- revise vendor payment system access levels to correspond with the segregation of organizational responsibility delineated above; and
- reduce the number of employees with vendor payment system access by (1) identifying the minimum number of employees needing on-line access to specific functions, (2) determining whether the access levels given to each user are appropriate for the user's assigned duties, and (3) removing access from employees who are no longer assigned to these functions.

To strengthen computer security for the vendor payment system, we recommend that the DISA Director (1) correct the system security control weaknesses in the operating system (mainframe) on which DFAS Denver's vendor payment system application runs and (2) assess the costs and benefits of implementing technological and/or administrative controls over user IDs and passwords. To ensure that internal controls are properly designed and operating as intended, we recommend that the DFAS Director revise the operational review program to include assessments of the internal controls over the vendor payment process. To help ensure that vendor payments are proper and that they comply with Prompt Payment Act time frames, we recommend that the DFAS Director ensure that (1) the date that invoices are received and the date that goods and services are received are properly documented and (2) invoices are tracked from receipt through disbursement of funds. In addition, we recommend that the DFAS Director no longer permit the creation of contractor invoices by DFAS employees and require those contractors that lack invoicing capability to submit billing letters.
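To illustrate the segregation that the first of these recommendations describes, the following is a minimal sketch of a duty-conflict check. The role names, user assignments, and data structures are hypothetical illustrations, not features of the actual Integrated Accounts Payable System.

```python
# Minimal sketch of a separation-of-duties check over vendor payment
# functions. Role names and user assignments are hypothetical.

# The three functions the recommendation assigns to separate organizations.
CONFLICTING_ROLES = {
    "enter_obligations",          # obligations and contract information
    "enter_invoices_receiving",   # invoice and receiving report information
    "change_remittance_address",  # remittance address changes
}

def duty_conflicts(roles):
    """Return the conflicting payment functions a user holds; holding
    more than one indicates inadequate segregation of duties."""
    held = set(roles) & CONFLICTING_ROLES
    return held if len(held) > 1 else set()

# Hypothetical access list: the second user could submit all the
# information needed to create a payment and redirect it.
users = {
    "clerk_a": {"enter_obligations"},
    "clerk_b": {"enter_invoices_receiving", "change_remittance_address"},
}
for name, roles in users.items():
    conflicts = duty_conflicts(roles)
    if conflicts:
        print(f"{name}: segregation-of-duties violation: {sorted(conflicts)}")
```

In practice, such a check would be run against the payment system's actual access tables; the point is simply that once the three functions are distinguished, conflicting access becomes mechanically detectable.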
We are sending copies of this report to the Ranking Minority Member of the Subcommittee on Administrative Oversight and the Courts, Senate Committee on the Judiciary; the Chairmen and Ranking Minority Members of the Senate Committee on Armed Services, the House Committee on National Security, the Senate Committee on Governmental Affairs, the House Committee on Government Reform and Oversight, and the House and Senate Committees on Appropriations; and the Director of the Office of Management and Budget. We are also sending copies to the Secretary of Defense; the Secretary of the Air Force; the Director, Defense Finance and Accounting Service; the Director, Defense Information Systems Agency; and the Director, Defense Logistics Agency. Please contact me at (202) 512-9095 if you or your staff have any questions. Major contributors to this report are listed in appendix II.

In accordance with your request, our objectives were to (1) identify internal control weaknesses that contributed to the Bolling AFB, Castle AFB, and DFAS Dayton fraud, (2) provide our observations on whether the same or similar internal control weaknesses at the locations covered by our review continue to leave the Air Force vulnerable to fraud, and (3) to the extent possible, reconstruct the history of the two contracts associated with the Bolling AFB fraud to determine whether the government received the goods and services paid for under the contracts. To identify internal control weaknesses that contributed to the Bolling AFB and the Castle AFB and DFAS Dayton fraud, we reviewed investigative reports by DFAS internal reviewers and the Air Force's Office of Special Investigations on how these incidents of fraud were accomplished. We also discussed the control weaknesses related to the fraud cases with DFAS Denver and Dayton managers. We compared the activities involved in the fraud with GAO internal control standards and federal agency requirements for assessing controls contained in OMB Circular A-123, Management Accountability and Control. Our work was limited to a review of the fraud incidents and related documentation for which the two individuals were convicted and does not address any ongoing investigations involving any additional participants.

Our observations on the current internal control environment are based on the following:
- a review of the current vendor payment processes at the DFAS Denver Directorate of Finance and Accounting Operations and DFAS Dayton;
- a test of 27 fiscal year 1997 DFAS Dayton vendor payment transactions included in a statistical sample of payment transactions tested as part of our governmentwide consolidated financial statement audit effort;
- a review of 77 vendor payment vouchers processed by DFAS Denver in 1997 and 1998 that were provided to us by Subcommittee staff (we were asked to analyze this sample, which was obtained by the Subcommittee staff as part of its review of DFAS Denver vendor payments);
- a review of five examples of altered invoices identified by DFAS Dayton staff, which we obtained from the Air Force Audit Agency;
- a test of computer system access controls for the vendor payment system (the Integrated Accounts Payable System) and the Central Contractor Registry; and
- discussions with DFAS, Air Force, and Single Agency Manager officials.
To identify significant operating computer system control weaknesses, we reviewed the Defense Information Systems Agency's (DISA) Security Readiness Review methodology and compared it with GAO's Financial Information System and Control Audit Methodology. We also considered the results of Security Readiness Reviews performed by DISA at the Defense Megacenters in San Antonio, Texas, and Warner-Robins AFB, Georgia, which are the data processing centers for DFAS Denver and Dayton, respectively.

In attempting to summarize the history of the 1986 and 1992 Bolling AFB contracts, we reviewed Bolling AFB contract files to determine the purpose, scope, and cost of the 1986 Air Staff Office Automation System contract and the 1992 Air Staff CAISS Air Force Follow-on contract and reviewed the 1986 and 1992 contract activity using records obtained from Bolling AFB, the Air Force finance office at the Pentagon, and DFAS Denver's Directorate of Finance and Accounting Operations. In our efforts to determine whether the Air Force received the goods and services paid for under the 1986 and 1992 contracts, we reviewed contract records, payment documents, and systems data at Bolling AFB, the Air Force finance office at the Pentagon, and the Single Agency Manager office at the Pentagon. We performed our work from October 1997 through August 1998 in accordance with generally accepted government auditing standards. We conducted our review at the 11th Wing Contracting Squadron at Bolling AFB, Washington, DC; the Single Agency Manager office at the Pentagon in Arlington, Virginia; DFAS Dayton in Ohio; the DFAS Denver Center and DFAS Denver Directorate of Finance and Accounting Operations; and the Defense Megacenters at San Antonio, Texas, and Warner-Robins AFB, Georgia. We requested comments on a draft of this report from the Secretary of Defense or his designee. We had not received comments by the time we finalized our report.

Thomas Armstrong, Assistant General Counsel; Andrea Levine, Senior Attorney
Pursuant to a congressional request, GAO reviewed two specific cases of fraud involving Air Force vendor payments, focusing on: (1) internal control weaknesses that contributed to the two fraud cases; (2) observations on whether the same or similar internal control weaknesses continue to leave the Air Force vulnerable to fraud or improper payments; and (3) reconstructing the history of the two contracts associated with the Bolling Air Force Base (AFB) fraud to determine whether the government received the goods and services paid for under the contracts. GAO noted that: (1) the two cases of fraud resulted from a weak internal control environment; (2) the lack of segregation of duties and other control weaknesses created an environment where employees were given broad authority and the capability, without compensating controls, to perform functions that should have been performed by separate individuals under proper supervision; (3) similar internal control weaknesses continue to leave Air Force funds vulnerable to fraudulent or improper vendor payments; (4) for example, as of mid-June 1998, over 1,800 Defense Finance and Accounting Service (DFAS) and Air Force employees had a level of access to the vendor payment system that allowed them to enter contract information, including the contract number, delivery orders, modifications, and obligations, as well as invoice and receiving report information and remittance addresses; (5) no one individual should control all key aspects of a transaction or event without appropriate compensating controls; (6) this level of access allows these employees to submit all the information necessary to create fraudulent and improper payments; (7) in addition, the automated vendor payment system is vulnerable to penetration by unauthorized users due to weaknesses in computer security, including inadequate password controls; (8) further, DFAS lacked procedures to ensure that the date that invoices were received for payment and the date that goods and services were received were properly documented; (9) these are critical dates for ensuring proper vendor payments and compliance with the Prompt Payment Act, which requires that payments made after the due date include interest; (10) missing records, another indicator of a weak internal control environment, prevented GAO from reconstructing the complete history of the two Air Force contracts associated with the Bolling AFB fraud; and (11) GAO was also unable to determine whether the Air Force received the goods and services paid for under these contracts because, in addition to missing records, a number of improper procedures were followed for receipt and control of equipment and services paid for under the contracts.
In July 1995, we reported on IRS' progress in implementing some of the business and technological components of its modernization effort, known as Tax Systems Modernization (TSM). Although we said that IRS had made some progress, we also said that pervasive management and technical weaknesses existed that placed the modernization effort at risk. Among other things, (1) IRS did not have a business strategy to maximize electronic filing, the result of which could slow the planned decrease in the workload of paper processing systems; and (2) IRS lacked the full range of managerial and technical foundations to realize its modernization objectives. Some of these key foundation components were a complete cost/benefit analysis of the overall modernization effort and thorough testing of individual systems before they were implemented. In September 1996, we reported on IRS' progress in addressing the managerial and technical weaknesses we identified in July 1995. We concluded that although IRS was working to resolve these weaknesses, it had not fully satisfied any of our recommendations.

SCRIPS was one of the systems that was designed under the conditions cited in our July 1995 report. It was intended to replace the aging OCR equipment that IRS had been using to process all of the paper Federal Tax Deposit (FTD) coupons; almost all of the paper information returns (e.g., Forms 1099); some of the individual income tax returns filed on Form 1040EZ; and some employment tax returns (Form 941). In addition, SCRIPS was expected to process Form 1040PC, a paper form that taxpayers can generate when they use computer software to prepare a tax return. Figure 1 shows the various SCRIPS components. Under the character recognition and image capture component of SCRIPS, scanners (1) read information from the document and convert the information to machine-readable format for later computer processing and (2) create an image of the document. In the event of recognition errors during document scanning, IRS staff can access an image of the tax return in lieu of having to locate the original paper tax return to make corrections. IRS expected that having an image of the tax return would improve the productivity of staff doing data validation. In addition, the images of FTD coupons and certain information return documents are stored on optical disk for later use. The older OCR systems used microfilming as the storage medium—essentially a manual process that required more physical storage space than optical disks. IRS continues to retain a paper copy of the Form 1040EZ rather than store the image because IRS has certain legal concerns.

IRS expected that SCRIPS would provide faster, more accurate document processing. Specifically, IRS expected that SCRIPS would result in a 20 percent productivity increase over manual data entry and a 10 percent productivity increase over older OCR equipment. Other expected benefits included lower costs for system maintenance and storage of tax return data. Originally, IRS planned to implement SCRIPS in all 10 service centers where it currently processes paper returns. However, after the contract was awarded in February 1993, IRS decided to consolidate paper tax return processing in five centers. Accordingly, SCRIPS was tested in Cincinnati in the summer of 1994, and the other four SCRIPS centers began using SCRIPS between September and November 1994.
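To make the character recognition and image capture flow described above concrete, the following is a minimal sketch of the processing logic. The confidence threshold, field names, and stub functions are hypothetical illustrations and are not drawn from the actual SCRIPS design.

```python
RECOGNITION_THRESHOLD = 0.95  # hypothetical confidence cutoff

def recognize_characters(image):
    """Stub for the character-recognition step; returns the extracted
    fields and an overall recognition confidence."""
    return {"tin": "000-00-0000", "wages": "21500.00"}, 0.90

def operator_correct(image, fields):
    """Stub for data validation: the operator fixes low-confidence fields
    from the stored image instead of retrieving the paper return."""
    fields["wages"] = "12500.00"  # operator corrects a transposed amount
    return fields

def process_document(image):
    fields, confidence = recognize_characters(image)
    if confidence < RECOGNITION_THRESHOLD:
        # The stored image stands in for the original paper document.
        fields = operator_correct(image, fields)
    return fields

print(process_document(b"scanned-image-bytes"))
```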
Our objectives were to (1) determine the primary causes for SCRIPS performance problems in 1995; (2) assess whether those problems were corrected as of September 30, 1996; and (3) provide a status report on IRS' future plans for SCRIPS. To accomplish these objectives, we did the following: We interviewed National Office officials in the SCRIPS project office and the taxpayer service function, which has responsibility for those tax forms processed on SCRIPS, to obtain their views on the extent to which SCRIPS' performance improved during fiscal year 1996. We interviewed IRS contracting office officials to determine what, if any, performance requirements the contractor was being held to in 1995 and 1996. We interviewed officials at all five SCRIPS service centers to determine whether performance had improved in 1996 and to identify the performance indicators officials were using to evaluate SCRIPS performance. We observed SCRIPS in operation at the service centers in Memphis and Cincinnati and at the program development site in Washington, D.C. We interviewed contractor officials about the workload requirements that IRS had specified for SCRIPS. We reviewed IRS business cases for SCRIPS that documented the objectives of the system, its expected benefits, and its estimated cost. IRS prepared business cases for SCRIPS in February 1992, April 1992, December 1993, October 1994, and December 1995. To assess SCRIPS, we used the performance expectations that were included in the October 1994 business case because it was the one that was in effect at the beginning of the 1995 filing season. We reviewed IRS evaluations and reports on the implementation of SCRIPS, including an August 1995 performance evaluation report; a December 1995 post-implementation review report that assessed whether business goals were met; a May 1995 Internal Audit report on SCRIPS testing in Cincinnati; a February 1996 Internal Audit report on the rollout of SCRIPS; and a September 1996 report on the results of an investment evaluation of SCRIPS. We compared available SCRIPS performance data for January through September 1995 and 1996. We computed composite rates—the number of documents processed per hour—for Form 1040EZ, information returns, and FTD coupons. To compute those rates, we used IRS data on the number of hours spent for various aspects of tax return processing, including scanning, data correction, and data validation for January through September 1995 and 1996. Our composite rate calculations, similar to those done by IRS shortly after SCRIPS was implemented, did not include time for document preparation or another function that is referred to as code and edit.

We did our audit work between October 1995 and September 1996 in accordance with generally accepted government auditing standards. Other than identifying obvious reporting errors, we did not verify the accuracy of IRS' data on the number of hours spent for various aspects of tax return processing. We did attempt to determine what elements were included in IRS' cost estimates for SCRIPS, but we did not attempt to verify the accuracy of the costs for those elements. We requested comments on a draft of this report from the Commissioner of Internal Revenue or her designated representative. Responsible IRS officials, including the National Director for Submission Processing, the Assistant Commissioner for Forms and Submission Processing, and the SCRIPS project manager, provided IRS' comments in a November 13, 1996, meeting.
Their comments on our recommendation were reiterated in a November 19, 1996, memorandum from the Acting Chief of Taxpayer Service. IRS' comments are summarized and evaluated on pages 22 and 23.

SCRIPS performed well below expectations in 1995. Extensive, unscheduled system downtime and slower than expected processing rates affected SCRIPS' ability to meet the expectations that IRS had established before the start of the 1995 filing season. Hardware problems with the scanner contributed to significant amounts of system downtime; various software problems contributed to slow processing rates. In 1995, SCRIPS processed 19 percent more FTD coupons than IRS expected. However, as shown in table 1, SCRIPS did not meet IRS' volume expectations for the three other document types scheduled for processing in 1995—information returns, Form 1040EZ, and Form 941 (IRS did not expect to start processing Form 1040PC on SCRIPS until 1996). SCRIPS exceeded expectations for FTD coupons because more coupons were filed than expected and because service center officials placed the highest priority on processing those forms. That priority stemmed from (1) IRS procedures that require that 90 percent of all FTD coupons be processed within 24 hours of receipt; and (2) the absence of a backup processing system, because IRS had canceled its maintenance contract for the older OCR equipment that had been processing FTD coupons. IRS had backup systems for the other documents that SCRIPS was expected to process in 1995. The manual data entry system could be used for forms 1040EZ and 941, and IRS extended the maintenance contract for the older OCR equipment that was processing information returns.

Not only did SCRIPS not process the number of forms expected, but the speed with which it did its processing was slower than expected. As shown in table 2, SCRIPS did not meet the processing rate expectations for any of the three document types it processed in 1995. Moreover, the actual processing rate for Forms 1040EZ in 1995 was about 7 percent less than the rate achieved in 1995 by manual data entry. Because of performance problems, two of the five service centers stopped using SCRIPS to process Forms 1040EZ in 1995. Instead, they reverted to manual data entry, which required using more staff resources than planned, thus increasing processing costs. IRS had planned to use 25.6 staff years to process other-than-full-paid Forms 1040EZ during the 1995 filing season in the five SCRIPS centers but used 66.5 staff years instead.

Hardware problems contributed to SCRIPS' performance deficiencies in 1995. According to IRS' August 1995 performance evaluation report, the SCRIPS scanner experienced the most hardware failures, which contributed to a "substantial amount of downtime" at the centers. According to IRS data, between April and June 1995, the five service centers experienced about 791 hours of unscheduled downtime. In addition, the scanner jammed when paper was extremely thin, which was sometimes the case with information returns. Software problems also occurred. According to IRS officials and evaluations of SCRIPS, the image controller did not operate fast enough to keep up with the scanner because the operating system software was inefficient. The image controller is to track each scanned document image file to ensure that it moves from the scanner and ultimately to the server where it is stored. This component helps reconcile the number of documents scanned to the number of documents in the database.
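A minimal sketch of the kind of reconciliation this component supports appears below; the block contents and document identifiers are hypothetical, not part of the actual SCRIPS software.

```python
def run_to_run_balance(scanned_ids, processed_ids):
    """Compare the documents captured at scanning with those that
    completed processing; any difference must be explained before a
    block of work (e.g., FTD coupons) balances."""
    scanned, processed = set(scanned_ids), set(processed_ids)
    return {
        "scanned": len(scanned),
        "processed": len(processed),
        "missing": sorted(scanned - processed),     # scanned but never finished
        "unexpected": sorted(processed - scanned),  # finished but never scanned
    }

# Hypothetical block of five scanned coupons, one of which stalled in processing.
print(run_to_run_balance(["d1", "d2", "d3", "d4", "d5"],
                         ["d1", "d2", "d3", "d5"]))
```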
IRS officials and evaluations of SCRIPS also indicated that SCRIPS did not provide accurate reports on this reconciliation—referred to as "run-to-run" balancing. The FTD coupon run-to-run balancing report, for example, did not provide total counts to help IRS staff confirm that all of the FTD coupons in a block that was scanned had in fact been processed. This balancing is particularly important for FTD coupons because IRS needs to accurately classify tax deposits to the appropriate Treasury account. At one center we visited, officials told us that they had to revert to manual counts for doing run-to-run balancing.

Many of the problems experienced with SCRIPS in 1995 might have been anticipated if IRS had thoroughly tested SCRIPS before installing the system in the other four service centers. The pilot test of SCRIPS was incomplete because it (1) did not certify all software applications that were to be used during 1995; and (2) did not test SCRIPS' ability to handle peak processing volumes, such as those experienced in the tax return filing season. An organization tests a new or modified system to detect system design and development errors and to correct them before putting a system into operation. In our July 1995 report on TSM, we said that although IRS recognized the importance of testing, it had not yet developed a complete and comprehensive testing plan for TSM. We said that individual TSM systems were developing their own test plans, which IRS described as rudimentary and inadequate. This testing environment was in effect when IRS did a pilot test of SCRIPS at the Cincinnati Service Center in the summer of 1994. The purpose of a pilot test is to evaluate the performance of a system in one location before deciding whether to implement the system at other locations. IRS uses the pilot test to certify that the system is meeting its program or business objectives. During the pilot test, IRS is to collect data on the performance of the system and compare the data against established performance goals to certify that the system is performing as expected.

Due to delays in receiving and testing software, only the FTD coupon application was certified as a result of the pilot test. The certifications for Forms 1040EZ and information returns were not done due to management concerns that conducting these certifications would delay the rollout of SCRIPS to five centers, which was scheduled for January 1995. As a result of incomplete testing, for example, IRS did not identify problems with the information returns software until SCRIPS was fully operational. Due to these software problems, a higher percentage of returns were sent to data validation than originally anticipated, thus decreasing system productivity. According to IRS officials, those problems were corrected before the 1996 filing season.

The pilot test was also incomplete because it did not (1) provide IRS a clear indication of SCRIPS' ability to perform under peak workload conditions and (2) show the impact of running multiple software applications. If the system had been tested at larger volumes, running multiple software applications, problems might have been identified earlier and resources could have been diverted to address performance problems before the system's rollout. According to an Internal Audit report on the pilot test, the contract required that SCRIPS process 200,000 documents a day during peak periods. During the 1-day volume test in the pilot, only 45,000 documents were scanned into the database.
The remaining components of the system processed an additional 142,000 documents that had been scanned before the test. Thus, the scanner, one of the primary sources of downtime during 1995, was not tested under levels IRS experiences in a production environment. Also, only 10,000 documents were processed completely—just 5 percent of the daily production workload specified in the contract. According to IRS officials, a full production test plan had been developed in December 1993 that would simulate peak filing season volumes. However, the plan was never implemented. One reason IRS officials provided for not implementing the test plan was that IRS never had full access to a SCRIPS system to do the test. The system that IRS was to use for the test was the same system that the contractor was using for systems development work. Another factor that IRS officials cited was a sense of urgency to roll out SCRIPS for FTD coupons because IRS had no backup processing system.

IRS' December 1995 report on the post-implementation review of SCRIPS cited several problems with the testing of SCRIPS in addition to those that occurred in the pilot test. For example, although an equipment acceptance test was done, the test was not adequate to validate the system's readiness for production. The report also stated that IRS waived an "operational capabilities demonstration" that would have included (1) tests for the readability of forms and (2) measures of the number of documents scanned per hour. The report said that such a test, done as part of the contract award process, may have shown that the system could not meet the minimum performance requirements. The post-implementation review team proposed that tests be done to verify that the system could handle the production workload. It also stated that if such a test could not be done, a simulated test should be done focusing on the performance of key areas of the system, particularly those that could lead to processing bottlenecks. It also proposed a 1-year pilot test for forms that may be added to SCRIPS in the future and proposed that the pilot test include the peak volumes of a filing season.

Officials in the five service centers that used SCRIPS in 1996 said that it was performing significantly better than it did in 1995. IRS officials told us that the primary performance expectation for SCRIPS in 1996 was system stabilization. One of the primary indicators these officials used in evaluating stabilization was the amount of downtime. According to IRS service center officials and available IRS data, downtime decreased substantially in 1996. This reduction enabled SCRIPS to process more Forms 1040EZ and information returns in 1996 while continuing to process all FTD coupons. In addition, the processing rates for SCRIPS (i.e., the number of documents processed per hour) improved for two document types—Form 1040EZ and FTD coupons. Despite these improvements, the system, as of September 30, 1996, (1) was not processing all the forms that the October 1994 business case said it would process, (2) was expected to cost more than original estimates, and (3) was not expected to provide the estimated labor savings that were cited in the October 1994 business case. According to service center officials, SCRIPS performed "significantly better" in 1996 than it did in 1995 because the system experienced far less downtime than in 1995. IRS did not begin tracking downtime in 1995 until April.
Comparable data for April through June 1995 and 1996 show that unscheduled downtime did decrease significantly—from about 791 hours to 43 hours. As shown in table 3, mostly because of less unscheduled system downtime, SCRIPS processed many more documents during the first 9 months of 1996 than it did during the first 9 months of 1995. Also, according to IRS officials, new software was installed in November 1995 to correct the run-to-run balancing problems that we discussed earlier. According to these officials, since that software was installed, SCRIPS centers have not reported problems with tracking the number of documents scanned and processed. Also in 1996, as shown in table 4, processing rates increased for Forms 1040EZ and FTD coupons.

Despite the improved performance in 1996, SCRIPS is still doing much less than expected. In addition, estimated costs have increased and estimated labor cost savings have decreased. The October 1994 business case stated that in 1996 SCRIPS would be processing (1) all Forms 1040EZ, (2) all FTD coupons, (3) all information returns, (4) 93 percent of the Forms 941, and (5) 50 percent of the Forms 1040PC. During the first 9 months of 1996, as in 1995, SCRIPS processed all the FTD coupons IRS had received but no forms 1040PC or 941. Also, although SCRIPS processed more Forms 1040EZ and information returns during the first 9 months of 1996 than during the first 9 months of 1995, it still processed only about 50 percent of the Forms 1040EZ and about 60 percent of paper information returns. As discussed in the next few paragraphs, this reduced level of performance, compared to the October 1994 business case, stems in large part from a major change in IRS' plans after the SCRIPS contract was awarded. Furthermore, IRS officials did not know the extent to which hardware and software modifications that had been made for 1996 and those that were planned for later in the year would affect SCRIPS' ability to process more Forms 1040EZ and information returns. Therefore, IRS officials did not significantly increase the expectations for SCRIPS in 1996 over those that they had in 1995.

As specified in the contract, SCRIPS was designed on the assumption that it would be installed in each of the 10 service centers that were then processing paper returns. However, in December 1993—10 months after the contract was awarded—IRS announced plans to consolidate the processing of paper tax returns in five centers. As a result, although the total workload for SCRIPS remained the same, the volume to be processed by any one of the five SCRIPS service centers, on average, doubled. Thus, the system that the contractor had designed would not meet the workload requirements without further systems development work. In May 1994, IRS attempted to revise the original contract to meet the volume requirements at five service centers. According to IRS' post-implementation review report, a statement of work was written to revise the contract requirements to accommodate the five service center scenario. In December 1994, at IRS' request, the contractor proposed changes totaling about $21 million. According to contractor officials, that proposal represented an interim attempt to meet the new workload requirements under a five service center scenario. According to IRS officials, IRS could not afford all of the proposed modifications. Also, IRS officials did not believe that all the proposed changes were needed. As a result, negotiations on these proposals were never completed.
Thus, IRS decided to purchase five systems, one for each of the five SCRIPS service centers. To compensate for having fewer systems than intended, IRS decided to (1) continue paper processing of Forms 1040EZ in the other five service centers and (2) extend the maintenance contract for the OCR equipment that was being used to process information returns. After the 1995 filing season, IRS issued a statement of work for system enhancements that would help stabilize SCRIPS performance in 1996. According to IRS officials, they purchased those enhancements that they believed offered the greatest potential for improving SCRIPS' performance in fiscal year 1996. For example, IRS purchased a third scanner for each service center that could (1) be used as a backup if one of the two primary scanners failed and (2) provide the ability to scan documents ahead and have them wait in queue for further processing. Many of these enhancements were implemented late in the 1996 filing season, after most of the Forms 1040EZ and information returns had been processed.

According to IRS' Office of Assistant Chief Counsel, IRS cannot hold the contractor to any specific performance requirements for the number of documents that SCRIPS must process within a specific time period (e.g., per week, per hour). When IRS modified the contract to reflect that SCRIPS would be put in 5 centers instead of 10, according to the Assistant Chief Counsel's Office, it did not clearly establish throughput requirements—the number of documents to be scanned per hour. Thus, despite paying for enhancements for 1996, IRS has determined that it cannot currently hold the contractor to any specific performance requirements. In February 1996, IRS Internal Audit recommended that IRS finalize throughput requirements for SCRIPS and do a test to determine whether the contractor is meeting the requirements. IRS officials told us that they are examining options for incorporating throughput requirements into the contract. IRS officials said that they would have a better foundation for establishing throughput requirements once IRS evaluates the impact of the enhancements on SCRIPS' capacity. According to contractor officials, given the enhancements made in 1996, they believe SCRIPS is capable of processing more documents than it has. They pointed out that SCRIPS' capability to process documents depends not only on the system's design but also on IRS' human resource decisions, such as the number of work stations that are staffed and employee incentives (or lack thereof).

IRS officials tested SCRIPS in late September 1996. According to the test plan, the purpose of the test was to determine (1) the maximum number of FTD coupons, Forms 1040EZ, and information returns that SCRIPS can process; (2) the amount of free time, if any, that will be available to process Forms 941; (3) any bottlenecks in the SCRIPS system that can be eliminated to increase system throughput; and (4) the performance thresholds that could be put into the SCRIPS contract. However, because the software application for Form 941 was not complete, it would have been difficult to fully assess SCRIPS' processing capability. The results of the test were not available when we completed our audit work. Although the workload being processed on SCRIPS continues to be less than expected, SCRIPS' estimated costs have risen. Also, anticipated labor cost savings have decreased.
According to IRS’ post-implementation review report, previous cost estimates for SCRIPS have ranged from $133 million to $209 million. The estimate of $133 million was made in February 1992 and again in April 1992. That cost estimate, however, assumed that SCRIPS would be implemented in 1994 and did not include the cost of maintaining SCRIPS. The current life-cycle cost estimate for SCRIPS is $288 million, which includes at least $20 million for maintenance. That estimate was included in the Department of the Treasury’s May 6, 1996, report to Congress on IRS’ progress in responding to our recommendations on the managerial and technical weaknesses of TSM. We could not determine how much IRS has already spent on SCRIPS since its inception because IRS does not have an accurate cost accounting system. Using the latest life-cycle cost estimate, SCRIPS is estimated to have cost about $145 million from fiscal year 1989 through fiscal year 1996. In October 1994, IRS estimated that SCRIPS would provide about $17 million in labor cost savings from fiscal years 1994 through fiscal year 2000. In September 1995, IRS lowered that estimate to about $5 million. Also, IRS’ September 1996 investment evaluation report on SCRIPS concluded that the system will yield a negative return on investment (i.e., costs will exceed benefits) from 1991 to 2001. However, the evaluation report stated that its return on investment estimate does not fully capture the operational benefits of the hardware and software enhancements that were made since the end of fiscal year 1995. Therefore, the report concludes that the final judgment on SCRIPS’ performance cannot be made until after the 1997 filing season. As discussed previously, SCRIPS was not used to process forms 1040PC and 941 in 1995 or 1996. In July 1995, IRS decided to terminate all software programming for the 1040PC because of SCRIPS’ instability, a lack of system capacity, and a number of software application problems. It is uncertain when, if at all, SCRIPS will be used to process Forms 941. The President’s fiscal year 1997 budget request included $850 million for TSM, about $38 million of which was for SCRIPS. In a June 6, 1996, letter, Treasury submitted a revised TSM funding request of $664 million to the House Appropriations Committee, of which about $30 million was for SCRIPS. According to the letter, at this funding level, SCRIPS would not be used to process Forms 941 for fiscal year 1997. Congress subsequently appropriated $336 million for TSM for fiscal year 1997. About 60 percent of that appropriation is earmarked for operational TSM projects, such as SCRIPS, but it was unclear when we prepared this report how much IRS would allocate to SCRIPS. However, we would expect that IRS will continue funding SCRIPS because the project is operational and IRS has no backup system for FTD coupons. In addition to funding constraints, decisions on the use of SCRIPS beyond fiscal year 1997 for Forms 941, according to IRS officials, will hinge on: (1) the results of the September 1996 test of SCRIPS, which we discussed earlier; and (2) Investment Review Board actions on recommendations made by a paper processing task team. In March 1996, IRS convened a task team to develop a paper processing strategy due to delays in implementing various aspects of IRS’ tax return processing vision. 
Specifically, IRS had originally expected that (1) a significant number of returns would be received electronically, thereby reducing the need for some manual data entry; and (2) DPS would be positioned to begin processing the remaining paper tax returns. As we reported in October 1995, IRS' electronic filing program is falling short of expectations. Current estimates indicate that IRS may receive only 33 million electronic returns in 2001 rather than the 80 million that IRS had set as its goal. The shortfall in meeting IRS' 80-million goal stems from the lack of a comprehensive business strategy to attract taxpayers to electronic filing. IRS is currently trying to develop such a strategy. Also, IRS announced on October 8, 1996, that it was terminating the DPS project because of "revised priorities and budget realities for the next several years." Among other things, the task team developed a paper processing strategy based on the assumption that the maximum number of electronic returns that can be expected in 2001 is about 33 million. In addition, the team evaluated the need for a replacement system for IRS' current manual data entry system. According to IRS officials, one part of the paper processing strategy may be to contract out the processing of some documents. These documents could include information returns because the processing of those documents is not as time-sensitive as the processing of income tax returns. In addition, in the future, more FTD coupons are to be submitted electronically as called for in the North American Free Trade Agreement. For example, IRS estimates that about 44 percent of the FTD coupons will be received electronically in fiscal year 1998, compared with only 1 percent in fiscal year 1996. Thus, given that some of SCRIPS' existing workload could decrease or be contracted out, IRS could possibly increase the number of Forms 1040EZ that SCRIPS could process or add other forms, such as Forms 941, without requiring any additional system capacity beyond what is currently available.

Because SCRIPS was developed before IRS started taking actions to address the managerial and technical weaknesses of TSM that we identified in July 1995, SCRIPS suffered from some of those weaknesses that we said would contribute to such things as cost overruns and failure to meet mission goals. SCRIPS is expected to cost more than originally estimated and, according to IRS' September 1995 estimates, could provide less than one-third of the originally expected labor cost savings. We recognize that IRS is now completely reliant on SCRIPS for processing paper FTD coupons. However, the decrease in expected labor cost savings and the increase in estimated costs raise questions about the cost-effectiveness of SCRIPS. In addition, one of the most critical weaknesses for SCRIPS was a lack of thorough and complete system testing before the system was rolled out to five service centers. Although IRS tested SCRIPS in September 1996 to determine the maximum number of Forms 1040EZ, information returns, and FTD coupons SCRIPS can process, that test does not substitute for a test of SCRIPS' ability to process any new document types along with the existing ones under a production environment that replicates peak volume conditions.
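To make concrete why the pilot's 1-day volume test fell so far short of a peak-load test, the following sketch applies the figures cited earlier: the 200,000-document daily contract requirement and the 45,000 documents scanned and 10,000 processed completely during the test. The calculation itself is illustrative and is not part of IRS' test plan.

```python
CONTRACT_PEAK_PER_DAY = 200_000  # documents per day required by the contract

def peak_coverage(documents_tested):
    """Percentage of the contractual daily peak that a test exercised."""
    return 100.0 * documents_tested / CONTRACT_PEAK_PER_DAY

print(f"Scanned during the volume test: {peak_coverage(45_000):.1f}% of peak")  # 22.5%
print(f"Processed completely:           {peak_coverage(10_000):.1f}% of peak")  # 5.0%
```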
We recommend that before deciding to increase the percentage of Forms 1040EZ or information returns that SCRIPS processes or using SCRIPS to process other tax forms, such as Forms 941, the Commissioner (1) do a cost-benefit analysis that includes examining the costs and benefits of alternative ways for processing those forms, such as those developed by the paper processing task team; and (2) if the analysis shows that it is cost-effective to have SCRIPS process Forms 941, ensure that IRS tests SCRIPS' ability to process the existing software applications along with any new software applications that may be added using peak volumes or volumes simulating peak workload conditions to ensure that SCRIPS can meet performance expectations.

We requested comments on a draft of this report from the Commissioner of Internal Revenue or her designated representative. Responsible IRS officials, including the National Director for Submission Processing, the Assistant Commissioner for Forms and Submission Processing, and the SCRIPS project manager, provided IRS' comments in a November 14, 1996, meeting. Their comments on our recommendation were reiterated in a November 19, 1996, memorandum from the Acting Chief of Taxpayer Service. Besides commenting on our recommendation, the Deputy Chief Information Officer, Systems Development, provided several factual clarifications that we incorporated in the report where appropriate. The officials also asked that we update our report to reflect processing rate information through September 30, 1996, and they gave us the relevant data. We made that update.

IRS officials generally agreed with our recommendation. With respect to the first part of our recommendation, the officials generally agreed that IRS should do a cost-benefit analysis before deciding to increase the percentage of Forms 1040EZ or information returns that SCRIPS processes or to add additional forms, such as Forms 941. They said that this analysis would be done by the end of fiscal year 1997 and would take into account approved results from the paper processing task team. IRS officials said that in the event of workload imbalances during the 1997 tax return filing season, they may decide to increase the workload for SCRIPS before the cost/benefit analysis is completed. We recognize that IRS' priority must be to process tax returns during the filing season in a timely manner, and it needs to reserve the right to do so. With respect to the second part of our recommendation, IRS officials said that they plan to work with the contractor to identify and implement any contract modifications that may be needed to ensure complete testing. They said the testing will ensure that SCRIPS can meet performance expectations in a peak production environment.

IRS officials said that a few changes are planned for the 1997 filing season that could help improve SCRIPS' future performance. Specifically, they mentioned that IRS would be testing different incentive systems for SCRIPS operators to determine the extent to which incentives affect operator performance, which could also affect overall SCRIPS performance. They also mentioned that IRS is negotiating with the contractor to provide a new scanner feeder that should resolve some of the problems experienced when information returns are filed on extremely thin paper. IRS officials expressed concern about using $133 million as the baseline cost estimate for SCRIPS.
That estimate was included in a February 1992 business case and again in a revision dated April 23, 1992. IRS officials said that we should have used an earlier cost estimate of $209 million that was included in IRS' Information Systems Initiative Summary Database in August 1991. IRS' post-implementation review report on SCRIPS also stated that the SCRIPS project office considered the $209 million as the baseline life-cycle cost estimate. However, the report also noted that between fiscal years 1993 and 1996, IRS cited four other different life-cycle cost estimates for budget purposes, including an estimate of $132 million. We revised the report to acknowledge that $209 million was one of the cost estimates for SCRIPS, but we believe that the business case estimate is an appropriate baseline because the business case was a major basis for decisions to go forward with SCRIPS. IRS officials said that IRS developed the SCRIPS business case before it began taking steps to manage information technology projects as investments. Since that time, for example, IRS has developed an investment justification handbook to help ensure that project cost and benefit analyses that are included in business cases are standardized and complete.

We are sending copies of this report to the Subcommittee's Ranking Minority Member, the Chairman and Ranking Minority Member of the House Committee on Ways and Means, the Chairman and Ranking Minority Member of the Senate Committee on Finance, various other congressional committees, the Secretary of the Treasury, the Commissioner of Internal Revenue, the Director of the Office of Management and Budget, and other interested parties. Major contributors to this report are listed in the appendix. Please contact me at (202) 512-9110 if you have any questions.

Cecelia Ball, Senior Evaluator; Marvin McGill, Evaluator
Pursuant to a congressional request, GAO reviewed the performance of the Internal Revenue Service's (IRS) Service Center Recognition/Image Processing System (SCRIPS) in 1996, focusing on: (1) the primary causes for performance problems that occurred in 1995; (2) whether SCRIPS performance improved in 1996 as of September 30, 1996; and (3) the status of IRS future plans for SCRIPS. GAO found that: (1) SCRIPS experienced significant performance problems in 1995; (2) two problems were system downtime and slow processing rates; (3) because of performance problems, two of the five centers stopped processing Forms 1040EZ on SCRIPS and reverted to using manual data entry; (4) SCRIPS performance deficiencies in 1995 stemmed primarily from both hardware and software problems; (5) although IRS had expected that SCRIPS would be processing five document types, forms 1040EZ, 941, and 1040PC, federal tax deposit (FTD) coupons, and information returns, IRS postponed plans to process forms 941 and 1040PC on SCRIPS; (6) of the three remaining document types, the Cincinnati test certified the software application only for FTD coupons; therefore, the software applications for information returns and Form 1040EZ were not thoroughly tested before they were put into production; (7) to improve the performance of SCRIPS for 1996, IRS made hardware and software modifications, some of which were made before the start of the 1996 filing season; (8) as a result of some of these enhancements and more staff familiarity with SCRIPS, according to IRS officials in all five SCRIPS service centers, SCRIPS performed significantly better during the 1996 filing season than it did in 1995; (9) in addition to a slower processing rate for information returns, SCRIPS is not processing all the forms that it was expected to process in 1996, is expected to cost more than originally estimated, and is expected to provide lower labor cost savings than IRS originally anticipated; (10) the latest cost estimate for SCRIPS is $288 million, considerably more than previous cost estimates, which, according to IRS' post-implementation review report on SCRIPS, ranged from $133 million to $209 million; (11) in July 1995, IRS decided to terminate all software programming for the 1040PC because of SCRIPS' instability, lack of system capacity, and a number of software application problems; (12) IRS has also decided not to use SCRIPS to process Forms 941 in fiscal year (FY) 1997; and (13) in addition to funding constraints, according to IRS officials, a decision on using SCRIPS for Forms 941 beyond FY 1997 will depend on the results of a September 1996 capacity test of SCRIPS, which were not available when GAO completed its audit work, and decisions by IRS' Investment Review Board.
The September 2001 Quadrennial Defense Review (QDR) outlined a strategy to sustain and transform the military force structure that has been in place since the mid-1990s. In this review, the Department of Defense (DOD) committed to selectively recapitalize older equipment items to meet near-term challenges and to provide near-term readiness. DOD recognized that the older equipment items critical to DOD's ability to defeat current threats must be sustained as transformation occurs. DOD also recognizes that recapitalization of all elements of U.S. forces since the end of the Cold War has been delayed for too long. DOD procured few replacement equipment items as the force aged throughout the 1990s, but it recognizes that the force structure will eventually become operationally and technologically obsolete without a significant increase in resources that are devoted to the recapitalization of weapons systems.

The annual Future Years Defense Plan (FYDP) contains DOD's plans for future programs and priorities. It presents DOD estimates of future funding needs based on specific programs. Through the FYDP, DOD projects costs for each element of those programs through a period of either 5 or 6 years on the basis of proposals made by each of the military services and the policy choices made by the current administration. The 2003 FYDP extends from fiscal year 2003 to fiscal year 2007, and the 2004 FYDP extends from fiscal year 2004 to fiscal year 2009. Congress has expressed concerns that the military modernization budget and funding levels envisioned in the FYDP appear to be inadequate to replace aging equipment and incorporate cutting-edge technologies into the force at the pace required by the QDR and its underlying military strategy.

As shown in table 1, of the 25 equipment items we reviewed, we assessed the current condition of 3 of these equipment items as red, 11 as yellow, and 10 as green. We were not able to obtain adequate data to assess the condition for the Marine Corps Maverick Missile because the Marine Corps does not track readiness trend data, such as mission capable or operational readiness rates, for munitions as it does for aircraft or other equipment. Rotary wing lift helicopters, specifically the CH-46E and the CH-47D helicopters, had the lowest condition rating among the equipment items we reviewed, followed by fixed wing aircraft. Although we assessed the condition as green for several equipment items such as the Army's Abrams tank and the Heavy Expanded Mobility Tactical Truck, and the Marine Corps Light Armored Vehicle-Command and Control Variant, we identified various problems and issues that could potentially worsen the condition of some equipment items in the near future if not attended to. Specifically, for the Abrams tank, and similarly for the Heavy Expanded Mobility Tactical Truck, Army officials cited supply and maintenance challenges at the unit level, such as repair parts shortages, inadequate test equipment, and a lack of trained technicians, that could affect the tank's condition in the near future. While the Marine Corps has a Light Armored Vehicle-Command and Control Variant upgrade program under way, Marine Corps officials caution that any delays in the upgrade program could affect future readiness. According to service officials and prior GAO reports, the services are currently able to alleviate the effects of these problems, in many cases, through increased maintenance hours and cannibalization of parts from other equipment.
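To illustrate the kind of goal-versus-actual comparison that underlies these condition assessments, the following is a minimal sketch of a mission capable rate calculation and a simple color rating. The hours, goal, and threshold are hypothetical, and our actual assessments weighed multiple factors, such as age, utilization, and how often goals were missed, rather than a single cutoff.

```python
def mission_capable_rate(hours_capable, hours_possessed):
    """Percentage of possessed hours in which an item could perform at
    least one of its assigned missions."""
    return 100.0 * hours_capable / hours_possessed

def condition_color(rate, goal, severe_shortfall=10.0):
    # Hypothetical single-metric thresholds for illustration only.
    if rate >= goal:
        return "green"
    return "red" if goal - rate > severe_shortfall else "yellow"

goal = 75.0  # hypothetical mission capable goal
rate = mission_capable_rate(hours_capable=6_300, hours_possessed=8_760)
print(f"{rate:.1f}% against a {goal:.0f}% goal: {condition_color(rate, goal)}")  # yellow
```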
The military services use a number of metrics to measure equipment condition. Examples include mission capable rates for aircraft, operational readiness rates for equipment other than aircraft, average age, and utilization rates (e.g., flying hours). The equipment items we assessed as red did not meet mission capable or operational readiness goals for sustained periods, were older equipment items, and/or had high utilization rates. For example, 10 of 16 equipment items for which readiness data were available did not meet mission capable or operational readiness goals for extended periods from fiscal year 1998 through fiscal year 2002. The average age of 21 of the equipment items ranged from about 1 year to 43 years. Some equipment items for which we assessed the condition as yellow also failed to meet mission capable or operational readiness goals and were more than 10 years old. However, offsetting factors, such as how frequently the equipment items did not meet readiness goals or by what percentage they missed the goals, indicated less severe and urgent problems than items we assessed as red. Other equipment items may have had high mission capable rates, but because of overall age and related corrosion problems, we assessed these equipment items as yellow to highlight the fact that these items could potentially present problems if not attended to within the next 3-5 years.

The equipment items for which we assessed the condition as green generally met mission capable and operational readiness goals. While three of these equipment items—the Army Heavy Expanded Mobility Tactical Truck, the Air Force F-16, and the Marine Corps Light Armored Vehicle-Command and Control Variant—did not meet mission capable or operational readiness goals, we assessed the condition as green because the condition problems identified were less severe than the items we assessed as red or yellow. For example, an equipment item may have been slightly below the goal but only for non-deployed units, or the fleet-wide goals may have been met for the equipment item overall, although the specific model we reviewed did not meet the goals. In addition, although the rates for an equipment item may be slightly below its goal, it may be able to meet operational requirements. We also considered any upgrades that were underway at the time of our review that would extend the service life of the equipment.

Maintenance problems were most often cited by the Army and Marine Corps officials we met with as the cause for equipment condition deficiencies for the equipment items we reviewed. Equipment operators and maintainers that we met with believed equipment degradation was the result of maintenance problems in one of two categories—parts or personnel. The parts problems include availability of parts or logistics and supply system problems. Availability problems occur when there are parts shortages, unreliable parts, or obsolete parts due to the advanced age of the equipment items. Logistics and supply system problems occur when it takes a long time to order parts or the unit requesting the parts has a low priority. In June, July, and August of 2003, we issued six reports highlighting deficiencies in DOD's and the services' management of critical spare parts. We also issued a report on the problems DOD and the services are having in dealing with corrosion of military equipment, which found that they had not taken advantage of opportunities to mitigate the impact of corrosion on equipment.
Maintenance problems due to personnel include (1) a lack of trained and experienced technicians and (2) increases in the maintenance man-hours required to repair some of these aging equipment items. We reported in April 2003, for example, that DOD has not adequately positioned or trained its civilian workforce at its industrial activities to meet future requirements. Consequently, the department may continue to have difficulty maintaining adequate skills at its depots to meet maintenance requirements.

In most cases, the services have developed long-range program strategies for sustaining and modernizing the 25 equipment items that we reviewed. However, some gaps exist because the services either have not validated their plans for the sustainment, modernization, or replacement of the equipment items, or their program strategies for sustaining the equipment are hampered by problems or delays in the fielding of replacement equipment or by the programs' vulnerability to budget cuts.

The two equipment items for which we assessed the program strategy as red are the KC-135 Stratotanker and the Tomahawk Cruise Missile because, although the services may have developed long-range program strategies for these equipment items, they have not validated or updated their plans for sustaining, modernizing, or replacing them. In the case of the KC-135 Stratotanker, the Air Force has embarked on a controversial, expensive program to replace the tanker fleet, but as we have reported, it has not demonstrated the urgency of acquiring replacement aircraft and has not defined the requirements for the number of aircraft that will be needed. Similarly, for the Tomahawk missile, the Navy has not identified how many of these missiles it will need in the future, thereby significantly delaying the acquisition process.

We assessed eight of the services' program strategies as yellow, some of them because they will be affected by delays in the fielding of equipment to replace the items in our review. According to service officials, as the delivery of new replacement equipment items is delayed, the services must continue using the older equipment items to meet mission requirements. Consequently, the services may incur increased costs for maintenance that was not programmed for equipment retained in inventory beyond its estimated service life. For example, the planned replacement for the Marine Corps CH-46E helicopter (i.e., the MV-22 Osprey) has been delayed by about 3 years and is not scheduled to be fielded until 2007. DOD has also reportedly cut the number of replacement aircraft it plans to purchase by about 8 to 10 over the next few years; thus, the Marine Corps will have to retain more CH-46E helicopters in its inventory. Program management officials have requested additional funds to repair airframe cracks, replace seats, move to lightweight armor to reduce aircraft weight, overhaul engines, and upgrade avionics to keep the aircraft safe and reliable until the replacement equipment is fielded. According to Marine Corps officials, the CH-46E program strategy has also been hampered by the 5-year rule, which limits the installation of new modifications, other than safety modifications, into aircraft with less than 5 years of service remaining. Procurement of the replacement equipment for the Marine Corps' Assault Amphibian Vehicle has also been delayed (by 2 years), and it is not scheduled for full fielding until 2012.
The program strategy for the Assault Amphibian Vehicle includes upgrades, but for only 680 of the 1,057 vehicles in the inventory.

We also assessed the program strategy for some equipment items as yellow if they were vulnerable to budget cuts. For example, according to Navy officials, the Navy frigates' modernization program is susceptible to budget cuts because the frigates' future role is uncertain as the Littoral Combat Ship is developed. Specifically, Navy frigates are increasingly used for homeland defense missions, and their program strategy has not been updated to reflect that they will be used more often and in different ways. The Army's CH-47D helicopter is also vulnerable to budget cuts. The Army plans to upgrade 279 CH-47D helicopters to F models under its recapitalization program; the upgrade includes a purchase of CH-47F model helicopters planned in fiscal year 2004. The fiscal year 2004 budget for this purchase has already been reduced. Program managers had also planned to purchase 16 engines, but funding was transferred to requests for higher priority programs.

We assessed the program strategy for the remaining 15 equipment items as green because the services have developed long-range program strategies for sustaining, modernizing, or replacing these items consistent with their estimated remaining service life. For example, the Army has developed program strategies for all tracked and wheeled vehicles in our sample. Likewise, the Air Force has developed program strategies for most fixed wing aircraft in our sample throughout the FYDP. In the case of munitions, with the exception of the Navy Tomahawk Cruise Missile and Standard Missile-2, the services have developed program strategies for sustaining and modernizing the current missile inventory in our sample.

In many cases, the funding DOD has requested or is projecting for future years in the FYDP for the equipment items we reviewed does not reflect the military services' long-range program strategies for equipment sustainment, modernization, or recapitalization. According to service officials, the services submit their budgets to DOD, and the department has the authority to increase or decrease the service budgets based upon the perceived highest priority needs. According to DOD officials, for future years' funding, the FYDP strikes a balance between future investment and program risk, taking into consideration the services' stated requirements as approved by DOD. As shown in table 1, we assessed the funding for 15 of the 25 equipment items as red or yellow because the department's requested funding did not adequately reflect its long-range program strategies for modernization, maintenance, and spare parts. For example, as shown in table 2, we identified fiscal year 2003 unfunded requirements totaling $372.9 million for four major aircraft equipment items we reviewed. The most significant funding shortfalls occurred when parts, equipment upgrades, and maintenance were not fully funded or when replacement equipment items were not fielded as scheduled. The equipment items for which we assessed the funding as yellow had funding shortfalls of a lesser extent than the red items.
Although we assessed the funding as green for the remaining nine equipment items, program managers raised concerns about the availability of operation and maintenance funds in future years and stated that insufficient operation and maintenance funds could result in more severe condition problems and increased future maintenance costs.

According to service officials, funding shortfalls occurred when parts, equipment upgrades, or maintenance were not fully funded or when funds were reduced to support higher priority service needs. As we have previously reported, DOD increases or decreases funds appropriated by Congress as funding priorities change. Other shortfalls occur when units subsequently identify maintenance requirements that were not programmed into the original budget requests. In addition, when replacement equipment items are not fielded as scheduled, the services must continue to maintain these aging equipment items for longer than anticipated. Equipment items considered legacy systems, such as the Marine Corps CH-46E helicopter, may not receive funding because replacement equipment is anticipated to be fielded in the near future. The gaps between funding for legacy systems (which are heavily used and critical to the services' missions) and funding for future replacement equipment result when fielding of the new equipment has been delayed and budgets for maintenance of the legacy systems have been reduced. Funding for these legacy systems may also be a target for reductions to support higher priority service items.

According to the program managers for some of the equipment items we reviewed (including the Army Abrams tank, Heavy Expanded Mobility Tactical Truck, and Navy EA-6B Prowler), as the services retain aging equipment in their inventories longer than expected, maintenance requirements increase, thus increasing operation and maintenance costs. Program managers raised concerns about the availability of sufficient operation and maintenance funding to sustain these aging equipment items in the future. They also stated that present sustainment funds (i.e., operation and maintenance funds) may cover only a small percentage of the equipment's requirements, and they frequently rely on procurement funds to subsidize equipment improvements common to multiple equipment items. However, once production of an equipment item has been completed and procurement funds are no longer available, program managers must compete with the rest of the service for limited operation and maintenance funds. Program managers expressed concerns that operation and maintenance funds are not currently available to fund equipment improvements and may not be available in the future.

Based on our analysis of equipment condition, the performance of the equipment items in recent military conflicts, and discussions with service officials, program managers, and equipment operators and maintainers, we found that most of the equipment items we reviewed are capable of fulfilling their wartime missions despite some limitations. In general, the services will always ensure that equipment is ready to go to war, often by surging maintenance and by overcoming obstacles such as obsolete parts and limited parts availability, including through cannibalization of parts from other equipment.
Some of these equipment items (such as the Marine Corps CH-46E helicopter and all Air Force aircraft except the B-2) were used in Operation Desert Storm and have been used in other diverse operations such as those in Kosovo and Afghanistan. With the exception of the Army Stryker and GMLRS, all of the equipment items we reviewed were used recently in Operation Iraqi Freedom. The services, in general, ensure that equipment is ready for deployment by surging maintenance operations when necessary. Only one equipment item, the Marine Corps CH-46E helicopter, could not accomplish its intended wartime mission due to lift limitations. However, Marine Corps officials stated that they were generally satisfied that the CH-46E met its mission in Operation Iraqi Freedom despite these limitations. Of the remaining equipment items we reviewed, including all Air Force fixed wing aircraft, all tracked and wheeled vehicles, and most munitions, service officials believe that most are capable of fulfilling their wartime missions.

According to service officials and program managers, while final Operation Iraqi Freedom after action reports were not available at the time of our review, initial reports and preliminary observations have generally been favorable for the equipment items we reviewed. However, these officials identified a number of specific concerns for some of these equipment items that limit their wartime capabilities to varying degrees. For example, only 26 of 213 Marine Corps Assault Amphibian Vehicles at Camp Lejeune had been provided enhanced protective armor kits prior to Operation Iraqi Freedom. According to Marine Corps officials at Camp Lejeune, the lack of the enhanced protective armor left the vehicles vulnerable to the large caliber ammunition used by the Iraqi forces. According to Navy officials, the warfighting capabilities of the Navy EA-6B Prowler aircraft will be degraded if its systems are not upgraded and its outer wing panels are not replaced. Fleet commanders expressed concerns about potentially deploying some of the ships we reviewed with only one of three weapons systems capable of being used. However, program managers stated that plans were in place to reduce the vulnerability of these ships by fielding two compensating weapons systems.

Although the military services are generally able to maintain military equipment to meet wartime requirements, their ability to do so over the next several years is questionable, especially for legacy equipment items. Because program strategies have not been validated or updated and funding requests do not reflect the services' long-range program strategies, maintaining this current equipment while transforming to a new force structure, as well as funding current military operations in Iraq and elsewhere, will be a major challenge for the department and the services. We do not believe, however, that the funding gaps we identified are necessarily an indication that the department needs additional funding. Rather, we believe that the funding gaps are an indication that funding priorities need to be more clearly linked to capability needs and to long-range program strategies. The military services will always need to meet mission requirements and to keep their equipment ready to fulfill their wartime missions. However, this state of constant readiness comes at a cost.
The equipment items we reviewed appear to have generally fulfilled their wartime missions, but often through increased maintenance for deployed equipment and other extraordinary efforts to overcome obstacles such as obsolete parts and limited parts availability, including cannibalization of parts from other equipment. The reported metrics may not accurately reflect the time needed to sustain and maintain equipment to fulfill wartime missions. Substantial equipment upgrades or overhauls may be required to sustain older equipment items until replacement equipment items arrive. While our review was limited to 25 equipment items and represents a snapshot at a particular point in time, the department should reassess its current processes for reviewing the condition, program strategy, and funding for key legacy equipment items.

Specifically, we recommend that the Secretary of Defense, in conjunction with the Secretaries of the Army, the Air Force, and the Navy, reassess the program strategies for equipment modernization and recapitalization and reconcile those strategies with the services' funding requests to ensure that key legacy equipment items, especially those needed to meet the strategy outlined in the September 2001 Quadrennial Defense Review, are sustained until replacement equipment items can be fielded. In reconciling these program strategies to funding requests, the Secretary of Defense should highlight for the Congress, in conjunction with the department's fiscal year 2005 budget submissions, the risks involved in sustaining key equipment items if adequate funding support is not requested and the steps the department is taking to address those risks. As part of this process, the department should identify the key equipment items that, because of impaired condition and their importance to meeting the department's military strategy, should be given the highest priority for sustainment, recapitalization, modernization, or replacement.

If the Congress wants a better understanding of the condition of major equipment items, the department's strategy to maintain or recapitalize these equipment items, and the associated funding requirements for certain key military equipment needed to meet the strategy outlined in the QDR, the Congress may wish to consider having the Secretary of Defense provide an annual report, in conjunction with the department's annual budget submissions, on (1) the extent to which key legacy equipment items, particularly those that are in a degraded condition, are being funded and sustained until replacement equipment items can be fielded; (2) the risks involved in sustaining key equipment items if adequate funding support is not requested; and (3) the steps the department is taking to address those risks.

In written comments on a draft of this report, the Department of Defense partially concurred with our recommendation that it should reassess the program strategies for equipment modernization and recapitalization and reconcile those strategies with the services' funding requests. However, the department did not concur with our other two recommendations that it should (1) highlight for the Congress the risks involved in sustaining key equipment items if adequate funding support is not requested and the steps the department is taking to address those risks and (2) identify the equipment items that should be given the highest priority for sustainment, recapitalization, modernization, or replacement. The department's written comments are reprinted in their entirety in appendix III.
In partially concurring with our first recommendation that it should reassess the program strategies for equipment modernization and recapitalization and reconcile those strategies with the services' funding requests, the department agreed that, while the overall strategy outlined in the September 2001 Quadrennial Defense Review may be unchanged, events over time may dictate changes in individual program strategies in order to meet the most current threat. The department stated, however, that through its past Planning, Programming, and Budgeting System and the more current Planning, Programming, Budgeting, and Execution processes, it has had and continues to have an annual procedure to reassess program strategies to ensure that equipment maintenance, modernization, and recapitalization funding supports the most recent Defense strategy. While we acknowledge that these budget processes may provide a corporate, department-level review of what is needed to accomplish the national defense mission, the department's budget and the information it provides to the Congress do not clearly identify the funding priorities for individual equipment items. For example, although the funding to sustain the department's major equipment items is included in its operation and maintenance budget accounts, these accounts do not specifically identify funding for individual equipment items. We continue to believe that the department, in conjunction with the military services, needs to develop a more comprehensive and transparent approach for assessing the condition of key legacy equipment items, developing program strategies to address critical equipment condition deficiencies, and prioritizing the required funding.

The department did not concur with our second recommendation that, in reconciling the program strategies to funding requests, it should highlight for the Congress, in conjunction with its fiscal year 2005 budget submissions, the risks involved in sustaining key equipment items if adequate funding support is not requested and the steps the department is taking to address those risks. Specifically, the department stated that its budget processes and the annual Defense budget provide the Congress a balanced program with all requirements "adequately" funded and that the unfunded requirements identified by the program managers or the services may not be validated at the department level. While we agree that the department's budget may identify its highest funding priorities at the department-wide level, it does not provide the Congress with an assessment of equipment condition deficiencies, unfunded requirements identified by the services, and the potential risks associated with not fully funding the services' program strategies. In this report, we identify a number of examples of equipment condition deficiencies and of inconsistencies between the program strategies and the funding requests to address those deficiencies that were not fully addressed in the department's budget documents. We believe that the Congress, in its oversight of the department's major equipment programs, needs to be better informed of specific equipment condition deficiencies, the long-range strategies and required funding to address those deficiencies, and the risks associated with not adequately funding specific equipment modernization and recapitalization requirements.
The department also did not concur with our recommendation that it should identify for the Congress the key equipment items that, because of impaired condition and their importance to meeting the department's military strategy, should be given the highest priority for sustainment, recapitalization, modernization, or replacement. In its comments, the department stated that, in developing the annual Defense budget, it has already allocated resources according to its highest priorities. The department further stated that key items that are vital to accomplishing its mission are allocated funding in order to meet the requirements of the most current Defense strategy and that there is no need to restate these priorities with a list. As with our response to the department's comments on our second recommendation, we do not believe that the department's annual budget provides the Congress with sufficient information on the most severe equipment condition deficiencies and the funding priorities for addressing those deficiencies. We believe that a separate analysis, provided in conjunction with the department's budget submissions, that highlights the most critical equipment condition deficiencies, the planned program strategies for addressing those deficiencies, and the related funding priorities is needed to give the Congress the information it needs to make informed budget decisions.

The department also noted in its written comments that our report identifies the CH-47D, CH-46E, KC-135, EA-6B, Standard Missile-2, and the Tomahawk missile as equipment items with problems and issues that warrant action within the next 1 to 3 years. The department stated that it would continue to reassess these equipment items as it goes through its resource allocation process. Lastly, the department provided technical comments concerning our assessments of specific equipment items in appendix II, including the KC-135 Stratotanker, Assault Amphibian Vehicle, MV-22, Tomahawk Cruise Missile, and CH-46E Sea Knight helicopter. We reviewed and incorporated these technical comments as appropriate. The revisions that we made based on these technical comments did not change our assessments of the individual equipment items. In some cases, the data and information the department provided in its technical comments resulted from program and funding decisions that were made subsequent to our review.

We are sending copies of this report to the Secretary of Defense; the Secretaries of the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-8365 if you or your staffs have any questions concerning this report. Major contributors to this report are listed in appendix IV.

To determine the level of attention required by the Department of Defense, the military services, and/or the Congress for each of the 25 equipment items we reviewed, we performed an independent evaluation of (1) the equipment's current condition; (2) the services' program strategies for the sustainment, modernization, or replacement of the equipment items; (3) current and projected funding levels for the equipment items in relation to the services' program strategies; and (4) the equipment's wartime capabilities.
Based on our evaluation of the condition, program strategy, and funding for each of the 25 equipment items, we used a traffic light approach—red, yellow, or green—to indicate the severity and urgency of problems or issues. We established the following criteria to assess the severity and urgency of the problems.

Red indicates a problem or issue that is severe enough to warrant action by DOD, the military services, and/or the Congress within the next 1-3 years. We selected this time frame of 1-3 years because it represents the time frame for which DOD is currently preparing annual budgets.

Yellow indicates a problem or issue that is severe enough to warrant action by DOD, the military services, and/or the Congress within the next 3-5 years. We selected this time frame of 3-5 years because it represents the near-term segment of DOD's Future Years Defense Plan.

Green indicates that we did not identify any specific problems or issues at the time of our review, or that any existing problems or issues we identified are not, in our view, severe enough to warrant action by DOD, the military services, and/or the Congress within the next 5 years. We selected this time frame of 5 years because it represents the longer-term segment of DOD's Future Years Defense Plan. (An illustrative encoding of these criteria appears at the end of this discussion of our methodology for this assessment area.)

We also reviewed the wartime capability of the selected equipment items, focusing on the extent to which each equipment item is capable of fulfilling its wartime mission. Because of ongoing operations in Iraq and our limited access to the deployed units and related equipment performance data, we were unable to obtain sufficient data to definitively assess the wartime capability of each of the 25 equipment items we reviewed, as we did for each of the other three assessment areas.

To select the 25 equipment items we reviewed, we worked with the military services and your offices to judgmentally select approximately two weapons equipment items, two support equipment items, and two munitions items from the equipment inventories of each of the four military services—Army, Air Force, Navy, and Marine Corps. We relied extensively on input from the military services and prior GAO work to select equipment items that have been in use for a number of years and are critical to supporting the services' missions. We based our final selections on the equipment items that the military services believed were most critical to their missions. The 25 equipment items we selected for review include 7 Army equipment items, 6 Air Force equipment items, 7 Navy equipment items, and 5 Marine Corps equipment items. Our assessments apply only to the 25 equipment items we reviewed, and the results cannot be projected to the entire inventory of DOD equipment.

To assess equipment condition, we obtained and analyzed data on equipment age, expected service life, and the services' equipment condition and performance indicators, such as mission capable rates, operational readiness rates, utilization rates, failure rates, cannibalization rates, and depot maintenance data, for each of the equipment items we reviewed. The specific data that we obtained and analyzed for each equipment item varied depending on the type of equipment and the extent to which the data were available. The scope of our data collection for each of the equipment items included both the active and reserve forces. We also met with the services' program managers and other cognizant officials from each of the four military services for each of the 25 equipment items.
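The following is a minimal sketch, in Python, of the traffic light criteria described above. It is our own illustrative encoding, not a GAO or DOD tool, and the function name and its input are hypothetical:

```python
# Illustrative sketch of the report's traffic light rubric (hypothetical code,
# not a GAO or DOD tool). An item is rated by the number of years within which
# we judged action to be warranted; None means no action is warranted.

def rate_equipment_item(years_until_action_needed):
    """Map the horizon (in years) within which action is warranted
    to the report's red/yellow/green rating."""
    if years_until_action_needed is None or years_until_action_needed > 5:
        # No specific problem, or none severe enough to warrant action
        # within 5 years (the longer-term segment of the FYDP).
        return "green"
    if years_until_action_needed <= 3:
        # Action warranted within 1-3 years (the annual budget horizon).
        return "red"
    # Action warranted within 3-5 years (the near-term FYDP segment).
    return "yellow"

# Example: an item judged to need action within 2 years rates red.
assert rate_equipment_item(2) == "red"
assert rate_equipment_item(4) == "yellow"
assert rate_equipment_item(None) == "green"
```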
In addition, we visited selected units and maintenance facilities to observe the equipment during operation or during maintenance and to discuss equipment condition and wartime capability issues with equipment operators and maintainers. Our observations and assessments were limited to equipment in the active duty inventory.

To assess the program strategy for these equipment items, we reviewed the services' plans for the future sustainment, modernization, recapitalization, or replacement of the equipment items in order to meet the services' mission and force structure requirements. We met with the services' program managers and other military service officials to discuss and assess the extent to which the services have a strategy or roadmap for each of the 25 equipment items and whether the program strategy is adequately reflected in DOD's current budget or the Future Years Defense Plan.

To assess equipment funding, we obtained and analyzed data on historical, current, and future years' budget requests for each of the 25 equipment items we reviewed. We also reviewed the services' budget requests, appropriations, and obligations for fiscal year 1998 through fiscal year 2003 to determine how the funds that had been requested and appropriated for each of the equipment items were used. In addition, we reviewed the Future Years Defense Plan for fiscal year 2003 to fiscal year 2007 and for fiscal year 2004 to fiscal year 2009 to determine whether the projected funding levels were consistent with the services' program strategies for the sustainment, modernization, recapitalization, or replacement of the selected equipment items. We also met with the services' program managers for each of the 25 equipment items to identify budget shortfalls and unfunded requirements. We did not independently validate the services' requirements. We were unable, however, to obtain specific information from the Office of the Secretary of Defense or the Joint Staff on the long-term program strategies and funding priorities for these equipment items because officials in these offices considered this information to be internal DOD data and would not make it available to us.

To review the wartime capability of each equipment item, we discussed with military service officials, program managers, and equipment operators and maintainers the capabilities of the equipment items to fulfill their wartime missions and the equipment's performance in recent military operations. Because of ongoing operations in Iraq and our limited access to the deployed units and related equipment performance data, we were unable to collect sufficient data to definitively assess wartime capability or to assign a color-coded assessment as we did for the other three assessment areas. We also reviewed related Defense reports, such as after action reports and lessons learned reports, from recent military operations to identify issues or concerns regarding the equipment's wartime capabilities.

We performed our work at relevant military major commands, selected units and maintenance facilities, and one selected defense combatant command. Our access to specific combatant commands and military units was somewhat limited due to their involvement in Operation Iraqi Freedom. The specific military activities that we visited or obtained information from include the following:

U.S. Army, Headquarters, Washington, D.C.;

U.S. Army, Office of the Assistant Secretary of the Army for Acquisition, Logistics, and Technology, Washington, D.C.;
U.S. Army Forces Command Headquarters, Atlanta, Ga.;

U.S. Army, 1st Cavalry Division, III Corps, Fort Hood, Tex.;

U.S. Army, Aviation and Missile Command, Redstone Arsenal, Precision Fire and Missile Project Office, Huntsville, Ala.;

U.S. Army, Tank-automotive and Armaments Command, Warren, Mich.;

U.S. Army, Cost and Economic Analysis Center, Pentagon, Washington, D.C.;

U.S. Army, Pacific, Fort Shafter, Hawaii;

U.S. Army, 25th Infantry Division (Light), Schofield Barracks, Hawaii;

U.S. Air Force, Headquarters, Plans and Programs Division, Washington, D.C.;

U.S. Air Force, Combat Forces Division and Global Mobility Division, Washington, D.C.;

U.S. Air Force, Munitions Missile and Space Plans and Policy Division, Washington, D.C.;

U.S. Air Force, Air Logistics Center, Robins Air Force Base, Ga.;

U.S. Air Force, Air Combat Command, Directorate of Requirements and Plans, Aircraft Division and Installation and Logistics Division, Langley Air Force Base, Va.;

U.S. Air Force, Pacific, Hickam Air Force Base, Hawaii;

U.S. Navy, Naval Surface Forces, Atlantic Fleet, Norfolk Naval Base, Va.;

U.S. Navy, Naval Air Force, Atlantic Fleet, Norfolk Naval Base, Va.;

U.S. Navy, Naval Weapons Station Yorktown, Va.;

U.S. Navy, Naval Surface Forces, Pacific Fleet, Pearl Harbor, Hawaii;

U.S. Navy, Naval Surface Forces, Pacific Fleet, Naval Amphibious Base, Coronado, Calif.;

U.S. Navy, Naval Air Forces, Naval Air Station North Island, Coronado, Calif.;

U.S. Navy, Naval Weapons Station Seal Beach, Calif.;

U.S. Navy, Electronic Attack Wing, U.S. Pacific Fleet, Naval Air Station Whidbey Island, Wash.;

U.S. Navy, Naval Sea Systems Command, Washington Navy Yard, Washington, D.C.;

U.S. Navy, Naval Air Systems Command, Naval Air Station Patuxent River, Md.;

U.S. Navy, Naval Air Depot, Naval Air Station North Island, Calif.;

U.S. Navy, Avondale Shipyard, Avondale, La.;

U.S. Marine Corps, Systems Command, Quantico, Va.;

U.S. Marine Corps, Aviation Weapons Branch, Pentagon, Washington, D.C.;

U.S. Marine Corps, Tank Automotive and Armaments Command, Warren, Mich.;

U.S. Marine Corps, I Marine Expeditionary Force, Camp Pendleton, Calif.;

U.S. Marine Corps, II Marine Expeditionary Force, Camp Lejeune, N.C.;

U.S. Marine Corps, Naval Research Lab, Washington, D.C.;

U.S. Marine Corps, AAAV Technology Center, Woodbridge, Va.; and

U.S. Marine Corps, Marine Forces Pacific, Camp Smith, Hawaii.

We also obtained and reviewed relevant documents and reports from DOD and the Congressional Budget Office, and we relied on related prior GAO reports. We performed our review from September 2002 through October 2003 in accordance with generally accepted government auditing standards.

For the 25 equipment items, each assessment provides a snapshot of the status of the equipment item at the time of our review. The profile presents a general description of the equipment item. Each assessment area contains a highlighted area indicating the level of DOD, military service, and/or congressional attention each equipment item needs, in our opinion, based on our observations of each equipment item, discussions with service officials, and reviews of service-provided metrics.

First delivered in the early 1980s, the Abrams is the Army's main battle tank; it destroys enemy forces using enhanced mobility and firepower. Variants of the Abrams include the M1, M1A1, and M1A2. The M1 has a 105mm main gun; the M1A1 and M1A2 have a 120mm gun, combined with a powerful turbine engine and special armor. There are 5,848 tanks in the inventory, and the estimated average age is 14 years. The M1 variant will be phased out by 2015.
The M1 and M1A2 variants are being upgraded to the M1A2 Systems Enhancement Program (SEP) configuration by July 2004.

We assessed the condition of the Abrams tank as green because it consistently met its mission capable goal of 90 percent from fiscal year 1998 through fiscal year 2002. Although the Abrams met its mission capable goal, supply and maintenance operations at the unit level are a challenge because of repair parts shortages, unreliable components, inadequate test equipment, and a lack of trained technicians. There are concerns that the condition of the Abrams could deteriorate in the next 5 years due to insufficient sustainment funds. The lack of funds could result in more aging tanks remaining in the inventory and increased maintenance requirements.

We assessed the program strategy for the Abrams as green because the Army has developed a long-term strategy for upgrading and phasing out certain variants of the aging tanks in its inventory. The Army's Recapitalization Program selectively implements new technology upgrades to reduce operations and support costs. Additionally, the Army is phasing out the M1A2 from its inventory by 2009 and procuring 588 M1A2 SEPs. The SEP enhances the digital command and control capabilities of the tank. The Army also developed a program for improving the Abrams M1A2 electronics, called the Continuous Electronic Evolution Program, which is part of the SEP. The first phase of this program has been approved and funded. According to an Army official, the next phase is expected to start in approximately 5 years.

We assessed the funding for the Abrams as yellow because current and projected funding is not consistent with the Army's stated requirements to sustain and modernize the Abrams tank inventory. The Army reduced the recapitalization budget for the M1A2 SEP by more than 50 percent, thereby decreasing the number of upgrades from 1,174 to 588. Unfunded requirements for the Abrams tank include vehicle integrated defense systems, safety and environmental fixes, and an improved driver's viewer system. Without adequate funding, obsolescence may become a major issue once tank production ends and procurement funds are no longer available to subsidize tank requirements. Procurement funding for the M1A2 SEP will be completed by 2003 and deliveries completed by 2004. According to an Army official, the Abrams procurement funding provides approximately 75 percent to 80 percent of the tank's requirements due to commonality among the systems.

While we did not have sufficient data to definitively assess the wartime capability of the Abrams, a detailed pre-war assessment prepared by the program manager's office indicated that the tank is not ready or able to sustain a long-term war. During Operation Iraqi Freedom, the Abrams tank was able to successfully maneuver, provide firepower, and protect the crew. Losses were attributed to mechanical breakdown and cannibalization. The detailed assessment by the program manager's office, however, indicated that limited funding, war reserve spare parts shortages, and supply availability problems could impair the tank's ability to sustain a long-term war.

The Apache is a multi-mission aircraft designed to perform rear, close, and deep operations; precision strikes; and armed reconnaissance and security during day, night, and adverse weather conditions. There are approximately 728 Apache helicopters in the Army's inventory—418 AH-64A models and 310 AH-64D models. The fleet average age is about 12 years.
We assessed the condition of the Apache as yellow because the Apache AH-64D model failed to meet the mission capable goal of 75 percent approximately 50 percent of the time from fiscal year 1999 through fiscal year 2002; however, according to officials, the Apache's mission capable rates have consistently exceeded the 75 percent goal in calendar year 2003. Aviation safety restrictions were cited as the reason the Apache failed to meet mission capable goals. A safety restriction pertains to any defect or hazardous condition that can cause personal injury, death, or damage to the aircraft, components, or repair parts and for which a medium to high safety risk has been determined. These restrictions included problems with the (1) aircraft Teflon bushings, (2) transmission, (3) main rotor blade attaching pins, (4) generator power cables, and (5) removal, maintenance, and inspection of the Auxiliary Power Unit Takeoff Clutch. The Army's Recapitalization Program includes modifications that are intended to address these safety restrictions.

We assessed the program strategy for the Apache as green because the Army has developed a long-term program strategy to sustain and upgrade the aging Apache fleet. The Army's Recapitalization Program addresses cost, reliability, and safety problems, fleet groundings, aging aircraft, and obsolescence. The Army plans to remanufacture 501 AH-64A helicopters to the AH-64D configuration. The goal is to reduce the fleet average age to 10 years by 2010, increase the mean time between unscheduled removals by 20 percent for selected components, and generate a 20 percent return on investment for the top 10 cost drivers. The Army is on schedule in fielding the Apache AH-64D.

While we did not have sufficient data to definitively assess the wartime capability of the Apache, Army officials did not identify any specific concerns. These officials indicated that the Apache successfully fulfilled its wartime missions in Afghanistan and Operation Iraqi Freedom. In Operation Iraqi Freedom, the AH-64D conducted combat operations for both close combat and standoff engagements. Every mission assigned was flown and accomplished with the Apache AH-64D. The Longbow's performance has been enhanced by targeting and weapon systems upgrades that have improved its performance over the AH-64A.

The Stryker is a highly deployable wheeled armored vehicle that employs 10 variations—the Infantry Carrier Vehicle (ICV), Mortar Carrier (MC), Reconnaissance Vehicle (RV), Commander Vehicle (CV), Medical Evacuation Vehicle (MEV), Engineer Squad Vehicle (ESV), Anti-Tank Guided Missile Vehicle (ATGM), Fire Support Vehicle (FSV), Mobile Gun System (MGS), and Nuclear Biological and Chemical Reconnaissance Vehicle (NBCRV). There are 600 Stryker vehicles in the Army's inventory, and the average age is less than 2 years. The Army plans to procure a total of 2,121 Stryker vehicles through fiscal year 2008.

We assessed the condition of the Stryker as green because it has successfully achieved the fully mission capable goal of 95 percent, based on a 3-month average from April 2003 through July 2003. The Congress mandated that the Army compare the operational effectiveness and cost of an infantry carrier variant of the Stryker with those of a medium Army armored vehicle.
The Army selected the M113A3 for this comparison, and the comparison shows that the Stryker infantry carrier vehicle is more survivable and provides better overall performance and mobility when employed in combat operations than the M113A3.

We assessed the program strategy for the Stryker as green because the Army has developed a long-term program strategy for procuring a total of 2,121 vehicles through fiscal year 2008, which will satisfy the total requirement. Of the 600 vehicles currently in the inventory, 449 are at two brigades—the 3rd Brigade of the 2nd Infantry Division and the 1st Brigade of the 25th Infantry Division, both of which are located at Fort Lewis, Washington. The other 151 are at fielding sites, training centers, and the Army Test and Evaluation Center. The remaining 1,521 will be procured through fiscal year 2007, with expected deliveries through fiscal year 2008. The next brigade scheduled to receive the Stryker is the 172nd Infantry Brigade at Forts Richardson and Wainwright, Alaska. The remaining Stryker brigade combat teams to be equipped with the Stryker are the 2nd Cavalry Regiment, Fort Polk, Louisiana; the 2nd Brigade, 25th Infantry Division, Schofield Barracks, Hawaii; and the 56th Brigade of the 28th Infantry Division, Pennsylvania Army National Guard.

We assessed the funding for the Stryker as green because current and projected funding is consistent with the Army's stated requirements to sustain the Stryker program. The program is fully funded to field the six Stryker brigade combat teams. Approximately $4.1 billion has been allocated for all six combat teams through fiscal year 2009. The Secretary of Defense has authorized the procurement of the first three brigades, but the fourth brigade cannot be procured until the Secretary of Defense certifies to the Congress that the results of the congressionally mandated operational evaluation indicate that the design for the interim brigade combat team is operationally effective and operationally suitable. The evaluation was completed in May 2003, and the results are being finalized.

While we did not have sufficient data to definitively assess the wartime capability of the Stryker, the Army did not identify any specific concerns regarding the system's ability to meet its wartime mission. The Stryker has not yet been used in any conflict situation. In May 2003, GAO reported that the Army Test and Evaluation Command concluded that the Stryker provided more advantages than the M113A3 in force protection, support for dismounted assault, and close fight and mobility, and was more survivable against ballistic and non-ballistic threats.

The CH-47 helicopter is a twin-engine, tandem rotor helicopter designed for transportation of cargo, troops, and weapons. The Army inventory consists of 426 CH-47D models and 2 CH-47F models. The CH-47F Improved Cargo Helicopter is a remanufactured version of the CH-47D and includes a new digital cockpit and a modified airframe to reduce vibration. The overall average age of the CH-47 is 14 years. The Army plans to convert 76 D model aircraft to the F model between fiscal year 2005 and fiscal year 2009.

We assessed the condition of the Chinook as red because it consistently failed to meet the Army's mission capable goal of 75 percent from fiscal year 1998 to fiscal year 2002. Actual mission capable rates ranged from 61 percent to 69 percent. Army officials attributed the failure to meet the 75 percent mission capable goal to aging equipment, supply shortages, and inexperienced technicians.
Maintaining the aircraft has become increasingly difficult, with the CH-47D failing to meet the non-mission capable maintenance goal of 15 percent; its non-mission capable maintenance rate increased from 27 percent in fiscal year 1998 to 31 percent in fiscal year 2002.

We assessed the program strategy for the Chinook as yellow because the Army has developed a long-term strategy for upgrading and replacing the Chinook, but the strategy is not consistent with the Army's funding priorities. The plan to upgrade 279 D models to F models between fiscal year 2003 and fiscal year 2017 under the Army's Recapitalization Program has been delayed, and the number of CH-47F helicopters planned in the fiscal year 2004 budget was reduced by five due to unexpected funding constraints. These budgetary constraints also delayed the Army's plans to purchase 16 engines because funding was transferred to support other nonrecurring requirements. Readiness may be adversely affected if these engines are not procured because unit requisitions for the engines will not be filled and aircraft will not be fully mission capable.

We assessed the funding for the Chinook as yellow because current and projected funding is not consistent with the Army's requirements for sustaining and upgrading the Chinook helicopter. At present, the Army has identified unfunded requirements totaling $316 million, with $77 million needed to procure the five CH-47Fs and the 16 engines for which the funds had been previously diverted. The remaining $239 million would support other improvements, including a common avionics system, rotor heads, crashworthy crew seats, and engine modifications. The Army will resolve some or all of these requirements with projected funding of $3 billion to support the CH-47 program through fiscal year 2017.

While we did not have sufficient data to definitively assess the wartime capability of the Chinook, Army officials indicated that it successfully fulfilled its wartime mission in Operation Iraqi Freedom despite current condition problems. These officials stated that the deployed units were able to overcome these condition problems because the deployed aircraft were given a higher priority than non-deployed aircraft for spare parts. As a result, the estimated mission capable rates for deployed aircraft increased to about 86 percent during the operation.

The HEMTT provides transport capabilities for resupply of combat vehicles and weapon systems. The HEMTT's five basic configurations are the cargo truck, the load handling system, the wrecker, the tanker, and the tractor. The HEMTT entered the Army's inventory in 1982. The current inventory totals about 12,500 vehicles, and the average age is 13 years.

We assessed the condition of the HEMTT as green because mission capable rates have been close to the Army's 90 percent goal, averaging 89 percent between fiscal year 1998 and fiscal year 2002. Moreover, overall supply availability rates exceeded the 85 percent goal from May 2002 to October 2002, averaging between 96 percent and 99 percent. Meeting these operational goals, however, has been a continuing challenge because of aging equipment, heavy equipment usage, and the lack of trained mechanics. The lack of trained mechanics may also affect the Army's future ability to meet the specified mission capable goals. In addition, a detailed pre-war assessment by the program manager's office indicated concerns that shortages of spare parts would significantly degrade HEMTT readiness rates.
We assessed the program strategy for the HEMTT as green because the Army has developed a long-term program strategy for sustaining and modernizing the HEMTT inventory. The Army's plans include procuring 1,485 new tankers and wreckers through fiscal year 2007, which will satisfy the Army's stated requirement. The Army also plans to rebuild some of the existing vehicles through the HEMTT Extended Service Program. This program, scheduled to be completed in fiscal year 2012, will insert technology advancements and provide continuous improvements to the vehicle. Although there has been a reduction in the Army's budget for the Extended Service Program, the plan is to continue rebuilding trucks in smaller quantities and at a slower pace. The Army's Forces Command has implemented a Vehicle Readiness Enhancement Program that serves as an interim maintenance program for HEMTTs awaiting induction into the Extended Service Program.

We assessed the funding for the HEMTT as yellow because current and projected funding is not consistent with the Army's stated requirements to sustain and modernize the HEMTT inventory. Specifically, the Army has unfunded requirements of $10.5 million as of fiscal year 2003, of which $3.9 million is for spare parts and $6.6 million is for war reserves. In addition, the Army reduced the Recapitalization Program by $329 million. The Army had planned to upgrade 2,783 vehicles currently in the inventory; however, 1,365 will not be upgraded as a result of the reductions in the Recapitalization Program. Consequently, according to Army officials, maintenance and operating and support costs will likely increase.

While we did not have sufficient data to definitively assess the wartime capability of the HEMTT, Army officials indicated that it has successfully fulfilled its wartime requirements during recent combat operations. Based on the program manager's preliminary observations, the HEMTT performed successfully during Operation Iraqi Freedom. A detailed pre-war assessment by the program manager's office indicated that the HEMTT was ready for war but could experience sustainment problems due to a shortage of war reserve spare parts. The program manager's office is currently assessing the condition of the active and war reserve equipment used in Operation Iraqi Freedom.

The PAC-3 missile is considered a major upgrade to the Patriot system. Sixteen PAC-3 missiles can be loaded on a launcher, versus four PAC-2 missiles. The Army plans to buy 2,200 PAC-3 missiles. The Army had a current inventory of 88 PAC-3 missiles as of July 2003. The average age of the PAC-3 missile is less than 1 year.

We assessed the condition of the PAC-3 missile as green because approximately 89 percent of the missiles in the inventory were ready for use as of July 2003. Specifically, of the 88 PAC-3 missiles in the inventory, 78 were ready for use and 10 were not. In addition, the Army has not experienced any chronic or persistent problems during production. The PAC-3 missile completed operational testing and was approved for full production of 208 missiles in 2003 and 2004.

We assessed the program strategy for the PAC-3 missile as green because the Army has developed a long-term strategy for sustaining the PAC-3 inventory, including procurement of 2,200 missiles that will satisfy the total requirement. The Army plans to purchase 1,159 PAC-3 missiles through fiscal year 2009. The remaining 1,041 missiles will be procured after fiscal year 2009.
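The PAC-3 readiness and procurement figures cited above reconcile arithmetically; the following worked check is our own calculation, using only the quantities stated in this assessment:

```latex
% Worked check of the PAC-3 figures cited above (our arithmetic only).
\[
\frac{78\ \text{ready}}{88\ \text{in inventory}} \approx 0.886 \approx 89\%,
\qquad
\underbrace{1{,}159}_{\text{through FY 2009}} + \underbrace{1{,}041}_{\text{after FY 2009}}
= 2{,}200\ \text{missiles}.
\]
```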
During low-rate initial production, the Army procured 164 PAC-3 missiles from 1998 to 2002 at a cost of $1.7 billion. The Army has completed low-rate initial production and has been granted approval for full production of 208 PAC-3 missiles beginning in fiscal year 2003, at a total estimated cost of $714 million.

We assessed the funding for the PAC-3 missile as green primarily because current and projected funding is consistent with the Army's stated requirements to sustain the PAC-3 inventory. The program manager's office has not identified any funding shortfalls for the missile. Funding has been approved for the production of 1,159 PAC-3 missiles through fiscal year 2009 at an average production rate of nearly 100 missiles per year. The total production cost of the 1,159 PAC-3 missiles equates to $4.3 billion. The remaining 1,041 missiles will be procured after fiscal year 2009.

While we did not have sufficient data to definitively assess the wartime capability of the PAC-3 missile, Army officials indicated that it successfully fulfilled its wartime mission during Operation Iraqi Freedom, hitting enemy targets within two missile shots. The PAC-3 has also completed the operational testing phase and has been approved for full production.

The Guided Multiple Launch Rocket System Dual Purpose Improved Conventional Munition (GMLRS-DPICM) is an essential component of the Army's transformation. It upgrades the M26 series MLRS rocket and is expected to serve as the baseline for all future Objective Force rocket munitions. The Army plans to procure a total of 140,004 GMLRS rockets. There are currently no GMLRS rockets in the inventory, but the system was approved in March 2003 to enter low-rate initial production of 108 rockets.

We assessed the condition of the GMLRS as green because the system demonstrated acceptable performance during the System Development and Demonstration phase and was approved to enter low-rate initial production in March 2003.

We assessed the program strategy for the GMLRS as green because the Army has developed a long-term program strategy for sustaining the GMLRS inventory, including procurement of a total of 140,004 rockets that will satisfy the total requirement. Of this total, the Army plans to procure 18,582 rockets by fiscal year 2009; the remaining 121,422 will be procured after fiscal year 2009. The Army approved low-rate initial production of a total of 1,920 rockets through fiscal year 2005. The initial operational capability date is scheduled for the 2nd quarter of fiscal year 2006. The Army has also preplanned a product improvement to the GMLRS-DPICM, called the GMLRS-Unitary. This improvement is in the concept development phase and is scheduled to begin a spiral System Development and Demonstration phase. The Army has not decided how many of the 1,920 initial production rockets will include the guided unitary upgrade.

We assessed the funding for the GMLRS as green because current and projected funding is consistent with the Army's stated requirements to sustain the GMLRS munitions program. The GMLRS program is fully funded and properly phased for rapid acquisition. The Army plans to purchase a total of 140,004 GMLRS rockets for $11.7 billion. Of these, the Army plans to procure 18,582 through fiscal year 2009 for $1.7 billion; the remaining 121,422 rockets will cost the Army approximately $10 billion.
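These GMLRS procurement quantities and costs reconcile as follows; the worked check below is our own arithmetic, using only the figures cited above:

```latex
% Worked check of the GMLRS procurement figures cited above (our arithmetic only).
\[
\underbrace{18{,}582}_{\text{through FY 2009}} + \underbrace{121{,}422}_{\text{after FY 2009}}
= 140{,}004\ \text{rockets},
\qquad
\$1.7\ \text{billion} + \$10.0\ \text{billion} \approx \$11.7\ \text{billion}.
\]
```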
In March 2003, the system met all modified low-rate initial production criteria to enter the first phase, which will produce 108 rockets for $36.6 million. Phases II and III will procure the remaining 1,812 rockets during fiscal year 2004 (786 rockets) and fiscal year 2005 (1,026 rockets) for $220.4 million.

While we did not have sufficient data to definitively assess the wartime capability of the GMLRS, Army officials did not identify any specific capability concerns. The GMLRS-DPICM is expected to achieve greater range and precision accuracy. The upgraded munition will reduce the number of rockets required to defeat targets out to 60 kilometers or greater and will reduce collateral damage. It is also expected to reduce hazardous duds to less than 1 percent.

The F-16 is a compact, multi-role fighter with air-to-air combat and air-to-surface attack capabilities. The first operational F-16A was delivered in January 1979. The Air Force currently has 1,381 F-16 aircraft in its inventory, and the average age is about 15 years. The F-16B is a two-seat, tandem cockpit aircraft. The F-16C and D models are the counterparts to the F-16A/B and incorporate the latest technology. Active units and many reserve units have converted to the F-16C/D. The Air Force plans to replace the F-16 with the F-35 Joint Strike Fighter beginning in 2012.

We assessed the condition of the F-16 as green because mission capable rates have been near the current goal of 83 percent, with the rates for all of the Air Force's Air Combat Command (ACC) F-16s ranging from 75 percent to 79 percent during the past 5 years. Although these rates are below the goal, officials said they were sufficient to provide flying hours for pilot training and to meet operational requirements. In fiscal year 2002, the planned utilization rate (i.e., the average number of sorties per aircraft per month) for ACC aircraft was 17.5 sorties per month, and the actual utilization was 17.7 sorties. Although the average age of the F-16 is about 15 years, there are no material deficiencies that would limit its effectiveness and reliability. Known and potential structural problems associated with aging and accumulated flying hours are being addressed through ongoing depot maintenance programs.

We assessed the program strategy for the F-16 as green because the Air Force has developed a long-term program strategy for sustaining and replacing the F-16 inventory. The program should ensure that the aircraft remains a viable and capable weapons system throughout the FYDP. Subsequently, the Air Force intends to begin replacing the F-16 with the Joint Strike Fighter (F-35), which is already in development.

We assessed the funding for the F-16 as yellow because current and projected funding is not consistent with the Air Force's stated requirements to sustain and replace the F-16 inventory. There are potential shortfalls in the funding for depot maintenance programs and modifications during the next 3-5 years. Although funding has been programmed for this work, unexpected increases in depot labor rates have been significant, and additional funding may be required to complete the work. For fiscal year 2004, the Air Force included $13.5 million for the F-16 in its Unfunded Priority List.

While we did not have sufficient data to definitively assess the wartime capability of the F-16, the aircraft has successfully fulfilled its recent wartime missions.
F-16 fighters were deployed to the Persian Gulf in 1991 in support of Operation Desert Storm and flew more sorties than any other aircraft. The F-16 has also been a major player in peacekeeping operations, including those in the Balkans since 1993. Since the terrorist attacks of September 2001, F-16s have comprised the bulk of the fighter force protecting the skies over the United States in Operation Noble Eagle. More recently, F-16s played a major role in Afghanistan in Operation Enduring Freedom and performed well in combat in Operation Iraqi Freedom, in which the F-16 once again provided precision-guided strike capabilities and suppression of enemy air defenses. During Operation Iraqi Freedom, the Air Force deployed over 130 F-16s that contributed significantly to the approximately 8,800 sorties flown by Air Force fighter aircraft. The B-2 is a multi-role heavy bomber with stealth characteristics, capable of employing nuclear and conventional weapons. The aircraft was produced in limited numbers to provide a low observable (i.e., stealth) capability to complement the B-1 and B-52 bombers. Its unique stealth capability enables the aircraft to penetrate air defenses. The Air Force currently has 21 B-2 aircraft in its inventory, and the average age is about 9 years. The first B-2 was deployed in December 1993, and currently all B-2s in the inventory are configured with an enhanced terrain-following capability and the ability to deliver the Joint Direct Attack Munition and the Joint Stand Off Weapon. We assessed the condition of the B-2 as yellow because the B-2 did not meet its mission capable goal of 50 percent. Officials said that the aircraft itself is in good condition but that the maintainability of its stealth characteristics is driving the low mission capable rates. Officials pointed out that, despite low mission capable rates, the B-2 has been able to meet requirements for combat readiness training and wartime missions. For example, four B-2 aircraft were deployed and used during Operation Iraqi Freedom and maintained a mission capable rate of 85 percent. Mission capable rates have improved slightly, and officials said that recent innovations in low observable maintenance technology and planned modifications are expected to foster additional improvement. We assessed the program strategy for the B-2 as green because the Air Force has developed a long-term program strategy for sustaining the B-2 inventory. Program plans appear to ensure the viability of this system through the Future Years Defense Plan. Procurement of this aircraft is complete. The Air Force plans to maintain and improve its capabilities, ensuring that the B-2 remains the primary platform in long-range combat aviation. We assessed the funding for the B-2 as green because current and projected funding is consistent with the Air Force’s stated requirements to sustain the B-2 inventory. The programmed funding should allow execution of the program strategy to sustain, maintain, and modify the system through the Future Years Defense Plan. The B-2 is of special interest to the Congress, which requires an annual report on this system, including a schedule of funding requirements through the Future Years Defense Plan. No items specific to the B-2 were included in the Air Force’s fiscal year 2004 Unfunded Priority List. While we did not have sufficient data to definitively assess the wartime capability of the B-2, the aircraft has successfully fulfilled its wartime missions despite current condition weaknesses.
The Air Force demonstrated the aircraft’s long-range strike capability by launching missions from the United States, striking targets in Afghanistan, and returning to the United States. More recently, the Air Force deployed four B-2 aircraft to support Operation Iraqi Freedom, where they contributed to the 505 sorties flown by bombers during the conflict. The B-2 Annual Report to the Congress states that the B-2 program plan will ensure that the B-2 remains the primary platform in long-range combat aviation. The C-5 Galaxy is the largest of the Air Force’s air transport aircraft and one of the world’s largest aircraft. It can carry large cargo items over intercontinental ranges at jet speeds and can take off and land in relatively short distances. It provides a unique capability in that it is the only aircraft that can carry certain Army weapon systems, main battle tanks, infantry vehicles, or helicopters. The C-5 can carry any piece of Army combat equipment, including a 74-ton mobile bridge. With aerial refueling, the aircraft’s range is limited only by crew endurance. The first C-5A was delivered in 1969. The Air Force currently has 126 C-5 aircraft in its inventory, and the average age is about 26 years. We assessed the condition of the C-5 as yellow because it consistently failed to meet its mission capable goal of 75 percent; however, mission capable rates have been steadily improving and, in April 2003, active duty C-5s exceeded the goal for the first time. Program officials pointed out that, although the total fleet has never achieved the 75 percent goal, there has been considerable improvement over time, with the rate rising from about 42 percent in 1971 to about 71 percent in 2003. The Air Force Scientific Advisory Board has estimated that 80 percent of the airframe structural service life remains. Furthermore, Air Force officials said that the two major modification programs planned, the avionics modernization program and the reliability enhancement and re-engining program, should significantly improve mission capable rates. We assessed the program strategy for the C-5 as green because the Air Force has developed a long-term program strategy for sustaining and modernizing the aging C-5 inventory. The Air Force has planned a two-phase modernization program through the Future Years Defense Plan that is expected to increase the aircraft’s mission capability and reliability. The Air Force plans to modernize the C-5 to improve aircraft reliability and maintainability, maintain structural and system integrity, reduce costs, and increase operational capability. Air Force officials stated that the C-5 is expected to continue in service until about 2040 and that, with the planned modifications, the aircraft could last until then. In an effort to meet strategic airlift requirements, the Air Force has contracted to buy 180 C-17s, will retire 14 C-5s by fiscal year 2005, and may retire additional aircraft as more C-17s are acquired. We assessed the funding for the C-5 as yellow because current and projected funding is not consistent with the Air Force’s stated requirements to sustain and modernize the aging C-5 inventory. According to officials, production funding was lost because of problems during the early stage of the program. Currently, 49 aircraft are funded for the avionics modernization program through the Future Years Defense Plan. For fiscal year 2004, the Air Force included $39.4 million in its Unfunded Priority List to restore the program to its prior timeline.
While we did not have sufficient data to definitively assess the wartime capability of the C-5, Air Force officials indicated that the aircraft has successfully fulfilled its recent wartime missions. The Air Force has not noted any factors or capability concerns that would prevent the C-5 from effectively performing its wartime mission. The KC-135 is one of the oldest airframes in the Air Force’s inventory and represents 90 percent of the tanker fleet. Its primary mission is air refueling, and it supports Air Force, Navy, Marine Corps, and allied aircraft. The first KC-135 was delivered in June 1957. The original A models have been re-engined, modified, and designated as E, R, or T models. The E models are located in the Air Force Reserve and Air National Guard. The total inventory of KC-135 aircraft is 543, and the average age is about 43 years. We assessed the condition of the KC-135 as yellow because, although it maintained mission capable rates at or near the 85 percent goal, the aircraft’s age and potential corrosion of its structural components remain concerns. Although the aircraft is about 43 years old, average flying hours are slightly over a third of its expected life of 39,000 hours, and an Air Force study projected the KC-135 would last until about 2040. All KC-135s have been subjected to an aggressive corrosion prevention program and have undergone significant modifications, including replacement of the cockpit. Nevertheless, citing increases in the work needed during periodic depot maintenance, costs, and the risk of the entire fleet being grounded, the Air Force decided to accelerate recapitalization from 2013 to about 2006. We assessed the program strategy for the KC-135 as red because, although the Air Force has developed a long-term program strategy to modernize the aging KC-135 tanker fleet, it has not demonstrated the urgency of acquiring replacement aircraft and has not defined the requirements for the number of aircraft that will be needed. As we stated in testimony before the House Committee on Armed Services, Subcommittee on Projection Forces, the department does not have a current, validated study on which to base the size and composition of either the current fleet or a future aerial refueling force. The Air Force has a large fleet of KC-135s (about 543), which were flown about 300 hours annually between 1995 and September 2001. Since then, utilization has been about 435 hours per year. Furthermore, the Air Force has a shortage of aircrews to fly the aircraft it has. In Operation Iraqi Freedom, a relatively small part of the fleet (149 aircraft) was used to support the conflict. Without a definitive analysis, it is difficult to determine whether recapitalization is needed and what alternatives might best satisfy the requirement. We assessed the funding of the KC-135 as red because current and future funding is not consistent with the Air Force’s stated requirements to sustain and modernize the KC-135 tanker fleet. The Air Force has not addressed recapitalization funding in the current defense budget or in the Future Years Defense Plan. The Air Force plans to begin acquiring new aircraft almost immediately but does not want to divert funding from other programs to pay for them. The Air Force proposed a unique leasing arrangement with Boeing that would provide new tankers as early as 2006. Controversy remains over the lease terms, aircraft pricing, and how the Air Force will pay for the lease.
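The service-life figures quoted above can be checked with simple arithmetic. The sketch below is a rough illustration using the report’s numbers (a 39,000-hour expected life, roughly one-third expended, and about 435 flying hours per year since September 2001); it is our own back-of-the-envelope calculation, not an official Air Force projection.

```python
# Rough check of the KC-135 service-life figures quoted above.
# "Slightly over a third" of the 39,000-hour expected life has been flown:
expected_life_hours = 39_000
hours_flown = expected_life_hours / 3          # ~13,000 hours (approximate)
hours_remaining = expected_life_hours - hours_flown

# At the post-September 2001 utilization of about 435 hours per year:
years_remaining = hours_remaining / 435        # ~60 years of flying hours

print(f"Remaining airframe hours: {hours_remaining:,.0f}")
print(f"Years of flying at 435 hrs/yr: {years_remaining:.0f}")
# ~60 years is well beyond the "about 2040" study projection, which is
# consistent with the report's point that airframe hours are not the
# binding constraint; depot workload growth, cost, and corrosion risk
# are what drove the decision to accelerate recapitalization.
```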
While we did not have sufficient data to definitively assess the wartime capability of the KC-135, Air Force officials indicated that the aircraft has successfully fulfilled its recent wartime missions despite current condition problems. The KC-135 comprised 149 of the 182 tanker aircraft the Air Force used during Operation Iraqi Freedom, and those aircraft flew almost 6,200 sorties and offloaded over 376 million pounds of fuel. The KC-135 maintained a mission capable rate above the current goal of 85 percent during Operation Iraqi Freedom. The CALCM is an accurate long-range standoff weapon with an adverse weather, day/night, and air-to-surface capability. It employs a global positioning system coupled with an inertial navigation system. It was developed to improve the effectiveness of the B-52 bombers and became operational in January 1991. Since initial deployment, an upgraded avionics package, including a larger conventional payload and a multi-channel global positioning system receiver, has been added to all of the missiles. The CALCM total inventory is about 478, and the average age is about 15 years. We assessed the condition of the CALCM as green because the CALCM has demonstrated high reliability. The Air Force has not noted any chronic factors or problems that limit the effectiveness or reliability of the missile. However, according to officials, the diagnostic test equipment needs to be upgraded because it is old and was designed to support less sophisticated missiles. Currently, the Air Force uses the same test equipment for both the conventional and nuclear weapons. We assessed the program strategy for the CALCM as green because the Air Force has a long-term program strategy for sustaining and modernizing its current inventory of cruise missiles. The Air Force does not have any future plans to convert or purchase any additional nuclear missiles. The Joint Chiefs of Staff must authorize the use of the conventional weapons and approve the program in order to procure additional missiles. As the inventory is depleted, the conventional weapon will be replaced with other systems with similar capabilities, such as the Joint Air-to-Surface Standoff Missile, which is currently under development. The Joint Air-to-Surface Standoff Missile will not be a one-for-one replacement for the conventional missile. We assessed the funding for the CALCM as green because current and projected funding is consistent with the Air Force’s stated requirements to sustain and modernize its cruise missile inventory. Procurement of the cruise missile is complete, and no funding has been provided for research and development or procurement in the fiscal year 2003 budget. While we did not have sufficient data to definitively assess the wartime capability of the CALCM, Air Force officials indicated that it successfully fulfilled its recent wartime missions. These officials indicated that the cruise missile played a significant role in the initial strikes during Operation Iraqi Freedom. During Operation Iraqi Freedom, 153 missiles were expended, and the version that is designed to penetrate hard targets was employed for the first time. The Joint Direct Attack Munition is a guidance tail kit that converts existing unguided bombs into accurate, all-weather “smart” munitions. This is a joint Air Force and Navy program to upgrade the existing inventory of 2,000- and 1,000-pound general-purpose bombs by integrating them with a guidance kit consisting of a global positioning system-aided inertial navigation system.
In its most accurate mode, the system will provide a weapon circular error probable of 13 meters or less. The JDAM first entered the inventory in 1998. The total projected inventory of the JDAM is about 92,679, and the current average age is less than 5 years. Future upgrades will provide 3-meter precision and improved anti-jamming capability. We assessed the condition of the JDAM as green because it consistently met its reliability goal of 95 percent. The munitions are used as they become available; therefore, no maintenance is involved. Although the Air Force does not monitor the condition of munitions, it tracks each component of the guidance kit for serviceability. The kit is under a 20-year warranty. The munitions are purchased serviceable and are tested before use by the operational units. In addition to having high reliability, JDAMs can be purchased at low cost and are being delivered more than three times as fast as planned. We assessed the program strategy for the JDAM as green because the Air Force has a long-term program strategy for sustaining and maintaining its production of the munitions. Joint Direct Attack Munition requirements are driven by assessments of war readiness and training requirements. Currently, Boeing is in full production and is increasing its output to about 2,800 munitions per month for the Air Force and Navy, an increase from approximately 700-900 a month. The second production line is up and running. We assessed the funding for the JDAM as green because current and projected funding is consistent with the Air Force’s stated requirements to sustain and maintain production of the munitions. The President’s fiscal year 2003 budget provided funding for the procurement of the system through the Future Years Defense Plan. Air Force officials stated that the program has all the funding it needs; however, it is limited by the production capacity of its contractor, Boeing. While we did not have sufficient data to definitively assess the wartime capability of the JDAM, Air Force officials indicated that it has successfully fulfilled its recent wartime missions. The weapon system played a role in operations in Kosovo, Afghanistan, and Iraq. According to the Air Force, the weapon has proven operationally to be more accurate, reliable, and effective than predicted. The Air Force has not noted any factors or capability concerns that would prevent the Joint Direct Attack Munition from effectively fulfilling its wartime mission. Navy destroyers are multi-mission combatants that operate offensively and defensively, either independently or as part of carrier battle groups or surface action groups, and in support of Marine amphibious task forces. This is a 62-ship construction program, with 39 ships in the fleet as of 2003. The average age of the ships is 5.8 years, with the Arleigh Burke (DDG-51) coming into service in 1991. The follow-on program is the DD(X), with initial construction funding in 2005 and delivery beginning in 2011. We assessed the condition of the DDG-51 as yellow because work programmed for scheduled maintenance periods is often not accomplished. Because of budget limitations for each ship’s dry-dock period and a Navy effort to level port workloads and provide stability in the industrial base, maintenance items are often cut from the planned work package during dry-dock periods. Those items are then deferred to the next scheduled docking or accomplished as possible in the ship’s continuous maintenance phase.
Deferring maintenance aggravates corrosion problems, particularly for the ship’s hull. Engineering and combat systems have priority for resources, with desirable, though not necessarily essential, crew quality-of-life improvements deferred to a later time. The Navy balances risk between available resources and deferred maintenance to make the most cost-effective decisions and ensure ships deploy with minimal or no safety or combat system deficiencies. We assessed the program strategy for the DDG-51 as yellow because the Navy has developed a long-term program strategy for sustaining and upgrading the DDG-51 fleet; however, budget cuts in the Navy’s shipbuilding program affect upgrades to the warfighting systems and may lead to potential problems in the industrial base when transitioning from DDG to DD(X) ships. Navy officials noted that these budget cuts prevent them from buying the latest available technologies. These technologies are usually in warfighting systems, such as command and control and system integration areas. Management of the transition period from DDG to DD(X) shipbuilding between 2005 and 2008 will be key to avoiding problems from major fluctuations in the workload and workforce requirements. We assessed the funding for the DDG-51 as yellow because current and projected funding is not consistent with the Navy’s stated requirements to sustain and upgrade the DDG-51 fleet. Lack of multiyear budget authority creates budget inefficiencies because the Navy is required to spend supplemental and 1-year funds within the year in which they are appropriated. The Navy attempts to reduce ship maintenance costs by leveling the maintenance workload for ship contractors, which provides the Navy and contractors greater flexibility and predictability. The lack of multiyear budgeting and the need to spend supplemental and 1-year funds in the current year limit that effort. Ports are not equipped or manned to accomplish the volume of work required in the time span necessary to execute 1-year appropriations. In some cases, differences between the Navy’s estimate of scheduled maintenance costs and the contractor’s bid to do the work require cuts to the ship’s planned work package, further contributing to the deferred maintenance backlog. While we did not have sufficient data to definitively assess the wartime capability of the DDG-51, Navy officials raised a number of capability concerns. Specifically, these officials indicated that the DDG-51 has successfully fulfilled its recent wartime mission, but with some limitations, such as communications shortfalls and force protection issues. Although the DDG-51 class is the newest ship in the fleet with the most up-to-date technologies, fleet officers said there is insufficient bandwidth for communications during operations. Navy officials cited effective management of available communications assets, rather than the amount of available bandwidth, as the more immediate challenge. In the current threat environment, force protection issues remain unresolved. The use of the Rigid Hull Inflatable Boat (RHIB) during operations at sea without on-board crew-served weapons and hardening protection concerns commanders. The small caliber of sailors’ personal and crew-served weapons limits their effectiveness against the immediate and close-in threat from small boat attack. Navy FFG-7 frigates are surface combatants with anti-submarine warfare (ASW) and anti-air warfare (AAW) capabilities.
Frigates conduct escort for amphibious expeditionary forces, protection of shipping, maritime interdiction, and homeland defense missions. There are 32 FFGs in the fleet, with 30 programmed for modernization. The average age of the fleet is 19 years. The FFGs are expected to remain in the fleet until 2020. We assessed the condition of the FFG-7 as yellow because work programmed for scheduled maintenance periods is often not accomplished. Because of budget limitations for each ship’s dry-dock period and a Navy effort to level port workloads and provide stability in the industrial base, maintenance items are often cut from the planned work package during dry-dock periods. These items are then deferred to the next scheduled docking or accomplished as possible in the ship’s continuous maintenance phase. Deferring maintenance aggravates corrosion problems, particularly for the ship’s hull. Engineering and combat systems have priority for resources, with desirable, though not necessarily essential, crew quality-of-life improvements deferred to a later time. The Navy balances risk between available resources and deferred maintenance to make the most cost-effective decisions and ensure ships deploy with minimal or no safety or combat system deficiencies. There is the additional burden of maintaining older systems on the frigates. We assessed the program strategy for the FFG-7 as yellow because the Navy has developed a long-term program strategy for sustaining and modernizing the FFG-7 fleet; however, the program is susceptible to budget cuts. The modernization program is essential to ensure the frigates’ continued viability. There is also uncertainty about the role frigates will play as the Littoral Combat Ship is developed. We assessed the funding for the FFG-7 as yellow because current and projected funding is not consistent with the Navy’s stated requirements to sustain and modernize the FFG-7 fleet. The program faces uncertainty about modernization funding as well as budget inefficiencies created by the lack of multiyear budget authority and the requirement to spend supplemental and 1-year funds in the year they are appropriated. The Navy attempts to reduce ship maintenance costs by leveling the maintenance workload for ship contractors, which provides the Navy and contractors greater flexibility and predictability. The lack of multiyear budget authority and the need to spend supplemental and 1-year funds in the year in which they are appropriated limit that effort. Ports are not equipped or manned to accomplish the volume of work required in the time span necessary to execute 1-year appropriations. In some cases, differences between the Navy’s estimate of scheduled maintenance costs and the contractor’s bid to do the work require cuts to the ship’s planned work package, further contributing to the deferred maintenance backlog. While we did not have sufficient data to definitively assess the wartime capability of the FFG-7, Navy officials identified a number of capability concerns, including communications shortfalls and potential vulnerabilities in anti-air warfare. The frigate’s ability to operate in a battle group environment is limited by insufficient bandwidth and a lack of command circuits for communications requirements. The Navy shut down the frigate’s missile launcher because of excessive maintenance costs.
Ship commanders in the fleet expressed concern about potentially deploying with only one of three compensating systems for anti-air warfare missions, the on-board rapid-fire Close-In Weapon System (CIWS Block 1B). Officials in the program manager’s office stated that fielding plans were in place for the other two systems, the MK53 Decoy Launch System, called NULKA, and the Rolling Airframe Missile (RAM). These systems will help mitigate the frigate’s vulnerability now that the missile launcher has been shut down. The frigate’s value to surface groups operating independently of carriers is as a helicopter platform. The F/A-18 is an all-weather fighter and attack aircraft expected to fly in the fleet until 2030. There are six models in the current inventory of 875: A, 178; B, 30; C, 405; D, 143; E, 55; and F, 64. Average age in years is: A, 16.4; B, 18.0; C, 10.6; D, 10.1; E, 1.7; and F, 1.5. The Navy plans to eventually replace the F/A-18 with the Joint Strike Fighter. We assessed the condition of the F/A-18 as yellow because it consistently failed to meet mission capable and fully mission capable goals of 75 percent and 58 percent, respectively. Squadrons that are deployed or are training for deployment generally exceed these goals. Maintaining the aircraft is increasingly difficult because of personnel shortfalls, increased flying requirements, and a lack of ground support equipment. Navy depot personnel indicated that the availability of spare parts remains the largest issue in repairing aircraft and returning them to the fleet. We assessed the program strategy for the F/A-18 as yellow because the Navy has developed a long-term program strategy for sustaining and maintaining the F/A-18 fleet; however, it lacks a common baseline capability for all aircraft. Navy officials stated that managing the configuration of the various versions of the aircraft is challenging. Each version of the aircraft has different repair parts, unique on-board equipment, and specially trained maintainers and pilots. To increase the service life of the aircraft, the Navy initiated the Center Barrel Replacement (CBR) program. CBR replaces those parts of the F/A-18 fuselage that bear the greatest stress from landing on aircraft carriers. The Navy is also initiating a Navy/Marine Tactical Air Integration program that designates low flying-hour, low carrier-landing aircraft for carrier use and high flying-hour, high carrier-landing aircraft for shore basing. If CBR is adequately funded and the Tactical Air Integration initiative proceeds, the F/A-18 will remain a viable system into the future. We assessed the funding for the F/A-18 as yellow because current and projected funding is not consistent with the Navy’s stated requirements to sustain and maintain the F/A-18 fleet. The Navy intends to fly the F/A-18A-D models until 2020 and the E/F models until at least 2030. Funding for ground support equipment for the A-D models was eliminated, leaving operators and program managers to find resources elsewhere. Program dollars are often drawn back, pushing modernization to the out-years. This is a problem for the CBR program, which is $72 million short in the current Future Years Defense Plan. Navy personnel state that the CBR program must be fully funded to provide the number of aircraft required to support the Tactical Air Integration initiative and the standards in the new Fleet Response Plan.
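The F/A-18 inventory breakdown above lends itself to two quick checks: the model counts should sum to the stated total of 875, and a count-weighted average of the model ages gives a fleet-wide average age. The sketch below is our own illustrative arithmetic on the figures as reported; it is not an official Navy inventory calculation.

```python
# Quick consistency checks on the F/A-18 inventory figures reported above.
# (count, average age in years) by model, from the report text:
fleet = {"A": (178, 16.4), "B": (30, 18.0), "C": (405, 10.6),
         "D": (143, 10.1), "E": (55, 1.7),  "F": (64, 1.5)}

total = sum(count for count, _ in fleet.values())
weighted_age = sum(count * age for count, age in fleet.values()) / total

print(f"Total aircraft: {total}")                               # 875, matching the text
print(f"Fleet-weighted average age: {weighted_age:.1f} years")  # ~10.7 years
```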
While we did not have sufficient data to definitively assess the wartime capability of the F/A-18, Navy officials indicated that the aircraft has successfully fulfilled its wartime missions despite current condition problems. The A-D models, along with the E/F models coming into the inventory, provide a multi-capable aircraft for the many roles the warfighting commanders require. These multi-role capabilities were demonstrated during Operation Iraqi Freedom, with the F/A-18 performing air, ground attack, and refueling missions. Navy officials stated that they will do whatever is necessary to accomplish the mission but raised concerns that maintenance costs are increasing because of current condition problems. Specifically, these officials stated that increased maintenance man-hours per aircraft sortie, increased cannibalization rates, and decreased readiness rates are creating more stress on the aircraft and the personnel who fly and maintain them. The EA-6B is an integrated electronic warfare aircraft system combining long-range, all-weather capabilities with advanced electronic countermeasures. Its primary mission is to support strike aircraft and ground troops by jamming enemy radar, data links, and communications. The current inventory is 121, with an average age of 20.7 years. The follow-on aircraft is the EA-18G Growler Airborne Electronic Attack aircraft, a variant of the F/A-18E/F. We assessed the condition of the EA-6B as yellow because it consistently failed to meet the mission capable goal of 73 percent. However, squadrons training for deployment or those that are deployed generally exceed this goal. Fatigue life expenditure (FLE), the predictable rate of wear and deterioration of wing center sections and outer wing panels, is a critical problem and has caused aircraft to be temporarily grounded or placed under flying restrictions to mitigate risk to the aircraft. Wing center sections are the part of the aircraft where the wings attach to the fuselage. Outer wing panels are the portions of the wings that fold up when the aircraft is aboard carriers. The Navy is aggressively managing the problem and has programs in place to replace these items in the near term. We assessed the program strategy for the EA-6B as yellow because the Navy has developed a long-term program strategy for upgrading the EA-6B fleet; however, aircraft capability requirements may not be met in the future. The Improved Capability 3rd Generation (ICAPIII) upgrade is a significant technological leap in jamming capability over the current second-generation capability. ICAPIII will counter threats through 2015 and provides an advanced jamming capability, accurate target location, and full-circle coverage. By 2007, 30 percent of the fleet will be ICAPIII-equipped. The Navy plans for the follow-on EA-18G Growler to join the fleet between 2008 and 2012. The Navy purchase plan calls for 90 aircraft, with over two-thirds (65 aircraft) procured by 2009. We assessed the funding for the EA-6B as red because current and projected funding is not consistent with the Navy’s stated requirements to sustain and upgrade the EA-6B fleet. The Navy relies upon additional congressional appropriations rather than requesting funds to meet program requirements. In fiscal year 2003, the Congress appropriated an additional 17 percent ($40 million) over DOD’s request for the EA-6B. The Navy is not funding modernization programs to the stated requirements.
The Navy’s requirement for the ICAPIII electronic attack upgrade is 42 systems, although the Navy is funding only 35 systems. According to the program manager, funding for replacing the EA-6B’s outer wing panels is still uncertain. While we did not have sufficient data to definitively assess the wartime capability of the EA-6B, Navy officials indicated that the aircraft has successfully fulfilled its wartime missions with some limitations. Potential funding shortfalls and capability limitations may affect the aircraft’s ability to perform its mission. Only 98 of the 108 aircraft in the Navy’s EA-6B inventory are available to the fleet. Current EA-6B capabilities can meet the threat, although without an increase in the number of ICAPIII-capable aircraft, the Navy may not be able to meet future threats. According to Navy officials, warfighting capabilities will be severely affected if the Navy does not receive the requested fiscal year 2003 procurement funding for outer wing panels. Specifically, the combination of the expected wear and tear on the panels and the normal aircraft attrition rate could reduce the total EA-6B inventory by 16 aircraft in 2005. The LPD-4 ships are warships that embark, transport, and land elements of a Marine landing force and its equipment. There are currently 11 in the inventory, with an average age of 35 years. These ships are expected to remain in the fleet until 2014. The San Antonio-class LPD-17 (a 12-ship construction program, LPD-17 through LPD-28) will eventually replace the LPD-4. We assessed the condition of the LPD-4 as yellow because work programmed for scheduled maintenance periods is often not accomplished. Because of budget limitations for each ship’s dry-dock period and a Navy effort to level port workloads and provide stability in the industrial base, maintenance items are often cut from the planned work package during dry-dock periods. These items are then deferred to the next scheduled docking or accomplished as possible in the ship’s continuous maintenance phase. Deferring maintenance increases corrosion problems, particularly for the ship’s hull. There are consistent problems with the engagement system for on-board weapons and with the hull, mechanical, and electrical (HM&E) systems associated with the ship’s combat support system. The age of the LPD-4 fleet directly contributes to the deteriorating condition of the ships, particularly the hydraulic systems. The Navy balances risk between available resources and deferred maintenance to make the most cost-effective decisions and ensure ships deploy with minimal or no safety or combat system deficiencies. We assessed the program strategy for the LPD-4 as green because the Navy has developed a long-term program strategy to sustain and replace amphibious dock ships and improve support to Marine amphibious forces. The Extended Sustainment Program was initiated because of delays in delivery of the new LPD-17 class ships. The program will extend the service life of 6 of the 11 ships by an average of 7.3 years, to the 2009-2014 time frame. The program consists of 37 prioritized work items endorsed by the Navy. The follow-on LPD-17 ship construction program incorporates innovative design and total ownership cost initiatives; however, no modernization or upgrades are planned in the construction timeline from 1999 to 2013. We assessed the funding for the LPD-4 as yellow because current and projected funding is not consistent with the Navy’s stated requirements to sustain and replace amphibious dock ships.
The age and decommissioning schedule of the ships mean that funding priorities are placed elsewhere. The Navy is seeking cost savings through efforts to level the industrial base in ports and provide predictability and management flexibility for programmed maintenance work. A significant limitation in that effort is the inability to use multiyear budgeting and the need to spend supplemental and 1-year funds in the year of appropriation. Ports are often not equipped and manned to accomplish the volume of work required in the time span necessary to execute 1-year budgets. While we did not have sufficient data to definitively assess the wartime capability of the LPD-4, Navy officials did not identify any specific capability concerns. These officials indicated that the LPD-4 fulfilled its recent wartime missions of transporting Marines and their equipment and moving them ashore. The Standard Missile-2 (SM-2) is a medium- to long-range, shipboard surface-to-air missile with the primary mission of fleet area air defense and ship self-defense, and a secondary mission of anti-surface ship warfare. The Navy is currently procuring only the Block IIIB version of this missile. While the actual number in the inventory is classified, the Navy plans to procure 825 Block IIIB missiles between fiscal years 1997 and 2007. Currently, 88 percent of the inventory is older than 9 years. A qualitative evaluation program extended the initial 10-year service life to 15 years. We assessed the condition of the Standard Missile-2 as red because it failed to meet the asset readiness goal of 87 percent; only 2 of 5 variants achieved the goal in fiscal year 2002. The asset readiness goal is the missile equivalent of mission capable goals. The percentage of non-ready-for-issue missiles (currently 23 percent of the inventory) will increase because of funding shortfalls. We assessed the program strategy for the Standard Missile-2 as yellow because the Navy has developed a long-term program strategy for upgrading the Standard Missile-2 inventory; however, the Navy’s strategy mitigates risk with complementary systems as the SM-2 inventory draws down, and upgrades to counter known threats have been cut from the budget. In 2002, the Navy cancelled production of the most capable variant at the time, the SM-2 Block IVA. Currently, the most capable missile is the SM-2 Block IIIB, which is the only variant in production. This missile will be the main anti-air warfare weapon on board Navy ships into the next decade. Improved Block IIIB missiles will be available in 2004. The SM-6 Extended Range Active Missile (ERAM) is programmed for initial production in 2008 and will be available to the fleet in 2010. We assessed the funding for the Standard Missile-2 as red because current and projected funding is not consistent with the Navy’s stated requirements to upgrade the Standard Missile-2 inventory. There is a $72.6 million shortfall for maintenance and a shortfall of approximately $60 million for procurement in the current Future Years Defense Plan. While we did not have sufficient data to definitively assess the wartime capability of the Standard Missile-2, Navy officials indicated that it successfully fulfilled its recent wartime missions, but with some limitations. Block IIIB and improved Block IIIB missiles successfully counter the threats they were designed to counter. However, the most capable variant in the current inventory cannot handle the more sophisticated known air threats.
The Navy lost the capability to intercept extended-range and ballistic missiles when development of the Block IVA variant was cancelled. The improved Block IIIB missiles will mitigate some risk until the SM-6 ERAM is deployed in 2010. Further, Navy officials stated that the Navy accepts an element of risk until the SM-6 is deployed because the threat is limited in both the number of missiles and the scenarios in which those missiles would be employed. Officials also described the Navy’s anti-air warfare capability as one of complementary systems, not one singularly dependent on the SM-2 missile. The Navy successfully increased the deployment of these missiles to the fleet for the recent operations in Afghanistan and Iraq, but the growing shortage of ready-for-issue missiles in future years could severely limit the Navy’s ability to meet future requirements. The Tomahawk Cruise Missile is a long-range, subsonic cruise missile used for land attack warfare and launched from surface ships and submarines. The current inventory is 1,474 missiles, with an average age of 11.88 years and a 30-year service life. During Operation Iraqi Freedom, 788 Tomahawks were expended. The follow-on Tactical Tomahawk (TACTOM) is scheduled to enter the inventory in 2005. We assessed the condition of the Tomahawk Cruise Missile as green because it consistently met asset readiness goals in recent years. The asset readiness goal is classified. We assessed the program strategy for the Tomahawk Cruise Missile as red because, although the Navy has developed a long-term program strategy for upgrading the Tomahawk Cruise Missile inventory, the future inventory level will not be determined until funding questions are resolved. The Operation Iraqi Freedom expenditure of 788 missiles left a remaining inventory of 1,474. The replenishment missiles are all programmed to be the new Tactical Tomahawk missile. Even if funding is appropriated and executed this fiscal year, the first new missiles will not enter the inventory until late 2005 or 2006. A remanufacturing program planned for 2002-2004 is upgrading the capabilities of older missiles; 249 missiles remain to be upgraded. We assessed the funding for the Tomahawk Cruise Missile as red because current and projected funding is not consistent with the Navy’s stated requirements to replenish the inventory, and funding for new production is unresolved. Inventory replenishment funding was authorized by the Congress and, at the time of our review, was in conference to resolve differences between the two bills. While we did not have sufficient data to definitively assess the wartime capability of the Tomahawk Cruise Missile, Navy officials indicated that it has successfully fulfilled its wartime missions during recent operations in Afghanistan and Iraq. Improved Tomahawks came into the inventory in 1993 and provided enhanced accuracy against targets. The newest variant, the Tactical Tomahawk (TACTOM), is scheduled to come into the inventory in 2005 and improves the missile with an upgraded guidance system and an in-flight re-programming capability. This upgrade program is also expected to lower the missile’s unit production and life-cycle support costs. The AH-1W Super Cobra provides en route escort and protection of troop assault helicopters, landing zone preparation immediately prior to the arrival of assault helicopters, landing zone fire suppression during the assault phase, and fire support during ground escort operations.
There are 193 aircraft in the inventory, with an average age of 12.6 years. We assessed the condition of the AH-1W as yellow because it consistently failed to meet its mission capable goals from fiscal year 1998 to fiscal year 2002. Although Camp Pendleton and Camp Lejeune AH-1W maintainers cited insufficient spare parts and cannibalization as problems, operators were consistently positive in their comments about the condition of the AH-1W. Condition concerns will be remedied in the near term by the AH-1W upgrade program, which is proceeding as scheduled, with an anticipated start date of October 1, 2003. We assessed the program strategy for the AH-1W as green because the Marine Corps has developed a long-term program strategy for upgrading the AH-1W helicopter to the AH-1Z, achieving 85 percent commonality with the UH-1Y helicopter fleet. Estimated savings of $3 billion in operation and maintenance costs over the next 30 years have been reported. Additionally, the upgrade program will enhance the helicopter’s speed, maneuverability, fuel capacity, ammunition capacity, and targeting systems. We assessed the funding for the AH-1W as green because current and projected funding is consistent with the Marine Corps’ stated requirements to sustain and upgrade the AH-1W fleet. Although we assessed funding as green, Marine Corps officials at Camp Pendleton cited the need for additional funding for spare parts and noted that cost overruns have occurred in recent years in the AH-1W upgrade program. While we did not have sufficient data to definitively assess the wartime capability of the AH-1W, Marine Corps officials indicated that it successfully fulfilled its recent wartime missions, but with some limitations. Specifically, prior to Operation Iraqi Freedom, Marine Corps operators at Camp Pendleton stated that the AH-1W’s ammunition and fuel capacity was insufficient for some operations, such as those in Afghanistan. The AH-1Z upgrade program, however, will address these concerns. The Sea Knight helicopter provides all-weather, day/night, night-vision-capable assault transport of combat troops, supplies, and equipment during amphibious and subsequent operations ashore. There are 226 aircraft in the inventory. The CH-46E is more than 30 years old. The MV-22 Osprey is the planned replacement for the CH-46E. We assessed the condition of the CH-46E as red because it consistently failed to meet mission capable goals between fiscal year 1998 and fiscal year 2002. The operational mean time between failures decreased from 1.295 hours to 0.62 hours during the course of our review. Marine Corps officials cited concern over the aircraft’s age and the uncertainty about the fielding of the MV-22 to replace the Sea Knight. Marine Corps officials called the current maintenance programs critical to meeting condition requirements. We assessed the program strategy for the CH-46E as yellow because, although the Marine Corps has developed a long-term program strategy to sustain and replace the CH-46E fleet, that strategy may have to change. The sustainment strategy, dated August 19, 2003, outlines the service’s plans to sustain the CH-46E until retirement in 2015 or longer. However, according to press reports, DOD has decided to reduce the purchase of replacement systems by about 8 to 10 aircraft over the next few years. If DOD buys fewer replacement systems, the service will have to adjust the sustainment strategy to retain additional CH-46E aircraft in its inventory longer.
We assessed the funding for the CH-46E as red because current and projected funding is not consistent with the Marine Corps’ stated requirements to sustain and replace the CH-46E fleet. Marine Corps officials asserted that continued funding for maintaining the CH-46E is essential. The fiscal year 2004 budget request included funding for safety improvement kits, a long-range communications upgrade, aft transmission overhauls, and lightweight armor. The Navy lists CH-46E safety improvement kits as a $4 million unfunded requirement. While we did not have sufficient data to definitively assess the wartime capability of the CH-46E, Marine Corps officials raised a number of specific capability concerns. Specifically, these officials stated that the intended mission cannot be adequately accomplished because of insufficient payload capacity. The CH-46E has lost 1,622 pounds of lift since its fielding over 35 years ago because of increased weight and can carry only a 12-troop payload on a standard day. More recently, Marine Corps officials rated the performance of the CH-46E during Operation Iraqi Freedom as satisfactory despite these lift limitations. The AAV is an armored, fully tracked landing vehicle that carries troops in water operations from ship to shore, through rough water and surf zones, or to inland objectives ashore. There are 1,057 vehicles in the inventory. The Marine Corps plans to replace the AAV with the Expeditionary Fighting Vehicle (formerly the AAAV, or Advanced Amphibious Assault Vehicle). We assessed the condition of the AAV as yellow because of its age and the fact that the Marine Corps plans to upgrade only 680 of the 1,057 AAVs currently in the inventory. Furthermore, the planned upgrade program will only restore the vehicles to their original operating condition rather than improving their performance beyond it. We could not base our assessment of the condition on readiness rates in relation to readiness rate goals because the Marine Corps did not provide sufficient trend data. Marine Corps officials at Pacific Command stated that the heavy usage of the AAV during Operation Iraqi Freedom and the long fielding schedule of the replacement vehicle present significant maintenance challenges. However, we assessed the condition as yellow rather than red based on favorable comments about the current condition of the AAV from operators and maintainers. We assessed the program strategy for the AAV as yellow because the Marine Corps has developed a long-term program strategy for overhauling the AAV; however, the program only restores the vehicles to their original operating condition and does not upgrade them beyond it. The Marine Corps initiated a Reliability, Availability and Maintenance/Rebuild to Standard (RAM/RS) upgrade program in 1998 to restore capabilities and lengthen the expected service life of the AAV to sustain the vehicles until the replacement system, the Expeditionary Fighting Vehicle, can be fielded. The RAM/RS program is expected to extend the AAV service life by an additional 10 years. These vehicles will be needed until the replacement vehicles can be fielded in 2012. However, the procurement of the replacement vehicles has reportedly already been delayed by 2 years. We assessed the funding for the AAV as yellow because current and projected funding is not consistent with the Marine Corps’ requirements to upgrade the AAV inventory.
Requested funding rose from $13.5 million in fiscal year 1998 to $84.5 million in fiscal year 1999 as the Marines initiated the RAM/RS program. The requested funding level declined to $66.2 million by fiscal year 2002. The Marine Corps identified a $48.9 million unfunded program in the fiscal year 2004 budget request to extend RAM/RS to more vehicles. Marine Corps officials are concerned that funding to reconstitute vehicles returning from Operation Iraqi Freedom will not cover putting those vehicles through the RAM/RS program. While we did not have sufficient data to definitively assess the wartime capability of the AAV, Marine Corps officials indicated that it has successfully fulfilled its wartime missions, but with some limitations. While these officials cited the AAV as integral to ground operations during Operation Iraqi Freedom, they noted specific stresses placed on the vehicles. For example, AAVs deployed to Operation Iraqi Freedom traveled, on average, over 1,000 miles each, a majority of those miles under combat conditions. Those conditions added about 5 years’ worth of mileage and wear and tear to the vehicles over a 6- to 8-week period. In addition, prior to Operation Iraqi Freedom, Marine Corps officials at Camp Lejeune highlighted problems they encountered in obtaining enhanced armor kits to protect the vehicles from the .50 caliber ammunition used by Iraqi forces. At the time of our review, only 26 of the 213 AAVs at Camp Lejeune had been provided the enhanced armor kits. Marine Corps officials at Camp Lejeune believed the lack of kits was due to insufficient funding. The LAV-C2 variant is a mobile command station providing field commanders with the communication resources to command and control Light Armored Reconnaissance (LAR) units. It is an all-terrain, all-weather vehicle with night capabilities and can be made fully amphibious within three minutes. There are 50 vehicles in the inventory, with an average age of 14 years. We assessed the condition of the LAV-C2 as green because the Marine Corps has initiated a fleet-wide Service Life Extension Program (SLEP) to extend the service life of the vehicle from 20 years to 27 years. The LAV-C2 SLEP includes enhancements to communications capabilities. Marine Corps officials cautioned that any delays in SLEP could affect future readiness. While we assessed the condition as green, we noted that the operational readiness rate for the command and control variant was 90.5 percent, below the 100 percent goal but higher than the operational readiness rate of 85 percent for the entire fleet. We assessed the program strategy for the LAV-C2 as green because the Marine Corps has developed a long-term program strategy for upgrading the LAV-C2 inventory. The program funded in the current FYDP will enhance communications capabilities and power systems and may afford commonality with Unit Operation Center and helicopter systems. The Marine Corps intends for the upgraded LAV-C2 to provide a prototype to establish baseline requirements for future capabilities and a successor acquisition strategy. Marine Corps officials stated that the C2 upgrade program needs to be supported at all levels. We assessed the funding for the LAV-C2 as green because current and projected funding is consistent with the Marine Corps’ stated requirements to upgrade the LAV-C2 inventory. Marine Corps officials have requested $72.2 million in the current FYDP to support major LAV-C2 technology upgrades.
Marine Corps officials at Pacific Command recommended increased funding for the procurement of additional vehicles, citing the current inventory deficiency as critical. While we did not have sufficient data to definitively assess the wartime capability of the LAV-C2, Marine Corps officials indicated that it has successfully fulfilled its recent wartime missions. Marine Corps reports on the operations in Afghanistan cited LAVs in general as the most capable and dependable mobility platform, even though the limited number of available C-17 transport aircraft constrained the deployment of the vehicles. Initial reports from Operation Iraqi Freedom also indicate that the LAV-C2 performed successfully. The Maverick missile is a precision-guided, air-to-ground missile configured primarily for the anti-tank and anti-ship roles. It is launched from a variety of fixed-wing aircraft and helicopters, and there are laser- and infrared-guided variants. The Maverick missile was first fielded in 1985. We assessed the condition of the Maverick missile as not applicable because the Marine Corps does not track readiness data, such as mission capable or operational readiness rates, for munitions as it does for aircraft and other equipment. We assessed the program strategy for the Maverick missile as green because the Marine Corps has developed a long-term program strategy for replacing the Maverick missile with more capable missiles. Maverick missile procurement ended in 1992, and the infrared variant will be retired from use in 2003. According to Marine Forces Pacific Command officials, a joint common missile is being developed and is scheduled for initial operational capability in 2008. The new missile will be a successor to the Maverick, Hellfire, and TOW missiles. Marine Corps officials stated that a joint reactive precision-guided munition for both fixed- and rotary-wing aircraft, a potential successor to the Maverick and Hellfire missiles, will be submitted to the Joint Requirements Oversight Council for evaluation in fiscal year 2003. We assessed the funding for the Maverick missile as green because current and projected funding is consistent with the Marine Corps’ stated requirements to replace the Maverick missile inventory. Since fiscal year 1998, the Marine Corps has limited funding for the Maverick to the operation and maintenance accounts. While we did not have sufficient data to definitively assess the wartime capability of the Maverick missile, Marine Corps officials indicated that it has successfully fulfilled its recent wartime missions, but with some limitations. Specifically, these officials stated that the Maverick missile lacks an all-weather capability. Marine Corps officials cited increased risks due to sensor limitations of the laser variant that restrict the missile’s use to low-threat environments. Although the Maverick fulfilled its wartime mission during Operation Iraqi Freedom, Marine Corps officials stressed that its success was due to the optimal environment for the Maverick: desert terrain and a lack of low cloud cover. In any other type of environment, the Maverick’s use would be limited. In addition to the individual named above, Richard Payne, Donna Rogers, Jim Mahaffey, Patricia Albritton, Tracy Whitaker, Leslie Harmonson, John Beauchamp, Warren Lowman, Ricardo Marquez, Jason Venner, Stanley Kostyla, Susan Woodward, and Jane Lusby made key contributions to this report.
GAO was asked to assess the condition of key equipment items and to determine whether the services have adequate plans for sustaining, modernizing, or replacing them. To address these questions, we selected 25 major equipment items and determined (1) their current condition, (2) whether the services have mapped out a program strategy for these items, (3) whether current and projected funding is consistent with these strategies, and (4) whether these equipment items are capable of fulfilling their wartime missions.

Many of our assessments of the 25 judgmentally selected critical equipment items indicated that the problems or issues we identified were not severe enough to warrant action by the Department of Defense, the military services, or the Congress within the next 5 years. The condition of the items we reviewed varies widely, from very poor for some of the older equipment items, like the Marine Corps CH-46E Sea Knight helicopter, to very good for some of the newer equipment items, like the Army Stryker vehicle. The problems we identified were largely due to (1) maintenance problems caused by equipment age and a lack of trained and experienced technicians and (2) spare parts shortages.

Although the services have mapped out program strategies for sustaining, modernizing, or replacing most of the equipment items we reviewed, some gaps exist. In some cases, such as the KC-135 Stratotanker and the Tomahawk missile, the services have not fully developed or validated their plans for the sustainment, modernization, or replacement of the items. In other cases, the services' program strategies for sustaining the equipment are hampered by problems or delays in the fielding of replacement equipment or by the vulnerability of the programs to budget cuts.

For 15 of the 25 equipment items we reviewed, there appears to be a disconnect between the funding requested by the Department of Defense or projected in the Future Years Defense Program and the services' program strategies to sustain or replace the equipment items. For example, we identified fiscal year 2003 unfunded requirements, as reported by the services, totaling $372.9 million for four major aircraft: the CH-47D helicopter, F-16 fighter aircraft, C-5 transport aircraft, and CH-46E transport helicopter.

The 25 equipment items we reviewed appear to be capable of fulfilling their wartime missions. While we were unable to obtain sufficient data to definitively assess wartime capability because of ongoing operations in Iraq, the services generally ensure that equipment is ready to go to war, often by surging their maintenance operations and overcoming other obstacles. Some of the equipment items we reviewed, however, have capability deficiencies that could degrade their wartime performance in the near term.
According to State, more than 25,000 U.S. government personnel are assigned to 275 U.S. diplomatic posts overseas. These officials represent a number of agencies besides State, including the Departments of Agriculture, Defense, Homeland Security, Justice, and the Treasury and the U.S. Agency for International Development. As we reported in 2005, State considers soft targets to be places where Americans and other westerners live, congregate, shop, or visit. In addition to residences and schools, soft targets can include hotels, clubs, restaurants, shopping centers, places of worship, and public recreation events. Travel routes of U.S. government employees are also considered soft targets because of their vulnerability to terrorist attacks. For the purposes of this report, we focus primarily on U.S. diplomatic residences; we also address schools attended by U.S. government dependents and off-compound employee association facilities, since such schools and facilities are eligible for State-funded security upgrades.

As of the end of fiscal year 2014, the U.S. government leased or owned more than 15,000 residences worldwide, according to State. About 13,000 were residences leased by the U.S. government and located off embassy or consulate compounds, while slightly more than 2,000 were government-owned, most of which were also located off embassy or consulate compounds.

With respect to schools, State estimates that there are nearly 250,000 school-age American children overseas, of whom approximately 8,000 are U.S. government dependents. State provides assistance to almost 200 “American-sponsored” schools worldwide to help provide quality education for children of U.S. government employees. U.S. government dependents may also attend any other schools preferred by their parents. In addition, State has chartered about 130 employee associations at posts overseas. These associations maintain a variety of facilities, including retail stores, cafeterias, recreational facilities, and quarters for officials on temporary duty. Some of these facilities are located off embassy or consulate compounds. The vast majority of the facilities are either owned or leased by the U.S. government.

According to State, host-country police, security, and intelligence forces are often the first line of defense in protecting U.S. government personnel against potential threats. Additionally, as required by the Omnibus Diplomatic Security and Antiterrorism Act of 1986, the Secretary of State, in consultation with the heads of other federal agencies, is responsible for developing and implementing policies and programs to protect U.S. government personnel on official duty abroad, along with their accompanying dependents. Responsibility for the security of residences and other soft targets overseas falls primarily on DS and OBO. DS is responsible for, among other things, establishing and operating security and protective procedures at posts, chairing the interagency process that sets security standards, and developing and implementing posts' residential security programs, which includes providing funding for most residential security upgrades. At posts, DS agents known as RSOs, including deputy RSOs and assistant RSOs, are responsible for protecting personnel and property, documenting threats and residential vulnerabilities, and identifying possible mitigation efforts to address those vulnerabilities.
Posts with high turnover of residences or a large number of residences may also have a residential security coordinator on the RSO's staff to assist with supervision and management of the posts' residential security programs. RSOs are also responsible for offering security advice and briefings to schools attended by U.S. government dependents and for recommending security upgrades to school and employee association facilities. OBO tracks information on State's real properties, including residences; provides funding for certain residential security upgrades; and funds and manages the Soft Target Program, State's program for providing security upgrades to schools attended by U.S. government dependents and off-compound employee association facilities.

State's policies are outlined in the FAM and the corresponding FAH. Sections of the FAM and FAH relevant to residential security include various subchapters detailing State's program for residential security, the OSPB residential security standards, and security-related guidance found in the Residential Security Handbook and Physical Security Handbook. In addition, the FAM includes sections that provide guidance about schools that enroll U.S. government dependents and off-compound employee association facilities. See table 1 for further details on selected FAM and FAH policies that are pertinent to securing residences and other soft targets. In addition to these policies, State has produced other guidance documents, such as a matrix that identifies which residential security upgrades DS and OBO, respectively, are responsible for funding and an OBO-drafted cable that outlines the process for requesting security upgrades at schools and employee association facilities.

State assesses risks to U.S. diplomatic residences overseas using a range of activities, but many security surveys were not completed for residences we visited. We found that State (1) records and monitors information on overseas residences in its property database, (2) establishes threat levels at overseas posts, (3) develops security standards for residences, and (4) uses these standards to conduct security surveys of residences to identify vulnerabilities. However, 17 of 68 surveys for residences we visited were not completed as required, thereby limiting State's ability to effectively and efficiently identify and address vulnerabilities.

OBO is responsible for maintaining records on all diplomatic residences overseas in its real property database (hereafter referred to as OBO's property database). OBO's property database contains data on residences owned and leased by the U.S. government and includes details such as residence type and address, whether a given residence is leased or owned, the agency affiliation of the occupant, and the acquisition date. DS officials told us that they rely on OBO's database as their source for such details on residences. As we have previously reported, maintaining accurate and reliable property information has been a long-standing challenge for State. OBO has taken a number of steps to enhance its property data since we first reported on this issue, including hiring dedicated analysts to review and validate data entered at posts and processing budget requests through the property database so that funding requests from posts are linked to the accuracy of the posts' property data.
State also concurred with our June 2014 recommendation that OBO establish a routine process for validating the accuracy of the data in its property database, and we continue to follow up on State's efforts to implement the recommendation. During this review, we found inaccuracies in the property data for 2 of the 68 residences we visited: in both cases, the residence type listed was incorrect. OBO officials told us that they reached out to the posts where the residences were located and asked them to input the correct information.

DS conducts two key activities to help assess risks to residences. First, DS evaluates the security situation at each overseas post by assessing five types of threats—political violence, terrorism, crime, and two classified categories—and assigning corresponding threat levels for each threat type. The threat levels are as follows: critical (grave impact on U.S. diplomats), high (serious impact on U.S. diplomats), medium (moderate impact on U.S. diplomats), and low (minor impact on U.S. diplomats). Threat levels for each post are assessed and updated annually in the Security Environment Threat List. According to DS officials, the bureau develops the list based on questionnaires filled out by post officials, and the final threat ratings are reviewed and finalized through an iterative process involving officials at overseas posts and headquarters. These threat levels are used to determine the security measures required for residences at each post.

Second, in consultation with the interagency OSPB, DS develops physical security standards for diplomatic facilities and residences. The residential security standards apply to all residences of U.S. government personnel assigned abroad under chief-of-mission authority. The OSPB standards are published in the FAH and vary by residence type. Specifically, there are separate OSPB standards for six different residence types, one of which is on-compound housing. The remaining five are for off-compound (1) apartments, (2) single family homes, (3) residential compounds, (4) Marine Security Guard residences, and (5) residences for principal officers. OSPB standards also vary by date of construction or acquisition, by threat level, and by whether they are mandatory. If residences do not meet all applicable mandatory standards, posts are required to request exceptions to the OSPB standards.

Within the OSPB standards, we identified six key categories of security standards to protect residences from the threats of political violence, terrorism, and crime. These include (1) an anti-climb perimeter barrier, such as a wall or a fence, and access control; (2) setback from the perimeter; (3) a secure off-street parking area; (4) a secure building exterior with substantial doors and grilled windows with shatter-resistant film; (5) alarms; and (6) a safe space for taking refuge. Figure 1 portrays these six categories at a notional residence. In addition to the OSPB standards, State developed the Residential Security Handbook and Physical Security Handbook, also published in the FAH, which provide detailed supporting information designed to help officials understand how to implement and meet the OSPB standards. Because the standards vary along several dimensions at once, they can be pictured as a keyed lookup, as sketched below.
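The following sketch, a minimal illustration in Python, shows one way such a multi-key structure could be represented. The residence types, threat levels, and the grille requirement for principal officer residences are taken from this report; everything else, including all names and the remaining sample measures, is hypothetical and is not part of any State system.

    # Illustrative only: keys combine residence type and threat level.
    # The actual OSPB standards also vary by date of construction or
    # acquisition, which a fuller model would fold into the key.

    # The six residence types for which separate OSPB standards exist:
    RESIDENCE_TYPES = [
        "on_compound_housing",
        "off_compound_apartment",
        "off_compound_single_family_home",
        "off_compound_residential_compound",
        "off_compound_marine_security_guard_residence",
        "off_compound_principal_officer_residence",
    ]

    STANDARDS = {
        ("off_compound_principal_officer_residence", "high"): [
            # Stated in this report for this profile:
            ("grilles on all accessible windows", "mandatory"),
            # Placeholder entries for illustration only:
            ("anti-climb perimeter barrier and access control", "mandatory"),
            ("safe space for taking refuge", "recommended"),
        ],
    }

    def applicable_standards(residence_type, threat_level):
        """Look up the security measures that apply to one residence profile."""
        return STANDARDS.get((residence_type, threat_level), [])

A lookup of this kind is one conceivable basis for the automated template that, as discussed later in this report, DS officials plan to develop to help RSOs identify the relevant standards.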
According to the FAM, before State can purchase or lease an overseas residence, the RSO must conduct a residential security survey of the property, document all security deficiencies that must be corrected, and approve the purchase or lease of the residence. Off-compound residences must be resurveyed every 5 years, and on-compound residences must be resurveyed every 3 years. Additionally, according to DS officials, RSOs at posts that experience a change in threat level must resurvey all post residences within 1 year of the threat-level change.

While officials at the posts we visited were able to provide us with up-to-date surveys for most of the 68 residences that we evaluated, not all surveys at five of the seven posts we visited met the requirements outlined in the FAH and the FAM. Specifically, 17 surveys were not completed as required: 9 surveys were outdated, 1 survey was completed after the residence had already been leased, and 7 surveys were missing. Among the residences with surveys that did not meet requirements were those of two principal officers. At one post, we found that the consul general's residence had not been surveyed since 2006. At another, the RSO was unable to find a survey for the ambassador's residence. Missing or outdated surveys may limit DS's and posts' ability to identify and address residential security vulnerabilities that could otherwise have been recognized and corrected through the security survey process.

As noted in the framework we developed to help federal agencies implement the Government Performance and Results Act of 1993, leading organizations reinforce results-oriented management by giving their managers extensive authority to pursue organizational goals in exchange for accountability for results. According to officials at DS headquarters, ensuring that residential security surveys are completed as required is the responsibility of individual posts. These officials added that they recently started reviewing surveys for on-compound residences. However, aside from periodic DS headquarters-led inspections that review, in part, the extent to which posts are conducting residential security surveys as required, DS has not instituted procedures to hold posts accountable for complying with the survey requirements for off-compound residences, which, as noted earlier, greatly outnumber on-compound residences. Without up-to-date security surveys of all its overseas residences, State has limited ability to effectively and efficiently identify vulnerabilities or make informed decisions about where to allocate resources for security upgrades to address such vulnerabilities. The resurvey requirements amount to a small set of date rules, as the sketch following this paragraph shows.
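As a minimal sketch, assuming hypothetical field names and no connection to any actual State system, the following Python function flags residences whose surveys are overdue under the rules described above.

    from datetime import date, timedelta
    from typing import Optional

    # Intervals described in this report: off-compound residences must be
    # resurveyed every 5 years, on-compound residences every 3 years, and
    # all post residences within 1 year of a threat-level change.
    OFF_COMPOUND = timedelta(days=5 * 365)
    ON_COMPOUND = timedelta(days=3 * 365)
    AFTER_THREAT_CHANGE = timedelta(days=365)

    def survey_overdue(last_survey: date,
                       on_compound: bool,
                       threat_level_changed: Optional[date] = None,
                       today: Optional[date] = None) -> bool:
        """Return True if a residence's security survey is overdue."""
        today = today or date.today()
        interval = ON_COMPOUND if on_compound else OFF_COMPOUND
        if today - last_survey > interval:
            return True
        # A threat-level change restarts the clock for every residence
        # at the post that has not been resurveyed since the change.
        if threat_level_changed is not None and last_survey < threat_level_changed:
            return today - threat_level_changed > AFTER_THREAT_CHANGE
        return False

Run against OBO's property database, a check along these lines could give DS headquarters the routine, post-level visibility into survey compliance that periodic inspections alone do not provide.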
State Has Taken or Planned Actions to Enhance Residential Security Following Attacks on U.S. Facilities and Personnel

State has taken steps to enhance residential security in response to previous attacks on U.S. facilities. These have included actions taken or planned to address recommendations resulting from interagency security assessments and Accountability Review Board (ARB) reports. In response to the September 2012 attacks against U.S. diplomatic facilities—including facilities in Libya, Sudan, Tunisia, Yemen, and Egypt, among others—State formed several Interagency Security Assessment Teams to assess security vulnerabilities at 19 posts that the Bureau of Diplomatic Security considered to be high-threat and high-risk. Rather than measuring facilities against the Overseas Security Policy Board standards typically used for such assessments, the teams examined all facilities at the 19 posts for any type of security vulnerability, physical or procedural. This assessment process resulted in a report that recommended physical security upgrades at some residences. At one post that we visited, officials noted that in response to this report they had added a closed-circuit television system at the ambassador's residence but had not yet installed an emergency siren system at a residential compound, which the report had also recommended.

State has also taken or planned steps to address recommendations from ARB reports stemming from previous attacks on U.S. facilities and personnel overseas. According to State, there have been four ARB reports related to the security of residences, with 10 residential security recommendations in total. We found that 8 of the 10 recommendations related to residential security have been implemented. For instance, State established an interagency working group in response to a recommendation to conduct a comprehensive review of issues related to residential security. The 2 open recommendations are both from the 2005 ARB report on the attacks on the U.S. consulate in Jeddah, Saudi Arabia. Those recommendations, which called for the construction of a new consulate compound along with residences that meet the relevant security standards, will be closed as implemented once staff transition into the new consulate compound in Jeddah. According to State documentation, the new consulate compound was originally projected for completion in March 2010. However, in May 2010, State terminated the contract because of the original contractor's failure to perform. State awarded a new contract for the project in September 2012. State officials told us in November 2014 that they expect substantial construction of the new consulate compound to be completed in December 2015 and that the planned move-in date is March 2016.

Over the last decade, State has taken steps to develop or revise several sets of OSPB residential security standards. However, we found that State has not been timely in updating these standards, nor has it always communicated changes to posts promptly. Moreover, the OSPB standards and other security-related guidance for residences are confusing and contain gaps and inconsistencies, thereby complicating posts' efforts to apply the appropriate security measures and potentially leaving residences at risk. Federal internal control standards state that agencies must have timely communication and information sharing to achieve objectives; it is therefore vital that agencies update their policies in a timely manner, particularly when lives and the security of property and information are at stake.

DS manages the interagency process by which OSPB security standards are updated. According to DS officials, it should take about 75 days to make an update to the OSPB standards: up to 30 days to draft and obtain approval within DS for an update to the security standards in the FAH, up to another 15 days to obtain approval of the draft changes by other relevant stakeholders within State, such as OBO and the Office of the Legal Adviser, and an additional 30 days to obtain approval from OSPB members. Once all of the required approvals are obtained, DS sends the update to the Bureau of Administration for publishing. Since 2005, State has taken steps to update three sections of the FAH with new or revised OSPB residential security standards; however, the time it took to complete these updates significantly exceeded 75 days.
Specifically, State (1) developed new residential security standards to address the threat of terrorism, (2) revised the standards for newly acquired on-compound housing, and (3) developed new standards for existing on-compound housing. In each of these cases, the update process took more than 3 years, including one instance that took more than 9 years (see fig. 2).

In April 2005, the ARB resulting from the attacks on the U.S. consulate in Jeddah, Saudi Arabia, recommended developing residential security standards to address terrorism. A working group completed a final draft of the standards 4 years later, in April 2009, and it took other stakeholders within State nearly 5 more years to clear the standards. As a result, the standards were not approved by OSPB and published until May 2014. Moreover, DS did not notify posts that the new standards had been completed until mid-October 2014—5 months after their publication and 3 months after they went into effect. DS officials stated that their decision to notify posts was prompted in part by our asking how posts become aware of new or revised standards. Of the three posts that we visited prior to DS's notification, one had found the new standards in the FAH on its own in the same month that they went into effect, one had found them on its own but only after they had gone into effect, and one was unaware of them until we mentioned them during our visit. According to DS officials, these new standards applied immediately to off-compound residences acquired on or after July 1, 2014; all other off-compound residences have to meet the standards within 3 years—by July 2017—or receive exceptions. However, because DS did not notify posts about the new standards until October 2014, post officials effectively lost several months during which they could have been preparing to apply the new standards. Additionally, officials at one post stated that the DS notification arrived 1 day before the deadline for submitting a budget request for the following fiscal year. In order to request funding for newly required security features, they had to make major revisions to the budget request they had already developed but had very little time to do so. Likewise, officials at another post stated that they had to make significant revisions to the building plan for a new residential compound already under construction in order for it to meet the new standards.

In November 2010, OSPB gave its approval for revising the standards for newly acquired on-compound housing. These standards were not finalized until September 2014, and DS notified posts of their publication in October 2014. In November 2010, OSPB also gave its approval for developing new standards for existing on-compound housing. The standards did not receive approval within DS until September 2014, and State published them in May 2015.

As we reported in June 2014, two key factors can cause major delays in DS's process for updating security standards. First, if a stakeholder suggests a change to the draft standards at any time during the review process, the proposed draft must go through the entire review process again; some stakeholders may then request additional time for reviewing proposed changes, further prolonging the process.
Second, when changes are being made to an existing FAH subchapter, the FAH requires officials to review and update the entire subchapter, and according to DS officials, there is no specific exception for updates to standards aimed at ensuring security and protecting lives. As a result, even when DS needs to make urgent changes to the OSPB standards, it must review and update the entire subchapter containing the update.

In an attempt to mitigate delays in the process for updating OSPB standards, State has taken some steps to help posts apply draft standards before they are officially approved, but these steps have not fully addressed the delays. While the standards for newly acquired on-compound housing were still in the OSPB approval process, State incorporated them into the Physical Security Handbook—updates to which, according to DS, require clearance only within State—so that RSOs could begin to apply them. Similarly, State incorporated the standards for existing on-compound housing into the Physical Security Handbook before DS approved them. With respect to timely communication of updated standards, DS recently began sending monthly notices to RSOs to announce recently published updates to the FAH. However, DS had not yet begun these monthly notices when the new standards addressing terrorism were finalized in May 2014. Thus, these efforts notwithstanding, delays in updating OSPB security standards and communicating the updates may leave posts unaware of the most current security measures required to address identified threats.

Accordingly, we recommended in June 2014 that State take steps to ensure that updates to security and safety standards are approved through an expedited review process. State concurred with the recommendation, explaining that it had shortened the deadline for department clearance on draft policies from 30 days to 15 days. However, in two of the three cases outlined above—the revised standards for newly acquired on-compound housing and the new standards for existing on-compound housing—draft standards were submitted for department clearance after State's decision to shorten the associated deadline to 15 days. In both cases, it took more than 4 months to secure department clearance. We continue to follow up on State's efforts to implement our recommendation to use an expedited review process for updating security and safety standards.

According to federal internal control standards, policy standards should be clear, complete, and consistent in order to facilitate good decision making in support of agency objectives. The FAH similarly directs officials who are drafting FAH and FAM directives to write “in plain language whenever possible” and to convey “a clear sense of what you want the reader to do or not do.” However, we found that relevant State residential security standards and related guidance are confusing and contain gaps and inconsistencies, making it difficult for RSOs to identify and apply the appropriate security measures.

Several aspects of State's residential security standards and related guidance contribute to their confusing nature.

Dispersed across the FAH and FAM. State's residential security standards and related guidance are presented throughout various sections of the FAH and FAM. RSOs at six posts we visited and even some headquarters officials involved in developing the standards stated that they find it challenging to keep track of all the sections where relevant standards and guidance appear.
We located relevant standards and guidance in nine different subchapters of the FAH and FAM. Further, one of the subchapters refers to a set of standards that predated the 1998 embassy bombings and no longer exists. DS has taken steps to mitigate this dispersion by creating tables that consolidate the relevant standards. Specifically, DS has created tables for each of the following: (1) the standards for newly acquired on-compound housing, (2) the standards for existing on-compound housing, and (3) the standards for off-compound residences, including the new May 2014 standards to address terrorism. DS officials stated in April 2015 that they had not yet provided RSOs with the table for off-compound residences but planned to do so in May 2015. In addition, DS officials told us that they plan to develop an automated template for on-compound housing to assist RSOs in identifying the relevant standards.

Confusing terminology. RSOs at five posts stated that the standards and guidance are sometimes worded in a confusing manner. Specifically, while some standards are worded as mandatory measures that “must” be taken, others are worded in a way that could be interpreted as either mandatory or discretionary. For example, some measures “should” or will “ideally” be taken, or are “recommended”; likewise, some standards “must be considered,” while others “should be considered.” DS officials told us that the intent of using various terms in the guidance is to give RSOs some flexibility in deciding which measures are applicable to their posts. However, a number of RSOs told us that they sometimes had difficulty deciding whether and how to apply certain standards because of their confusing wording. These RSOs said that difficulty in distinguishing the nuances among the various terms sometimes left them uncertain whether the residences at their posts were fully in compliance with the standards.

Unclear housing categories. While the new standards issued in May 2014 include a section outlining specific security measures for residential compounds, DS headquarters officials told us that they have had difficulty defining what differentiates this type of housing from single family homes located in gated communities, which are subject to a separate set of security measures. Officials added that in the absence of clearly defined housing categories, it is difficult for RSOs to know which standards to apply.

In addition to finding State's residential security standards and guidance confusing, we identified multiple gaps and inconsistencies, including the examples described below.

The previous version of the FAM subchapter detailing State's program for residential security stated that when a post experiences a change in its Security Environment Threat List rating, the post must resurvey residences to determine what security upgrades, if any, are needed. However, a new version of the FAM subchapter published in August 2014 does not include this requirement. When we asked DS officials about this, they told us it was an oversight and stated that the requirement still exists. They also stated that because we brought this issue to their attention, they plan to revise the standards issued in May 2014 to include the requirement.

While the new standards released in May 2014 call for pedestrian and vehicle gates at residences to have locking devices, DS officials noted that the standards as written do not explicitly require the residences to have gates.
They told us they plan to modify the wording of this standard to clarify that gates are required.

In addition to announcing the completion of residential security standards to address terrorism, the notification DS sent to posts in October 2014 enumerated other sets of security standards for posts to apply, including standards to address crime and standards for newly acquired on-compound housing. However, the notification did not mention the security standards in effect at the time for existing on-compound housing. Additionally, those standards for existing on-compound housing, which dated back to December 1999, were labeled in the FAH as standards for “new” on-compound housing. This could potentially have caused confusion, since the September 2014 standards for newly acquired on-compound housing are also labeled as standards for “new” on-compound housing. As noted earlier, State issued new standards for existing on-compound housing in May 2015. Because State completed those standards late in our review, we were unable to fully evaluate them.

We found inconsistent guidance on whether residential safe havens are required to have an emergency exit. The Residential Security Handbook states that every residential safe haven must have an emergency exit. By contrast, a definitional section of the OSPB security standards states that an emergency exit is required in residential safe havens that have grilles and are located below the fourth floor. A third variation appears in the new residential security standards to address terrorism released in May 2014; it states that residential safe havens must have an emergency exit “if feasible.” DS officials told us that they plan to update the May 2014 standards to help RSOs determine feasibility, but it is unclear whether they plan to eliminate the inconsistencies we identified.

These gaps and inconsistencies exist in part because DS has not comprehensively reviewed and harmonized its various standards and security-related guidance for residences. The FAH requires OSPB to review all the OSPB standards periodically—at least once every 5 years. In practice, as we previously reported, the process by which security standards are updated is typically triggered by an event, such as an attack, rather than by a periodic and systematic evaluation of all the standards. As noted above, DS officials are planning updates to the OSPB residential security standards to remedy some of the issues we found. DS officials also stated that they are in the process of updating the Residential Security Handbook and, as part of that effort, are adding further clarifications and details to help guide RSOs. However, as discussed earlier in this report, updates to the OSPB standards have frequently taken State several years. Furthermore, the planned updates that DS officials discussed with us do not constitute a comprehensive effort to review all standards and security-related guidance for residences to identify all potential gaps, inconsistencies, and instances where clarity is lacking. Consequently, the confusing nature of the standards and guidance and the gaps and inconsistencies they contain may continue to complicate RSOs' efforts to identify and apply the appropriate security measures, potentially leaving some residences at greater risk. At a minimum, such gaps and inconsistencies in the standards and guidance can lead to confusion and inefficiency.
For example, according to RSO officials at one post we visited, DS inspectors from headquarters told them during a review of the post's security operations that residential safe havens must have a reliable water source and grilles on all external windows, including inaccessible windows. We subsequently verified with DS headquarters that no such requirements exist for residential safe havens.

Over the last 6 fiscal years, State has allocated about $170 million for security upgrades to help address vulnerabilities identified at diplomatic residences. However, 38 of the 68 residences that we reviewed did not meet all of the applicable standards, thereby potentially placing their occupants at risk. In instances when a residence does not and cannot meet the applicable security standards, posts are required to either seek other residences or request exceptions, which identify steps the posts will take to mitigate vulnerabilities. However, DS had an exception on file for only 1 of the 38 residences that we found did not meet all of the applicable standards. Without all necessary exceptions in place, State lacks information that could provide decision makers with a clearer picture of security vulnerabilities at residences and enable them to make better risk management decisions. In addition, new, more rigorous security standards will likely increase posts' need for exceptions and lead to considerable costs for upgrades.

State addresses security vulnerabilities at residences by installing various kinds of upgrades intended to help residences meet, or in some cases exceed, the applicable standards. According to State guidance, every effort should be made to have owners or landlords of leased residences complete any permanent residential security upgrades at no cost to the U.S. government. If the owners or landlords are unwilling or unable to complete the necessary upgrades, RSOs have the option either to request funding for upgrades or to seek alternate residences. Security upgrades for residences are primarily funded through DS, which funds all upgrades—such as window grilles, residential safe havens, and shatter-resistant window film—other than perimeter barriers and some access control measures at certain residences. As shown in table 2, in fiscal years 2010 through 2015, DS allocated approximately $164 million for residential security upgrades. Over the same period, OBO allocated more than $6 million for residential security upgrades to perimeter barriers and some access control measures at government-owned residences and certain leased residences.

In some cases, RSOs may determine that the OSPB residential security standards applicable at their posts are not stringent enough to address threat conditions. In such instances, the RSO, in consultation with the post's Emergency Action Committee, may seek to implement security measures that go above the standards. If an owner or landlord does not agree to install a security measure that goes above the standards, the post may choose to request funding from DS. For example, DS officials told us that they approved a request for funding to install additional security measures at a post where single family homes were meeting all the applicable standards but were still experiencing break-ins.

Diplomatic residences are required to meet OSPB security standards.
The FAH and the FAM state that when residences do not meet and cannot be made to meet the applicable standards, and no other acceptable alternatives are available, posts are required to request exceptions to the standards from DS headquarters. According to State, exception requests are required to identify the steps posts will take to mitigate vulnerabilities, and approved exceptions serve to document State's acceptance of any unmitigated risk that remains. DS officials clarified that posts are required to apply for exceptions for any unmet standards that are worded in terms of mandatory measures that “must” be taken.

However, more than half of the residences we reviewed at the seven posts we visited did not meet all applicable mandatory security standards and lacked required exceptions to those standards. Of the 68 residences we reviewed at the seven posts, 38 did not meet all of the mandatory standards applicable to them at the time, even though, according to post officials, most of the 38 had received security upgrades in recent years. Moreover, 23 of the 38 residences did not meet two or more of the mandatory standards. When we discussed the unmet standards with RSOs and other post officials, they generally agreed with our assessments and stated in several cases that they would take steps to address the vulnerabilities. In some cases, though, post officials were unable to provide explanations for unmet standards. For instance, although 3 of the 4 off-compound principal officer residences we visited did not have grilles on all accessible windows as required—thereby creating vulnerabilities at these potentially high-profile targets—officials could not explain why grilles were missing other than to suggest that current principal officers and their predecessors may have wanted to leave certain windows without grilles for aesthetic purposes. In addition, two factors discussed earlier in this report may contribute to unmet standards. First, missing or outdated residential security surveys may hamper posts' ability to identify and address residential security vulnerabilities that could have otherwise been recognized and corrected. Second, the difficult-to-use nature of the OSPB standards and other security-related guidance for residences can complicate RSOs' efforts to identify and apply the appropriate security measures.

Of the 38 residences we reviewed that did not meet all of the applicable mandatory standards, DS had an exception on file for only 1. In some cases, such as residences with doors lacking deadbolt locks or peepholes, DS officials told us that relatively little effort or funding would be needed to bring the residence into compliance with the standards and thereby eliminate the need for an exception. However, many of the 38 residences had more significant vulnerabilities. For instance, as noted earlier, 3 of the 4 off-compound principal officer residences we visited did not meet the standard that all accessible windows must have grilles when such residences are located at posts rated high for political violence or crime; DS did not have exceptions for any of the 3. Likewise, 5 of the 7 off-compound apartments acquired after July 1, 2014—and therefore subject to the new May 2014 standards—did not have perimeter barriers surrounding them as required by the standards; none of the 5 had exceptions.

We found that required exceptions for residences were missing for three key reasons.
First, posts do not always request exceptions when they are needed. For example, of the 3 posts where off-compound principal officer residences lacked some window grilles, none had requested exceptions to this unmet standard. DS headquarters officials told us that State's guidance for posts on how to request residential security exceptions has historically been limited and vague, which may explain why posts do not always request exceptions when they are needed. In cases where residences acquired since July 1, 2014, did not meet all of the standards issued in May 2014, the lack of exceptions is understandable, given that the posts acquired the residences before being notified of the new standards in October 2014. In other cases, though, the residences in question had been acquired several years or even decades prior, and the standards we found they were not meeting had also been in existence for years.

Second, until recently, State guidance on exceptions did not clearly identify the roles and responsibilities of key offices involved in managing the exception process, leading to confusion within DS headquarters and potentially at posts as well. In 2007, State established FAM guidance identifying an office within DS's Directorate for International Programs (DS/IP) as the office responsible for managing the residential security exception process. DS officials explained that, in practice, a different office within DS's Directorate for Countermeasures (DS/C) has handled residential exception requests since the late 1980s and, because of limited staffing in DS/IP, continued to do so even after State guidance named DS/IP as the responsible office in 2007. However, State did not revise the FAM guidance to identify DS/C as the responsible office. In August 2014, DS provided us with a written response stating that, as of that date, it had not received any requests for residential security exceptions. We later learned that the written response was drafted by DS/IP, which was unaware that DS/C had been receiving and processing exception requests for residences since the late 1980s. Since that time, the two offices have clarified their roles and responsibilities with respect to residential security exceptions. Specifically, officials told us in April 2015 that DS/C is now handling all exception requests related to setback, while DS/IP is handling all other exception requests.

Third, weaknesses exist in DS's tracking of exceptions. The FAM states that State documentation should be complete to the extent necessary to facilitate decision making. However, we found weaknesses that raise questions about the completeness of DS's documentation on exceptions. For example, a list DS provided in response to our request for data on all residential security exceptions shows only exceptions to the setback standard. As noted earlier, DS had an exception on file for 1 of the 38 residences we reviewed that did not meet all of the applicable mandatory standards. In reviewing the exception package for that residence, we noted that the post requested, and received approval for, exceptions to four different standards. The exception package also lists mitigation actions the post plans to take. DS's list mentions that an exception was granted to the setback standard, but it does not mention any of the other three standards to which DS granted exceptions or the planned mitigation actions.
In addition, while the list DS provided identifies the post that requested each exception, it does not identify the specific residence for which the exception was requested. DS officials stated that in order to identify the specific residence for which a given exception was requested and approved, they would have to locate the paper copy of the exception package, a potentially time-consuming task given that, according to DS officials, the paper copies of residential exception packages are commingled with thousands of exception packages for office facilities dating back as far as 1986. Because of these weaknesses, DS's list has limited utility in helping DS and posts understand which residences may have security vulnerabilities stemming from unmet standards.

While DS is taking steps to improve its guidance and tracking for exceptions, it is unclear whether the planned improvements will fully address the factors that have contributed to missing exceptions. DS officials told us that as part of their ongoing update to the Residential Security Handbook, they will be providing additional guidance for RSOs on how to submit requests for exceptions. Since that initiative is still in development, it is too early to assess its effectiveness. Additionally, DS/C officials told us that they have begun converting paper copies of exception packages they have processed to electronic form—a task they estimate will take about 5 months—and both DS/IP and DS/C have developed databases to record exception requests. As noted earlier, the FAM calls for State documentation to be complete to the extent necessary to facilitate decision making; however, neither office currently plans for its database to include all of the exceptions processed by the other office. Consequently, DS may lack a complete picture of all the residential exceptions it has processed, valuable information that could help it better understand the types of security vulnerabilities at residences and thus make better informed risk management decisions. The sketch following this paragraph illustrates the kind of consolidated record that could provide such a picture.
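As a minimal sketch, assuming hypothetical field names and no connection to either office's actual database, the following Python fragment shows one possible shape for a shared exception record and a merged view across the two offices.

    from dataclasses import dataclass, field
    from datetime import date
    from typing import List

    @dataclass
    class ExceptionRecord:
        """One approved exception to an OSPB residential security standard."""
        post: str                   # the requesting post
        residence_id: str           # the specific residence, which DS's
                                    # current list omits
        standard: str               # the specific unmet standard, not
                                    # just setback
        processing_office: str      # "DS/C" (setback) or "DS/IP" (all others)
        approved: date
        mitigation_actions: List[str] = field(default_factory=list)

    def consolidated_view(ds_c_records, ds_ip_records):
        """Merge both offices' records into a single sorted picture."""
        return sorted(list(ds_c_records) + list(ds_ip_records),
                      key=lambda r: (r.post, r.residence_id))

Keying records to individual residences and specific standards, rather than to posts and the setback standard alone, would let DS see which residences carry unmitigated risk without retrieving paper exception packages.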
In addition to reviewing residences against the mandatory security standards applicable to them at the time of our visit, we also reviewed them using the new May 2014 standards in order to assess the impact these standards will have on posts, DS headquarters, and future State funding requests. As discussed earlier, by July 2017, all off-compound residences will be required to meet the new standards to address terrorism. If the residences do not meet the standards and cannot be upgraded to meet them, posts will be required to apply for exceptions. DS officials indicated that the new standards are more rigorous than the previous ones and that many existing residences may be unable to meet them. Overall, 55 of the 56 off-compound residences we reviewed did not meet all of the new standards, including 49 residences that did not meet two or more. As a result, DS officials expect the need for posts to apply for exceptions to increase. For example, officials at one post told us that they do not believe it will be possible for any of the approximately 300 apartments occupied by U.S. personnel at that post—or for other apartments available at their post—to meet the new mandatory standard of a perimeter barrier surrounding the building.

In other cases, it will be possible to upgrade existing residences to meet the new standards, but the associated costs may be considerable. For instance, RSO officials at the post with about 300 apartments stated that they will need to install alarms and residential safe havens to meet the new standards, at a cost of approximately $3,000 per residence, or on the order of $900,000 across the post's roughly 300 apartments. They added that it is unlikely that current landlords will agree to pay for these upgrades. The 2014 Security Environment Threat List indicates that nearly 45 of the 275 U.S. diplomatic posts worldwide have higher threat ratings for terrorism than for political violence and crime and thus will likely have to adopt additional security measures—potentially at significant cost to the U.S. government—in order to meet the new standards to address terrorism. Furthermore, additional upgrades—and thus additional funding—will likely be needed as posts take steps to apply the new standards for existing on-compound housing, which, according to DS officials, are also more rigorous than the standards that preceded them.

DS headquarters officials told us that because of our inquiries about the implications of the new standards to address terrorism, such as the financial cost of meeting them, they had decided to conduct a survey of posts to assess the extent to which each one currently meets the new standards. According to these officials, the results of the survey will help them estimate the cost of upgrades needed to meet the new standards and will provide them with valuable information on the types of vulnerabilities that currently exist at residences. They stated that they anticipate the survey may also help them determine how much time it will take to review and process posts' requests for exceptions. DS officials added that they plan to send the survey to posts in May 2015.

State's efforts to protect U.S. personnel and their families at schools and other soft targets include funding physical security upgrades, providing threat information and security-related advice, and conducting security surveys of various soft target facilities. However, RSOs at most of the accompanied posts we visited were unaware of some guidance and tools for securing these facilities. As a result, State may not be taking full advantage of its programs and resources for managing risks at schools and other soft targets.

State has taken a variety of actions to manage risks to schools and other soft targets. These actions fall into three main categories: (1) funding security upgrades at K-12 schools with enrolled U.S. government dependents and at off-compound employee association facilities, (2) sharing threat information and providing advice for mitigating threats at schools and other soft targets, and (3) conducting security surveys to identify and manage risks to schools and other soft targets.

First, in 2003, State developed a multiphase initiative known as the Soft Target Program to protect U.S. personnel and their families at schools and off-compound employee association facilities. With respect to schools, the Soft Target Program funds physical security upgrades at existing K-12 schools with enrolled U.S. government dependents. Program funding was initially limited to “American-sponsored” schools that receive State assistance. Subsequently, eligibility for program funding was expanded to non-American-sponsored schools that enroll U.S. dependents or are expected to enroll such students within the next 6 months. In fiscal years 2010 through 2015, State allocated almost $28 million for security upgrades through the Soft Target Program (see table 3).
State has used approximately $23.8 million of these allocations to award almost 400 grants for specific upgrades. Of these awards, approximately 97 percent ($23.1 million of $23.8 million) went to schools, with the remaining 3 percent provided for off-compound employee association facilities. According to State documentation, since the Soft Target Program began in 2003, State has provided schools with more than $100 million for physical security upgrades. American-sponsored schools received $63 million of this funding; other schools received the remaining $38 million. At the posts we visited, RSOs had worked with schools to identify physical security needs and obtain funding for upgrades such as walls, guard booths, public address systems, and window grilles. In addition to school security upgrades, we also saw Soft Target Program-funded security upgrades at employee association facilities, such as closed-circuit television systems, perimeter walls, and access control systems. Overall, RSOs and school administrators told us they were pleased with the upgrades. However, State does not operate or control the schools eligible for Soft Target Program upgrades; as a result, the extent to which eligible schools cooperate with posts on security-related issues depends on the willingness of the schools' administrators.

Second, State officials help manage risks to schools and other soft targets by sharing threat information and providing advice on how to mitigate such threats. RSO outreach to schools at the posts we visited included sharing information related to specific threats, such as apprising school administrators about the recent posting on a jihadist website calling for attacks on western-affiliated teachers and schools in the Middle East. In addition, at one post, architects designing a new American-sponsored school cited security advice provided by the RSO as the impetus for them to undertake a more security-conscious redesign. RSOs at all the accompanied posts we visited stated that they also share local threat information with schools and others outside the U.S. government. Additionally, RSOs we met with told us that they communicate with security officials at British, Canadian, and other embassies in the area they cover to help deter attacks on soft targets by raising overall threat awareness.

Third, RSOs conduct security surveys to help identify and manage risks to schools and other soft targets. Although State does not control schools, hotels, or hospitals overseas, RSOs at all the accompanied posts we visited had conducted security surveys of such facilities. State is currently developing security standards for off-compound employee association facilities in response to our June 2014 recommendation that State develop physical security standards for facilities not covered by existing standards.

Federal internal control standards call for agencies to communicate the information necessary for conducting the agency's operations to those within the entity responsible for carrying out these activities, in a form and time frame that allows them to carry out their responsibilities.
State has established guidance for RSOs regarding the security of schools and other soft targets, as well as tools to assist RSOs' security-related outreach to schools. However, half of the RSOs we met with at the six accompanied posts stated that the only guidance or tool they were aware of with respect to schools and soft targets was a cable with information on the types of items that can be funded through the Soft Target Program. One post was using an outdated version of this cable. Additionally, two RSOs described the cable as lacking sufficient detail on the specific types of upgrades allowed and disallowed. Additional Soft Target Program guidance does exist in the FAM—including details on grant eligibility and the roles, responsibilities, procedures, and requirements related to project development and implementation—but the relevant FAM subchapter was not mentioned in any of the Soft Target Program cables we reviewed, and no RSOs cited that FAM subchapter as a source of guidance with which they were familiar. OBO officials stated that they believe the Soft Target Program cable provides RSOs with the necessary level of information but noted that they plan to issue an updated version of the cable with additional detail. They also stated that they see value in mentioning the associated FAM subchapter in the updated cable.

With respect to tools, in 2008, State's Office of Overseas Schools, DS, and OBO published a booklet—Security Guide for International Schools—and an accompanying CD to assist international schools in designing and implementing a security program. These are tools that RSOs can offer to all schools, including those otherwise ineligible for upgrades funded by the Soft Target Program. However, RSOs at the majority of the posts we visited were unaware of this security guide; after we brought it to their attention, some stated that they planned to share it with schools at their posts. Because of this limited awareness of the guidance and tools for securing schools and other soft targets, State may not be fully leveraging its existing programs and resources to address their security needs.

Thousands of U.S. diplomatic personnel and their families live in an overseas environment that presents myriad security threats and challenges. While State has taken significant measures to enhance security at its embassies and consulates since the 1998 East Africa embassy bombings, these same actions have given rise to concerns that would-be attackers may shift their focus to what they perceive as more accessible targets, such as diplomatic residences, schools, and other places frequented by U.S. personnel and their families. We found that State has taken various steps to address threats to residences and other soft targets. For instance, over the last 6 fiscal years, State allocated nearly $200 million for security upgrades for residences, schools, and off-compound employee association facilities, and it has also made efforts to modernize the physical security standards that residences must meet. However, we found vulnerabilities at many of the residences we reviewed and a number of gaps or weaknesses in State's implementation of its risk management activities. For example, posts do not always complete residential security surveys as required, exceptions are missing for many residences that require them, and DS's tracking of exceptions is fragmented between two offices. As a result, State lacks full awareness of the vulnerabilities that exist at residences.
Similarly, State's physical security standards and security-related guidance for residences are difficult to use, and awareness of its guidance and tools for schools and other types of soft targets is limited. Each of these issues is problematic on its own, but taken together, they raise serious questions about State's ability to make timely and informed risk management decisions about soft targets. Until it addresses these issues, State cannot be assured that the most effective security measures are in place at a time when U.S. personnel and their families are facing ever-increasing threats to their safety and security.

To enhance State's efforts to manage risks to residences, schools, and other soft targets overseas, we recommend that the Secretary of State direct DS to take the following five actions:

1. Institute procedures to improve posts' compliance with requirements for conducting residential security surveys.

2. Take steps to clarify existing standards and security-related guidance for residences. For example, DS could conduct a comprehensive review of its various standards and security-related guidance for residences and take steps to identify and eliminate gaps and inconsistencies.

3. Develop procedures for ensuring that all residences at posts overseas either meet applicable standards or have required exceptions on file.

4. Ensure that DS/IP and DS/C share information with each other on the exceptions they have processed to help DS establish a complete picture of all residential security exceptions on file.

5. Take steps in consultation with OBO to ensure that RSOs are aware of existing guidance and tools regarding the security of schools and other soft targets. For example, DS and OBO could modify the Soft Target Program cable to reference the associated FAM subchapter.

We provided a draft of this report for review and comment to State and the U.S. Agency for International Development. We received written comments from State, which are reprinted in appendix II. State agreed with all five of our recommendations and highlighted a number of actions it is taking or plans to take to implement the recommendations. The U.S. Agency for International Development did not provide written comments on the report.

We are sending copies of this report to the appropriate congressional committees, the Secretary of State, and the Administrator for the U.S. Agency for International Development. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8980 or courtsm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III.

The objectives of our report were to evaluate (1) how the Department of State (State) assesses risks to residences overseas; (2) the timeliness, clarity, and consistency of State's security standards for residences; (3) how State addresses security vulnerabilities at residences; and (4) how State manages risks to other soft targets overseas. To address these objectives, we reviewed U.S.
laws; relevant State security policies and procedures as found in cables, the Foreign Affairs Manual (FAM), and the Foreign Affairs Handbooks (FAH)—in particular, the Residential Security Handbook, Physical Security Handbook, Overseas Security Policy Board (OSPB) standards, and information and guidance related to State's residential security program and Soft Target Program; the Bureau of Diplomatic Security's (DS) threat and risk ratings, periodic assessments of post security programs, and residential security exceptions; post-specific documents pertaining to security of residences and other soft targets; classified Accountability Review Board reports and Interagency Security Assessment Team recommendations; and past GAO, State Office of Inspector General, and Congressional Research Service reports. We assessed DS's risk management practices against its own policies and standards, best practices identified by GAO, and federal internal control standards.

Additionally, we reviewed and compared residential security standards and other security-related guidance within the FAM and FAH to evaluate their clarity and consistency. We identified gaps and inconsistencies in relevant standards and security-related guidance for residences in the course of (1) conducting an analysis of the OSPB standards, the Residential Security Handbook, and the Physical Security Handbook to develop our facility review checklists and (2) discussing the guidance with knowledgeable State officials. Because it was beyond the scope of this engagement to systematically review all residential security standards and related guidance for gaps and inconsistencies, we cannot generalize our findings to all standards and security-related guidance for residences.

We also evaluated the timeliness of updates to OSPB residential security standards. To do so, we asked State to identify all updates to the residential security standards since 2005 and to provide information about when the updates started and were completed. We then analyzed how long each update took and compared that against State officials' expectation of how long such updates should take (a simplified sketch of this calculation appears below).

In addition to reviewing the documents above, we interviewed officials in Washington, D.C., from DS; the Bureau of Overseas Buildings Operations (OBO); State's Office of Management Policy, Rightsizing, and Innovation; and the U.S. Agency for International Development.

We also traveled to 7 posts and conducted work focused on 3 other posts. Our judgmental sample of 10 posts included nine countries in four of State's six geographic regions—Africa, the Near East, South and Central Asia, and the Western Hemisphere. Each of the 10 posts was rated by DS as having a high or critical threat level in one or more of the Security Environment Threat List categories of political violence, terrorism, and crime. Additionally, all but 1 of the 10 posts we selected were within the top 75 posts rated by DS as the highest risk worldwide; of the 10, 7 were within the top 50, and 4 were within the top 25. For security reasons, we are not naming the 10 posts in our judgmental sample. Our findings from these posts are not generalizable to all posts. Moreover, our judgmental sample of high-threat, high-risk posts cannot be generalized to other high-threat, high-risk posts.
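To illustrate the update-timeliness analysis described above, the following is a minimal Python sketch of the duration comparison. The update names and dates are hypothetical placeholders rather than State's actual records; the 75-day figure reflects State officials' stated expectation for how long such an update should take.

```python
from datetime import date

# Sketch of the update-timeliness analysis. Update names and dates are
# hypothetical placeholders, not State's actual records.
EXPECTED_DAYS = 75  # State officials' stated expectation for an update

updates = [
    ("Update 1", date(2005, 6, 1), date(2009, 1, 15)),
    ("Update 2", date(2008, 3, 1), date(2011, 8, 30)),
    ("Update 3", date(2011, 1, 10), date(2014, 5, 20)),
]

for name, started, completed in updates:
    elapsed = (completed - started).days  # calendar days from start to completion
    print(f"{name}: {elapsed} days, {elapsed - EXPECTED_DAYS:+d} versus the "
          f"{EXPECTED_DAYS}-day expectation")
```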
For 2 of the 3 posts in our judgmental sample that we did not visit, we reviewed residential security surveys and spoke with a Regional Security Officer (RSO) regarding residential security measures in place at the posts. For the third post we did not visit, we requested RSO input on recent security upgrades and any remaining vulnerabilities at soft target facilities at post.

At the 7 posts we visited, we met with U.S. government officials from State and other agencies involved in securing residences and other soft targets—including RSOs, general services officers, financial management officers, facility managers, and members of post Emergency Action Committees and Interagency Housing Boards—to understand their respective roles related to security of residences and other soft targets and their perspectives on State's security policies and procedures for these facilities.

We also requested residential security surveys for all 68 residences in our judgmental selection. We evaluated the posts' records of these surveys using the residential security survey requirements outlined in the FAM. Additionally, we reviewed security measures in place at 10 schools attended by U.S. government dependents and 3 off-compound employee association facilities. Of the 10 schools, 6 were "American-sponsored" schools that receive State assistance; the other 4 do not receive State assistance but enroll U.S. government dependents.

At the 7 posts we visited, we also evaluated a judgmental selection of 68 residences against applicable security standards. To do so, we first asked DS officials to identify all sections of the FAH that contain residential security standards. Based on DS's input, we reviewed all of the identified sections and developed checklists of the residential security standards applicable to each of the following residence types: on-compound housing, off-compound apartments, off-compound single family homes, off-compound residential compounds, off-compound Marine Security Guard residences, and off-compound residences for principal officers. The checklist for each residence type included the standards applicable to it as of our fall 2014 visit, and the checklists for off-compound residences also included the new May 2014 standards to address terrorism, which all off-compound residences have to meet by July 2017. As noted earlier in this report, standards also vary by date of construction or acquisition and threat level. We included these variations in each checklist so that for each residence we visited, we could apply the exact standards applicable to it based on its type, its date of construction or acquisition, and the post threat level.

Our checklists included mandatory standards worded in terms of measures that "must" be taken as well as other standards, such as measures that are "recommended" or that "should" or will "ideally" be taken, among others. While we used all of these standards to review the residences we visited, the analysis presented in this report only includes mandatory standards worded in terms of measures that "must" be taken. We did not include other standards in our analysis because, as discussed earlier in this report, some of the terminology used in those standards is inconsistent, making it difficult to determine if a given residence is in compliance or not.

With regard to the specific 68 residences reviewed, we evaluated the principal officer's residence at 6 of 7 posts and the Marine Security Guard residence at 5 of 7 posts.
The remaining 57 residences in our judgmental selection represented a mix of different types of residences (such as apartments and single family homes), on-compound and off-compound residences, owned and leased residences, older and newer residences, and residences occupied by State officials and non-State officials. Using the checklists that we developed, we reviewed each of the 68 residences against the mandatory standards applicable to it as of our fall 2014 visit; in addition, we reviewed each of the 56 off-compound residences against the mandatory standards in the new May 2014 standards to address terrorism.

After completing all 68 checklists, we categorized the mandatory standards into six general categories, which we developed on the basis of our professional judgment as well as our review of the six general categories of security standards presented in our June 2014 reporting on the security of diplomatic work facilities overseas. Each category included one or more mandatory standards. For example, the category of secure building exteriors included mandatory standards calling for features such as lighting; substantial or grilled doors with peepholes and deadbolt locks; and grilles, locks, and shatter-resistant film on accessible windows. For the purposes of our analysis, if a residence did not meet one or more of the mandatory standards in a given category, we classified the residence as not meeting all mandatory standards in that category. We used this methodology to calculate the number of residences that did not meet all mandatory security standards applicable to them as of fall 2014 within each security standard category. We used the same methodology to calculate the number of off-compound residences that did not meet all mandatory standards within each category of the new May 2014 standards to address terrorism.
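To make the classification and tally described above concrete, the following is a minimal Python sketch under simplifying assumptions. The standard names, categories, and residence attributes are hypothetical placeholders, not actual FAH standards or GAO data, and real applicability also depended on date of construction or acquisition and post threat level.

```python
from dataclasses import dataclass, field

# Sketch of the per-category compliance tally. All names below are
# hypothetical placeholders, not actual FAH standards or GAO data.

@dataclass
class Standard:
    name: str
    category: str    # one of the six general categories
    mandatory: bool  # True only for standards worded as "must"

@dataclass
class Residence:
    ident: str
    res_type: str                           # e.g., "off-compound apartment"
    met: set = field(default_factory=set)   # names of standards the residence met

# Checklist per residence type; the real checklists also varied by date of
# construction or acquisition and by post threat level.
CHECKLISTS = {
    "off-compound apartment": [
        Standard("exterior lighting", "secure building exteriors", True),
        Standard("deadbolt locks", "secure building exteriors", True),
        Standard("perimeter barrier", "perimeter barriers", True),
        Standard("shatter-resistant film", "secure building exteriors", False),
    ],
}

def tally_noncompliance(residences):
    """Count residences failing at least one mandatory standard, by category."""
    counts = {}
    for r in residences:
        mandatory = [s for s in CHECKLISTS[r.res_type] if s.mandatory]
        # A residence failing any mandatory standard in a category is
        # classified as not meeting all mandatory standards in that category.
        for category in {s.category for s in mandatory if s.name not in r.met}:
            counts[category] = counts.get(category, 0) + 1
    return counts

sample = [Residence("R1", "off-compound apartment", {"exterior lighting"})]
print(tally_noncompliance(sample))
# R1 fails "deadbolt locks" and "perimeter barrier", so each of the two
# categories is counted once; the non-mandatory standard is ignored.
```

Counting a category as failed on the first unmet "must" standard mirrors the rule described above, under which partial compliance within a category is still classified as not meeting all mandatory standards in that category.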
To determine the reliability of the data we collected on overseas residences and funding for security upgrades to residences, schools attended by children of U.S. government personnel, and off-compound facilities of employee associations, we compared information from multiple sources, checked the data for reasonableness, and interviewed cognizant officials regarding the processes they use to collect and track the data. We evaluated the reliability of OBO's data on overseas residential properties by comparing records for specific residences from OBO's system with information we collected during site visits to these residences and discussions with OBO and post officials. We evaluated the reliability of the funding data we collected by comparing the data against prior GAO reporting and by interviewing DS and OBO officials familiar with State's financial management system to ask how the data are tracked and checked for accuracy. On the basis of these checks, we determined that the data we collected on overseas residences and funding were sufficiently reliable for the purposes of this engagement.

We conducted this performance audit from July 2014 to June 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the contact named above, Thomas Costa (Assistant Director), Joshua Akery, Amanda Bartine, John Bauckman, Tina Cheng, Aniruddha Dasgupta, David Dayton, Martin De Alteriis, Jonathan Fremont, Grace Lui, and Candice Wright made key contributions to this report.
Since the 1998 East Africa bombings, U.S. diplomatic personnel working overseas have faced increasing threats to their safety and security. State has built many new embassies and consulates since 1998 and enhanced security measures at others. Increased security at such facilities has raised concerns that residences, schools, and other places where U.S. diplomatic personnel and their families congregate may be viewed by terrorists as more attractive "soft targets." GAO was asked to review the security of residences and other soft targets overseas.

GAO evaluated (1) how State assesses risks to U.S. diplomatic residences overseas; (2) the timeliness, clarity, and consistency of residential security standards; (3) how State addresses security vulnerabilities at residences; and (4) how State manages risks to other soft targets. GAO reviewed agency documents; met with officials in Washington, D.C.; and conducted fieldwork at a judgmental sample of seven higher-threat, higher-risk posts in four of State's six geographic regions. This is the public version of a sensitive but unclassified report issued in June 2015.

The Department of State (State) conducts a range of activities to assess risks to residences overseas. For instance, State tracks information on overseas residences in its property database, establishes threat levels at overseas posts, develops security standards for different types of residences and threat levels, and requires posts to periodically conduct residential security surveys. However, 17 of the 68 surveys for residences GAO reviewed were untimely or missing. Without up-to-date security surveys of all its overseas residences, State's ability to identify and address vulnerabilities or make informed decisions about where to allocate resources for security upgrades is limited.

State has taken steps to update its residential security standards; however, these updates have not been timely, and the standards are difficult to use. According to State officials, updating residential security standards should take about 75 days, but all three updates since 2005 took more than 3 years each. State is making efforts to improve the timeliness of such updates in response to a prior GAO recommendation. In addition, while federal internal control standards state that policy standards should be clear and consistent to support good decision making, State's standards and other security-related guidance for residences have gaps and inconsistencies, complicating posts' efforts to determine and apply the appropriate security measures and potentially leaving some residences at risk.

State addresses security vulnerabilities at residences by installing various upgrades intended to help residences meet security standards, but 38 of the 68 residences GAO reviewed did not meet all applicable standards. For example, 8 residences did not meet the standards for perimeter barriers. When residences do not and cannot meet all applicable security standards, posts are required to request exceptions, which identify steps the posts will take to mitigate vulnerabilities. However, State had an exception on file for only 1 of the 38 residences that did not meet all applicable standards. As a result, State lacks key information that could provide it with a clearer picture of security vulnerabilities at residences and enable it to make better risk management decisions.
State manages risks to schools and other soft targets overseas in several ways, but its efforts may be constrained by limited awareness of relevant guidance and tools. In fiscal years 2010 through 2015, State awarded almost 400 grants in total for security upgrades at schools and other soft targets. While federal internal control standards call for timely communication of relevant information to staff responsible for program objectives, officials at most of the posts GAO visited were unaware of some guidance and tools for securing schools and other soft targets. As a result, State may not be fully leveraging existing programs and resources for addressing security needs at these facilities.

GAO recommends that State, among other things, institute procedures to ensure residential security surveys are completed as required, clarify its standards and security-related guidance for residences, develop procedures to ensure residences either meet standards or have exceptions on file, and take steps to ensure posts are aware of existing guidance and tools regarding the security of schools and other soft targets. State concurred with all of GAO's recommendations.
In response to global challenges the government faces in the coming years, we have a unique opportunity to create an extremely effective and performance-based organization that can strengthen the nation's ability to protect its borders and citizens against terrorism. There is likely to be considerable benefit over time from restructuring some of the homeland security functions, including reducing risk and improving the economy, efficiency and effectiveness of these consolidated agencies and programs. Realistically, however, in the short term, the magnitude of the challenges that the new department faces will clearly require substantial time and effort, and will take additional resources to make it fully effective.

Numerous complicated issues will need to be resolved in the short term, including a harmonization of information technology systems, human capital systems, the physical location of people and other assets, and many other factors. Implementation of the new department will be an extremely complex task and will ultimately take years to achieve. Given the magnitude of the endeavor, not everything can be achieved at the same time. As a result, it will be important for the new department to focus on a handful of important things, such as articulating a clear overarching mission and core values, developing a national strategy, utilizing strategic planning to establish desired outcomes and key priorities, and assuring effective communications systems. Further, effective performance and risk management systems must be established, and work must be completed on threat and vulnerability assessments.

GAO and other observers of the federal government's organization, performance and accountability for terrorism and homeland security functions have long recognized the prevalence of gaps, duplication and overlaps driven in large part by the absence of a central policy focal point, fragmented missions, ineffective information sharing, and institutional rivalries. In recent years, GAO has made numerous recommendations related to changes necessary for improving the government's response to combating terrorism. Prior to the establishment of OHS, GAO found that the federal government lacked overall homeland security leadership and management accountable to both the President and Congress. GAO has also stated that fragmentation exists both in coordination of domestic preparedness programs and in efforts to develop a national strategy.

Based on evaluations prior to September 11th, GAO identified the following five actions to improve programs to combat terrorism:

Create a single high-level federal focal point for policy and coordination;
Develop a comprehensive threat and risk assessment;
Develop a national strategy with a defined end state to measure progress;
Analyze and prioritize governmentwide programs and budgets to identify gaps and reduce duplication of effort; and
Coordinate implementation among the different federal agencies.

Moreover, in a recent report to Congress on initial concerns about organizing for homeland security since September 11th, GAO indicated that a definition of homeland security should be developed, preferably in the context of the Administration's issuance of a national strategy for homeland security, in order to improve the effectiveness and coordination of relevant programs.
The recent and ongoing actions of the Administration to strengthen homeland security functions, including the proposal for establishing DHS, should not be considered a substitute for, nor should they supplant, the timely issuance of a national homeland security strategy. Based on our prior work, GAO believes that the consolidation of some homeland security functions makes sense and will, if properly organized and implemented, over time lead to more efficient, effective and coordinated programs, better intelligence sharing, and a more robust protection of our people, borders and critical infrastructure. At the same time, the proposed cabinet department, even with its multiple missions, will still be just one of many players with important roles and responsibilities for ensuring homeland security. At the federal level, homeland security missions will require the involvement of the CIA, the FBI, the U.S. Marshals Service, the Department of Defense (DOD), and a myriad of other agencies. State and local governments, including law enforcement and first responder personnel, and the private sector all have critical roles to play. If anything, the multiplicity of players only reinforces the recommendations that GAO has made in the past regarding the urgent need for a comprehensive threat, risk and vulnerability assessment and a national homeland security strategy that can provide direction and utility at all levels of government and across all sectors of the country.

The development and implementation of a national strategy for homeland security is vital to effectively leveraging and coordinating the country's assets, at a national rather than federal level, to prevent and defend against future terrorist acts. A national homeland security strategy can help define and establish a clear role and need for homeland security and its operational components, create specific expectations for performance and accountability, and build a framework for partnerships that will support the critical role of coordination, communication and collaboration among all relevant parties and stakeholders with homeland security missions. DHS will clearly have a central role in the success of efforts to strengthen homeland security, but it is a role that will be made stronger within the context of a larger, more comprehensive and integrated national homeland security strategy.

A reorganization of the government's homeland security functions along the lines being proposed is a major undertaking and represents one of the largest potential reorganizations and consolidations of government agencies, personnel, programs and operations in recent history. Those involved in this transition should not underestimate the time or effort required to successfully achieve the results the nation seeks. Numerous comparisons have been made between the proposed DHS and other large-scale government reorganizations, including the creation of DOD, the Central Intelligence Agency and the National Security Council as part of the National Security Act of 1947. Other analogies include the 1953 creation of the Department of Health, Education and Welfare, the 1966 establishment of the Department of Transportation (DOT), and the 1977 creation of the Department of Energy (DOE). Each of these cabinet-level restructurings involved the transfer and consolidation of disparate functions and the creation of a new cabinet-level structure in the Executive Branch.
Often it has taken years for the consolidated functions in new departments to effectively build on their combined strengths, and it is not uncommon for these structures to remain as management challenges for decades. It is instructive to note that the creation of DOD, which arguably already had the most similar and aligned missions and functions among the reorganizations mentioned, still required Congress to make further amendments to its organization in 1949, 1953, 1958 and 1986 in order to improve its structural effectiveness. Despite these and other changes made by DOD, GAO has consistently reported over the years that the department, more than 50 years after the reorganization, continues to have a number of serious management challenges. In fact, DOD has 6 of the 22 governmentwide high-risk areas on GAO's latest list.

This note of caution is not intended to dissuade the Congress from seeking logical and important consolidations in government agencies and programs in order to improve homeland security missions. Rather, it is meant to suggest that reorganizations of government agencies frequently encounter start-up problems and unanticipated consequences resulting from the consolidations, are unlikely to fully overcome all obstacles and challenges, and may require additional modifications in the future to effectively achieve our collective goals for defending the country against terrorism.

The Congress faces a challenging and complex job in its consideration of DHS. On the one hand, there exists a certain urgency to move rapidly in order to remedy known problems relating to intelligence and information sharing and leveraging like activities; these problems have in the past prevented, and even today prevent, the United States from exercising as strong a homeland defense as emerging and potential threats warrant. Simultaneously, that same urgency of purpose would suggest that the Congress be extremely careful and deliberate in how it creates a new department for defending the country against terrorism. The urge to "do it quickly" must be balanced by an equal need to "do it right" in order to ensure a consensus on identified problems and needs, and to be sure that the solutions our government legislates and implements can effectively remedy the problems we face in a reasonably timely manner. It is clear that fixing the wrong problems, or even worse, fixing the right problems poorly, could cause more harm than good in our efforts to defend our country against terrorism.

The federal government has engaged in numerous reorganizations of agencies in our nation's history. Reorganizations have occurred at various times and for various reasons, and have been achieved through executive order, through recommendations by landmark commissions subsequently approved by the Congress, such as the Hoover Commission chaired by former President Herbert Hoover in the late 1940s, and by the Congress through its committee structure. The prevailing consensus on organizational management principles changed considerably during the course of the 20th century and through the various approaches to reorganization, but Hoover's Commission clearly articulated that agencies and functions of the executive branch should be grouped together based on their major purposes or missions. The government has not always followed Hoover's lead uniformly, but in recent years most departments, except those serving a specific clientele such as veterans, generally have been organized according to this principle.
GAO's own work on government restructuring and organization over the years has generally supported consolidations of agencies as a way to improve the economy, efficiency and effectiveness of government operations. GAO has previously recommended that reorganizations should emphasize an integrated approach, that reorganization plans should be designed to achieve specific, identifiable goals, and that careful attention to fundamental public sector management practices and principles, such as strong financial, technology and human capital management, is critical to the successful implementation of government reorganizations. Similarly, GAO has also suggested that reorganizations may be warranted based on the significance of the problems requiring resolution, as well as the extent and level of coordination and interaction necessary with other entities in order to resolve problems or achieve overall objectives.

Of course, there are many lessons to be learned from the private sector, which over the past 20 years has experienced an extraordinary degree of consolidation through the merger and acquisition of companies or business units. Among the most important lessons, besides ensuring that synergistic entities can broaden organizational strengths more than limit them, is the need to pay critical attention to the employees impacted by the reorganization, and to align the human capital strategies and core competency components of the organization in order to meet expectations and achieve results. GAO has reached similar conclusions and made similar recommendations for the federal government. These observations are particularly applicable to the proposed structure of DHS, which would combine an estimated 170,000 employees into a single department, making it the third largest government department in terms of personnel behind DOD and the Department of Veterans Affairs.

GAO, based on its own work as well as a review of other applicable studies of approaches to the organization and structure of entities, has concluded that Congress should consider utilizing specific criteria as a guide to creating and implementing the new department. Specifically, GAO has developed a framework that will help Congress and the Administration create and implement a strong and effective new cabinet department by establishing criteria to be considered for constructing the department itself, determining which agencies should be included and excluded, and leveraging numerous key management and policy elements that, after completion of the revised organizational structure, will be critical to the department's success.

With respect to criteria that Congress should consider for constructing the department itself, the following questions about the overall purpose and structure of the organization should be evaluated:

Definition: Is there a clear and consistently applied definition of homeland security that will be used as a basis for organizing and managing the new department?

Statutory Basis: Are the authorities of the new department clear and complete in how they articulate roles and responsibilities, and do they sufficiently describe the department's relationship with other parties?

Clear Mission: What will the primary missions of the new DHS be, and how will it define success?

Performance-based Organization: Does the new department have the structure (e.g., a chief operating officer)
and statutory authorities (e.g., human capital, sourcing) necessary to meet performance expectations, be held accountable for results, and leverage effective management approaches for achieving its mission on a national basis?

Congress should also consider several very specific criteria in its evaluation of whether individual agencies or programs should be included in or excluded from the proposed department. Those criteria include the following:

Mission Relevancy: Is homeland security a major part of the agency or program mission? Is it the primary mission of the agency or program?

Similar Goals and Objectives: Does the agency or program being considered for the new department share primary goals and objectives with the other agencies or programs being consolidated?

Leverage Effectiveness: Does the agency or program being considered for the new department create synergy and help to leverage the effectiveness of other agencies and programs or the new department as a whole? In other words, is the whole greater than the sum of the parts?

Gains Through Consolidation: Does the agency or program being considered for the new department improve the efficiency and effectiveness of homeland security missions by eliminating duplication and overlap, closing gaps, and aligning or merging common roles and responsibilities?

Integrated Information Sharing/Coordination: Does the agency or program being considered for the new department contribute to or leverage the ability of the new department to enhance the sharing of critical information or otherwise improve the coordination of missions and activities related to homeland security?

Compatible Cultures: Can the organizational culture of the agency or program being considered for the new department effectively meld with the other entities that will be consolidated? Field structures and approaches to achieving missions vary considerably between agencies.

Impact on Excluded Agencies: What is the impact on departments losing components to DHS? What is the impact on agencies with homeland security missions left out of DHS?

In addition to the criteria that Congress should consider when evaluating what to include in and exclude from the proposed DHS, there are certain critical success factors the new department should emphasize in its initial implementation phase. GAO over the years has made observations and recommendations about many of these success factors, based on effective management of people, technology, financial and other issues, especially in its biennial Performance and Accountability Series on major government departments. These factors include the following:

Strategic Planning: Leading results-oriented organizations focus on the process of strategic planning that includes involvement of stakeholders, assessment of internal and external environments, and an alignment of activities, core processes and resources to support mission-related outcomes.

Organizational Alignment: The organization of the new department should be aligned to be consistent with the goals and objectives established in the strategic plan.

Communication: Effective communication strategies are key to any major consolidation or transformation effort.

Building Partnerships: One of the key challenges of this new department will be the development and maintenance of homeland security partnerships at all levels of the government and the private sector, both in the United States and overseas.
Performance Management: An effective performance management system fosters institutional, unit and individual accountability.

Human Capital Strategy: The new department must ensure that its homeland security missions are not adversely impacted by the government's pending human capital crisis, and that it can recruit, retain and reward a talented and motivated workforce with the required core competencies to achieve its mission and objectives. The people factor is a critical element in any major consolidation or transformation.

Information Management and Technology: The new department should leverage state-of-the-art enabling technology to enhance its ability to transform capabilities and capacities to share and act upon timely, quality information about terrorist threats.

Knowledge Management: The new department must ensure it makes maximum use of the collective body of knowledge that will be brought together in the consolidation.

Financial Management: The new department has a stewardship obligation to prevent fraud, waste and abuse, to use tax dollars appropriately, and to ensure financial accountability to the President, Congress and the American people.

Acquisition Management: Anticipated to be one of the largest new federal departments, the proposed DHS will potentially have one of the most extensive acquisition requirements in government. Early attention to strong systems and controls for acquisition and related business processes will be critical both to ensuring success and to maintaining integrity and accountability.

Risk Management: The new department must be able to maintain and enhance current states of homeland security readiness while transitioning and transforming itself into a more effective and efficient structural unit. The proposed DHS will also need to immediately improve the government's overall ability to perform risk management activities that can help to prevent, defend against and respond to terrorist acts.

In the years prior to the terrorist attacks of September 11th, the United States had made what must be characterized as limited progress in strengthening its efforts to protect the nation from terrorist attacks. Mainly through the mechanisms of executive orders and presidential decision directives (PDD), the President has sought to provide greater clarity and leadership in homeland security areas. For instance, PDD 39 in June 1995 assigned the Department of Justice, through the FBI, responsibility as the lead federal agency for crisis management, and FEMA as the lead federal agency for consequence management for domestic terrorist attacks. In May 1998, PDD 62 established the position of national coordinator for terrorism within the National Security Council. PDD 63 emphasized new efforts to protect the nation's critical infrastructure from attack. Through legislation, the federal government increased the availability of grants for first responder training and instituted more regular tabletop training exercises involving state and local governments. A number of blue ribbon panels or commissions were also convened prior to September 11th and, after studying the government's structure and methods for protecting against terrorism, made many important and timely recommendations for improving our approach.
Panels led by former Senators Gary Hart and Warren Rudman, as well as former Virginia Governor James Gilmore, made sweeping recommendations about remedying the gaps, overlaps and coordination problems in the government's ability to detect, prevent, and respond to terrorist attacks in a comprehensive manner across both the public and private sectors. Indeed, the Hart-Rudman Commission recommended the creation of a new department to consolidate many of the government's homeland security functions.

In recent years, GAO has also issued numerous reports and made many recommendations designed to improve the nation's approach to homeland security. We summarized our work in a report completed just prior to the September 11th attacks, in which we found that: (1) overall leadership and coordination needed to be addressed; (2) limited progress had been made in developing a national strategy and related guidance and plans; (3) federal response capabilities had improved but further action was still necessary; (4) federal assistance to state and local governments could be consolidated; and (5) limited progress had been made in implementing a strategy to counter computer-based threats. We have continued to reiterate that a central focal point such as OHS should be established statutorily in order to coordinate and oversee homeland security policy within a national framework. Today, we re-emphasize the need for OHS to be established statutorily in order to effectively coordinate activities beyond the scope of the proposed DHS and to assure reasonable congressional oversight.

As mentioned previously, after the September 11th terrorist attacks, Congress and the Administration took a number of actions designed to improve our ability to combat terrorism and protect the nation. The President created OHS via executive order. Congress passed legislation creating the Transportation Security Administration (TSA) to better secure transportation and the USA Patriot Act to improve our capabilities to detect and prevent terrorist acts. Congress also introduced legislation to restructure a variety of homeland security-related functions, and Senator Lieberman and Representative Thornberry proposed legislation to create a new cabinet department to consolidate many homeland security functions. On June 6th, President Bush announced a new proposal to create a Department of Homeland Security and submitted draft legislation to Congress on June 18th. Like the congressional approaches to creation of a new department, the President's plan also reflected many of the recent commissions' suggestions and GAO's recommendations for improved coordination and consolidation of homeland security functions.

As indicated by Governor Ridge in his recent testimony before Congress, the creation of DHS would empower a single cabinet official whose primary mission is to protect the American homeland from terrorism, including: (1) preventing terrorist attacks within the United States; (2) reducing America's vulnerability to terrorism; and (3) minimizing the damage and recovering from attacks that do occur.

In our initial review of the proposed DHS, we have used the President's draft bill of June 18th as the basis of our comments. Nevertheless, we recognize that the proposal has already evolved, and will continue to evolve, in the coming days and weeks.
The President's proposal creates a cabinet department with four divisions:

Information Analysis and Infrastructure Protection;
Chemical, Biological, Radiological and Nuclear Countermeasures;
Border and Transportation Security; and
Emergency Preparedness and Response.

Additionally, the proposed DHS would be responsible for homeland security coordination with other executive branch agencies, state and local governments, the private sector and other entities. The legislation transfers to the new department intact the U.S. Secret Service and the U.S. Coast Guard. For the organizations transferred to the new department, the proposed DHS would be responsible for managing all of their functions, including non-homeland security functions. In some instances, these other responsibilities are substantial. Finally, the proposal would exempt the new department from certain authorities, including some civil service protections, the Federal Advisory Committee Act, and procurement laws, while providing authority to authorize new rules by regulation and to reprogram portions of departmental appropriations. The new department's Inspector General would be modeled on that office in the Central Intelligence Agency.

Homeland Security Missions

One of the most critical functions that the new department will have is the analysis of information and intelligence to better foresee terrorist threats to the United States. As part of its function, the Information Analysis and Infrastructure Protection division of the department would assess the vulnerability of America's key assets and critical infrastructure, including food and water systems, agriculture, health systems, emergency services, banking and finance, communications and information systems, energy (including electric, nuclear, gas and oil, and hydropower), transportation systems, and national monuments. The President's proposal seeks to transfer to the new department the FBI's National Infrastructure Protection Center (other than the computer investigations and operations center), the National Communications System of DOD, the Commerce Department's Critical Infrastructure Assurance Office, the Computer Security Division of the National Institute of Standards and Technology (NIST), the National Infrastructure Simulation and Analysis Center of DOE, and the General Services Administration's (GSA) Federal Computer Incident Response Center. The Administration has indicated that this new division would for the first time merge under one roof the capability to identify and assess threats to the homeland, map those threats against our vulnerabilities, issue timely warnings, and organize preventive or protective action to secure the homeland.

Considerable debate has ensued in recent weeks with respect to the quality and timeliness of intelligence data shared between and among relevant intelligence, law enforcement and other agencies. The proposal would provide for the new department to receive all reports and analysis related to threats of terrorism and vulnerabilities to our infrastructure and, if the President directs, information in the "raw" state that has not been analyzed. The agencies and programs included in the Administration's proposal to consolidate information analysis functions are clear contributors to the homeland security mission and, if well coordinated or consolidated, could provide greater benefits in incident reporting, analysis and warning, and the identification of critical assets.
Such a critical endeavor, however, will still require detailed planning and coordination, including a national critical infrastructure protection strategy, both inside and outside the new department, to ensure that relevant information reaches the right offices and officials who can act upon it. Furthermore, in considering this portion of the legislation, Congress ought to evaluate whether the new division as proposed, despite the provision stipulating access, will have sufficient ability to obtain all necessary information, assistance and guidance to make decisions in a timely, effective manner. Within this framework, the Congress will likely need to make trade-off decisions between concerns over access and utility of information and the concerns that some Americans may have about civil rights issues associated with any larger consolidation of domestically oriented intelligence operations.

It is also important to note that while certain cyber/critical infrastructure protection functions are proposed for transfer into DHS, a significant number of federal organizations involved in this effort will remain in their existing locations, including the Critical Infrastructure Protection Board, the Joint Task Force for Computer Network Operations, and the Computer Investigations and Operations Section of the FBI. The homeland security proposal is silent on the relationship between those entities that will be consolidated and their role in coordinating with the entities left out of the new department, and Congress should consider addressing this important issue. Ultimately, a greater emphasis on strategic planning and information sharing clearly will be necessary to resolve the significant shortfalls that the government has faced in sharing critical intelligence and infrastructure information in order to better achieve homeland security expectations. The consolidation of some intelligence functions into DHS may help solve these problems.

The division of the new department responsible for chemical, biological, radiological and nuclear countermeasures would consolidate several important scientific, research and development programs, including the select agent registration enforcement programs and activities of the Department of Health and Human Services (HHS); programs at DOE dealing with chemical and biological national security and nonproliferation supporting programs, the nuclear smuggling programs, the nuclear assessment program, and energy security and assurance activities; and life science activities of DOE's biological and environmental research program related to microbial pathogens. Also proposed for transfer are the Environmental Measurements Laboratory, portions of the Lawrence Livermore National Laboratory, the Plum Island Animal Disease Center of the Department of Agriculture (USDA), and DOD's National Bio-Weapons Defense Analysis Center, which is not yet operational.

The proposal seeks to remedy the current fragmented efforts of the government and its private sector partners to counter and protect against the threat of weapons of mass destruction. To the extent that this division would develop or coordinate the development of national policy to strengthen research and development in the areas of countermeasures to chemical, biological, radiological and nuclear weapons, such a goal conforms to previous recommendations we have made.
As with the information analysis division discussed previously, this division would also have extensive needs to coordinate with other similar programs throughout the government, programs that are not included in the new department. For example, there are civilian applications of defense-related research and development underway at the Defense Threat Reduction Agency (DTRA), and the National Institutes of Health (NIH) has some ongoing responsibility for bioterrorism research. Whether such programs ought to be considered for inclusion in the new department, or whether these issues can be coordinated simply through improved interaction, are also questions that should be considered in the larger context of the legislation.

The proposal also calls for transferring elements of the Lawrence Livermore Lab to the new department. At this point, without sufficient additional information, it is not clear what impact such a shift would have on the lab's overall research program or on the significant contract workforce that is engaged in much of its activities. Congress may also need to further explore whether the relationships the proposal would establish between the new department's secretary and the Secretary of HHS will efficiently and effectively result in the desired outcomes for civilian research, as the nature of the agreements and delegations to implement such functions is not clear. Nevertheless, despite some unresolved ambiguity, it will be important for the Congress to capture the synergy that potentially can be created by combining compatible research and development activities.

One of the larger divisions of the new department would handle Border and Transportation Security, and would include the transfer of the U.S. Customs Service, INS, the Animal and Plant Health Inspection Service (APHIS) of USDA, the Coast Guard and TSA, both from DOT, and GSA's Federal Protective Service. The proposal seeks to bring together under one department all of the border control functions, including authority over the issuance of visas, in order to consolidate operations for border controls, territorial waters and transportation systems. This effort is designed to balance prevention of terrorist activities against people, food and other goods, and transportation systems with the legitimate, rapid movement of people and commerce across borders and around the country.

Under the proposed transfer, APHIS and Plum Island (as part of the Infrastructure division) would be moved from USDA, but other units would remain. In addition, no Food and Drug Administration (FDA) food safety functions were identified for transfer. Thus, the focus appears to be on enhancing protection of livestock and crops from terrorist acts, rather than on protecting the food supply as a whole. In previous reports, GAO has described our current fragmented federal food supply safety structure and, in the absence of a single food safety agency, Congress may wish to consider whether the new department would be able to prevent, detect, and quickly respond to acts of terrorism in the food supply.

Another issue that Congress may need to consider is the organizational separation of facilities management functions and building security responsibilities contained in the Federal Protective Service's mission. Since the provision of security is a key facilities management function, security needs to be integrated into decisions about the location, design and operation of federal facilities.
Moreover, many federal agencies provide their own building security. The proposal does not address the coordination or further consolidation of such functions, and it is also silent on GSA's role in leading the Interagency Security Committee, which develops the federal government's security policies and oversees the implementation of such policies in federal facilities.

Finally, the last division, Emergency Preparedness and Response, would combine the government's various agencies and programs that provide assistance, grants, training and related help to state and local governments and to first responder personnel, and would support other federal agencies that may confront terrorist attacks, major disasters and other emergencies. The proposal would transfer to the new department the Federal Emergency Management Agency (FEMA), the Office of Domestic Preparedness and the Domestic Emergency Support Teams of the Justice Department and the National Domestic Preparedness Office of the FBI, as well as the Strategic National Stockpile and certain public health preparedness responsibilities of HHS. This consolidation would allow the secretary of the new department to oversee federal government assistance in the domestic disaster preparedness training of first responders and to coordinate the government's disaster response efforts. Although certain other disaster response functions are not specifically included in the proposed department, the DHS secretary would have the authority to call on other response assets, such as DOE's nuclear incident response teams. Additionally, Congress might wish to examine the likely impact of establishing agreements between the DHS and HHS secretaries that retain authority for the conduct of certain public health-related activities at DHS while leaving execution of those activities to HHS.

The legislation for the new department indicates that DHS, in addition to its homeland security responsibilities, will also be responsible for carrying out all other functions of the agencies and programs that are transferred to it. In fact, quite a number of the agencies proposed to be transferred to DHS have multiple functions: they have missions directly associated with homeland security and missions that are not at all related to homeland security. In our initial review of the impacted agencies, we have not found any missions that would appear to be in fundamental conflict with the department's primary mission of homeland security. However, the Congress will need to consider whether many of the non-homeland security missions of those agencies transferred to DHS will receive adequate funding, attention, visibility and support when subsumed into a department that will be under tremendous pressure to succeed in its primary mission. As important and vital as the homeland security mission is to our nation's future, the other non-homeland security missions transferred to DHS for the most part are not small or trivial responsibilities. Rather, they represent extremely important functions executed by the federal government that, absent sufficient attention, could have serious implications for their effective delivery and consequences for sectors of our economy, health and safety, research programs and other significant government functions.
Some of these responsibilities include: maritime safety and drug interdiction by the Coast Guard, collection of commercial tariffs by the Customs Service, regulation of genetically engineered plants by APHIS, advanced energy and environmental research by the Lawrence Livermore and Environmental Measurements labs, responding to floods and other natural disasters by FEMA, and authority over processing visas by the State Department's consular officers. These examples reveal that many non-homeland security missions are likely to be integrated into a cabinet department overwhelmingly dedicated to protecting the nation from terrorism. Congress may wish to consider whether the new department, as proposed, will dedicate sufficient management capacity and accountability to ensure the execution of non-homeland security missions, as well as consider potential alternatives to the current framework for handling these important functions.

Likewise, Congress may wish to consider the impact that the proposed transfer of certain agencies and programs may have on their "home" departments. Both the Department of the Treasury and DOT will see significant reductions in size and changes to their overall departmental missions, organization, and environments if the legislation is enacted. As a result, these changes provide an opportunity for Congress and the Administration to consider the proper role for these and other federal government entities. As the impact of reductions of missions and personnel is contemplated at several cabinet departments, it is appropriate for Congress to reconsider the relevance or fit of federal programs and activities.

This process requires that we ask important, yet sometimes tough, questions, such as: What is the national need? How important is it relative to other competing needs and available resources? What is the proper federal role, if any? Who are the other key players (e.g., state and local government, nongovernmental organizations, the private sector)? How should we define success (e.g., desired outcomes)? What tools of government create the best incentives for strong results (direct funding, tax incentives, guarantees, regulation, enforcement)? What does experience tell us about the effectiveness of any current related government programs? Based on the above, what programs should be reduced, terminated, started or expanded?

In fact, given the key trends identified in GAO's recent strategic plan for supporting the Congress and our long-range fiscal challenges, now is the time to ask three key questions: (1) what should the federal government do in the 21st century? (2) how should the federal government do business in the 21st century? and (3) who should do the federal government's business in the 21st century? These questions are relevant for DHS and every other federal agency and activity.

As the proposal to create DHS indicates, the terrorist events of last fall have provided an impetus for the government to look at the larger picture of how it provides homeland security and how it can best accomplish associated missions. Yet, even for those agencies that are not being integrated into DHS, there remains a very real need and possibly a unique opportunity to rethink approaches and priorities to enable them to better target their resources to address our most urgent needs. In some cases, the new emphasis on homeland security has prompted attention to long-standing problems that have suddenly become more pressing.
For example, we have mentioned the overlapping and duplicative food safety programs in the federal government. While such overlap has contributed to poor coordination and inefficient allocation of resources, these issues take on a new, and potentially more ominous, meaning after September 11th given the threat of bio-terrorism. A consolidated approach can facilitate a concerted and effective response to new threats. The federal role in law enforcement, especially in connection with securing our borders, is another area that is ripe for re-examination following the events of September 11th. In the past 20 years, the federal government has taken on a larger role in financing criminal justice activities that have traditionally been viewed as the province of state and local governments. Given the daunting new law enforcement responsibilities and limited budgetary resources at all levels, it is important to consider whether these new demands should prompt a reassessment of criminal justice roles and responsibilities at the federal, state, and local levels.

As Congress considers legislation for a new homeland security department, it is important to note that simply moving agencies into a new organizational structure will, by itself, be insufficient to create the dynamic environment required to meet performance expectations for protecting and defending the nation against terrorism. It is critical to recognize the important management and implementation challenges the new department will face. These challenges are already evident at TSA, which is under considerable pressure to build a strong workforce and meet numerous deadlines for integrating technology and security functions. Moreover, Congress should be aware that fundamental problems currently exist at some of the agencies slated to become part of the new department. DHS will need to pay special attention to these agencies to ensure that they can maintain readiness while confronting significant management problems. For example, several of the agencies currently face challenges in administering their programs, managing their human capital, and implementing and securing information technology systems. Absent immediate and sustained attention, these long-standing problems are likely to remain once the transfer is complete. Our past work has demonstrated that these management challenges exist within INS, APHIS, and FEMA.

Program management and implementation have been a particular challenge for INS, which has a dual mission of enforcing laws regarding illegal immigration and providing immigration and naturalization services for aliens who enter and reside legally in the U.S. This "mission overload" has kept INS from succeeding at either of its primary functions. In 1997, the bipartisan Commission on Immigration Reform stated that INS' service and enforcement functions were incompatible and that tasking one agency with carrying out both caused problems, such as competition for resources, lack of coordination and cooperation, and personnel practices that created confusion regarding mission and responsibilities. For example, INS does not have procedures in place to coordinate its resources for initiating and managing its programs to combat alien smuggling. In several border areas, multiple antismuggling units exist that operate autonomously, overlap in jurisdiction, and report to different INS officials.
In addition, INS field officials lack clear criteria on which antismuggling cases to investigate, resulting in inconsistent decision-making across locations.

Managing human capital also remains a challenge for INS, APHIS, and FEMA. For INS, weaknesses in managing human capital have affected various functions. Because of cutbacks or delays in training, a large portion of INS' staff will be relatively inexperienced and inadequately trained for processing visas for specialty occupations. Furthermore, while INS officials believe they need more staff to keep up with the workload, they could not specify the types of staff needed or where they should be located because the agency lacks a staff allocation model and related procedures. APHIS, one of the three primary agencies responsible for monitoring the entry of cargo and passengers into the U.S., has struggled to keep pace with its heavy workload at ports of entry. These conditions have led APHIS inspectors to shortcut cargo inspection procedures, thereby jeopardizing the quality of the inspections conducted. In addition, APHIS has little assurance that it is effectively deploying its limited inspection resources because of weaknesses in its staffing models. Likewise, FEMA still struggles to use its disaster relief staff effectively, although it has reported progress in improving its Disaster Field Office operations by convening a review council to study those operations and by implementing corrective actions.

Agencies' efforts to implement information technology systems, as well as to use and secure the information within those systems, have also proved challenging. For example, INS lacks an agency-wide automated case tracking and management system to help it monitor and coordinate its investigations. Further, INS' antismuggling intelligence efforts have been hampered by an inefficient and cumbersome process for retrieving and analyzing intelligence information and by the lack of clear guidance to INS staff about how to gather, analyze, and disseminate such information. Within APHIS, no central automated system has been implemented to allow agency-wide access to information on the status of shipments on hold at ports, forcing inspection staff to use a manual record-keeping system that does not reliably track this information. For FEMA, material weaknesses in its access controls and program change controls have contributed to deficiencies in its financial information systems.

The creation of the Department of Homeland Security will be one of the largest, most complex restructurings ever undertaken. The department and its leaders will face many challenges, including organizational, human capital, process, technology, and environmental issues that must be sorted out at the same time that the new department is working to maintain readiness. Strategic planning will be critical to maintaining readiness, managing risk, and balancing priorities, and the department's broad mission will depend on many partners for success. Moreover, sound management systems and practices will be integral to the department's ability to achieve its mission effectively and to be held accountable for results. A strategic plan should be the cornerstone of DHS' planning structure. It should clearly articulate the agency's mission, goals, and objectives, and the strategies the department will use to achieve them.
It provides a focal point for all planning efforts and is integral to how an organization structures itself to accomplish its mission. In addition, a comprehensive transition plan that clearly delineates timetables and resource requirements will be vital to managing this reorganization. A consolidation of this magnitude cannot be accomplished in months. As past experience shows, it will take years to truly consolidate the programs, functions, and activities being brought under the umbrella of DHS. The President has taken a significant first step by establishing a transition planning office in the Office of Management and Budget. Congress should consider requiring a comprehensive transition plan and periodic progress reports as part of its oversight of the consolidation.

The magnitude of the challenges that DHS faces calls for comprehensive and rigorous planning to guide decisions about how to make the department work effectively and achieve high performance. Leadership will be needed to establish long-range plans, to direct and coordinate the department's various interrelated policies and functions, and to achieve its goals and objectives. Management must also develop specific short-range plans to direct resources efficiently among functions and to assist in making decisions about day-to-day operations. DHS must define priorities, goals, and plans in concert with other agencies, Congress, and outside interest groups, while also leveraging the potential and dynamism of its new units. Leading organizations start by assessing the extent to which their programs and activities contribute to meeting their mission and intended results. An organization's activities, core processes, and resources must be aligned to support its missions and help it achieve its goals. It is not uncommon for new leadership teams to find that their organization structures are obsolete and inadequate to modern demands, that spans of control and field-to-headquarters ratios are misaligned, and that changes are required. For example, the agencies proposed for inclusion in DHS have unique field structures, and integrating them will be a significant challenge given the natural tension among organizational, functional, and geographic orientations. Flexibility will be needed to accomplish this difficult management task, as well as many others.

The President's proposal would consolidate many homeland security functions and activities. However, the new department ultimately will depend for its success on the relationships it builds both within and outside the department. As we have indicated, the recently reported intelligence-sharing challenges amply illustrate the need for strong partnerships and full communication among critical stakeholders. There is a growing understanding that any meaningful results agencies hope to achieve are accomplished through matrixed relationships, or networks, of governmental and nongovernmental organizations working together toward a common purpose. These matrixed relationships exist on at least three levels. First, they support the various internal units of an agency. Second, they include the relationships among the components of a parent department, as well as those between individual components and the department.
Matrixed relationships are also developed externally, including relationships with other federal agencies, domestic and international organizations, for-profit and not-for-profit contractors, and state and local governments, among others. Internally, leading organizations seek to ensure that managers, teams, and employees at all levels are given the authority they need to accomplish their goals and work collaboratively to achieve organizational outcomes. Communication flows up and down the organization to ensure that line staff can provide leadership with the perspective and information that leadership needs to make decisions. Likewise, senior leadership keeps line staff informed of key developments and issues so that the staff can best contribute to achieving the organization's goals. There is no question that effective communication strategies are key to any major consolidation or transformation effort.

Collaboration, coordination, and communication are equally important across agency boundaries. However, our work has also shown that agencies encounter a range of barriers when they attempt coordination. In our past work, we have offered several possible approaches for better managing crosscutting programs – such as improved coordination, integration, and consolidation – to ensure that crosscutting goals are consistent, that program efforts are mutually reinforcing, and that, where appropriate, common or complementary performance measures are used as a basis for management.

The proposed legislation provides for the new department to reach out to state and local governments and the private sector to coordinate and integrate planning, communications, information, and recovery efforts addressing homeland security. This is an important recognition of the critical role played by nonfederal entities in protecting the nation from terrorist attacks. State and local governments play primary roles in performing functions that will be essential to addressing our new challenges effectively. Much attention has already been paid to their role as first responders in all disasters, whether caused by terrorist attacks or by nature. State and local governments also have roles to play in protecting critical infrastructure and providing public health and law enforcement response capability. The private sector's ownership of energy and telecommunications infrastructure is but one indicator of the critical role that the corporate sector must play in addressing threats to our homeland. Achieving national preparedness and response goals hinges on the federal government's ability to form effective partnerships with nonfederal entities. Therefore, federal initiatives should be conceived as national, not federal, in nature. To develop effective partnerships, the new department needs to gain the full participation and buy-in of its partners in both policy formulation and implementation. DHS will need to balance national interests with the unique needs and interests of nonfederal partners. One size will not, nor should it, fit all.

It is important to recognize both the opportunities and the risks associated with partnerships. While partnerships offer the opportunity to leverage the legal, financial, and human capital assets of partners for national preparedness, each of these nonfederal entities has goals and priorities that are independent of the federal government. In designing tools to engage these actors, the department needs to be aware of the potential for goal slippage and resource diversion.
For instance, in providing grants to state or local governments for training and equipment, federal officials should be alert to the potential for these governments to use the grants to substitute for their own resources, essentially converting a targeted federal grant into a general revenue sharing initiative. Maintenance-of-effort provisions can be included to protect against such risk. Designing and managing the tools of public policy to engage and work constructively with third parties has become a new skill required of federal agencies, and one that the new department will need to address.

A good illustration of the relevance of partnerships involves the protection of the nation's borders against threats arriving aboard shipping cargo. The Customs Service currently inspects only two percent of the cargo arriving in American ports, and it is probably unrealistic to expect significant increases in coverage through inspections, even with higher numbers of federal inspectors. Rather, a more effective strategy calls for the federal government to work proactively with shipping companies to gain their active buy-in to self-inspections and more rigorous protection of cargo. Partnerships with foreign ports are also critical to preventing the shipping of suspicious items in the first place. Although vital to national security, the protection of our ports illustrates the critical role played by partnerships spanning sectors of the economy and nations.

A performance management system that promotes the alignment of institutional, unit, and individual accountability to achieve results will be an essential component of the new department's success. High-performing organizations know how the services and functions they deliver contribute to achieving the results of their organizations. Our work has shown that high-performing, results-oriented organizations share three characteristics: they (1) define clear missions and desired outcomes, (2) measure performance to gauge progress, and (3) use performance information as a basis for decision-making. These characteristics are consistent with the Government Performance and Results Act and should guide the development of a strong performance management system for the new department. The first step for the department's leadership will be to define its mission and desired outcomes. Activities, core processes, and resources will have to be aligned, which will require cascading the department's goals and objectives down through the organization. Further, an effective performance management system will require the involvement of stakeholders and a full understanding of the environment in which the department operates. A good performance management system fosters institutional, unit, and individual accountability. One way to inculcate a culture of excellence and results orientation is to align individual employees' performance expectations with agency goals and desired outcomes so that individuals understand the connection between their daily activities and their organization's success. High-performing organizations have recognized that a key element of a fully successful performance management system is to create a "line of sight" that shows how individual responsibilities contribute to organizational goals. These organizations align their top leadership's performance expectations with organizational goals and then cascade performance expectations to lower organizational levels.
An organization's people are its most important asset. People define an organization, affect its capacity to perform, and represent its knowledge base. To help agency leaders integrate human capital considerations into daily decision-making and into the program results they seek to achieve, we recently released an exposure draft of a model of strategic human capital management that highlights the kinds of thinking agencies should apply and the steps they can take to manage their human capital more strategically. The model focuses on four cornerstones of effective human capital management: leadership; strategic human capital planning; acquiring, developing, and retaining talent; and a results-oriented organizational culture. The new department may find this model useful in guiding its efforts. One of the major challenges DHS faces is the creation of a common organizational culture to support a unified mission, a common set of core values, and organization-wide strategic goals, while simultaneously ensuring that the various components have the flexibility and authorities they need to achieve results. When I have discussed the need for government-wide reforms in strategic human capital management, I have often referred to a three-step process that should be used in making needed changes. This process may be helpful to Congress as it considers the human capital and other management authorities it will provide the department. Like other departments, DHS should be encouraged to make appropriate use of all authorities at its disposal. We often find that agencies are not taking full advantage of the tools, incentives, and authorities that Congress and the central management agencies have provided. DHS will also find it beneficial to identify targeted statutory changes that Congress could consider to enhance DHS's efficiency and effectiveness (e.g., additional hiring and compensation flexibility for critical skill areas, and targeted early-out and buyout authority). In this regard, Congress may wish to consider the approach it used in forming TSA, which included provisions for a progress report and related recommendations for congressional action.

The new department will face tremendous communications, systems, and information technology challenges. Programs and agencies will be brought together in the new department from throughout the government, and each will bring its own communications and information systems. Integrating these diverse systems to enable effective communication and information sharing, both within the department and with those outside it, will be a tremendous undertaking. Further, considering the sensitivity of the data at the proposed department, securing its information systems and networks will be a major challenge. Since 1996, we have reported that poor information security is a widespread federal government problem with potentially devastating consequences. Effective leadership and focused management control will be critical to meeting these challenges. We recommend that a CIO management structure, as prescribed by the Clinger-Cohen Act of 1996, be established to provide the leadership necessary to direct this complex, vital function. Further, it will be critical that an enterprise architecture be developed to guide the integration and modernization of information systems. An enterprise architecture consists of models that describe how the enterprise operates now and how it needs to operate in the future.
Without an enterprise architecture to guide and constrain IT investments, stovepipe operations and systems can emerge, which in turn lead to needless duplication, incompatibilities, and additional costs. By its very nature, the combining of organizations will result in stovepipes. Integrating, modernizing, and securing the new department's information systems will require strong leadership, reengineering of business processes to meet corporate goals, and effective planning.

Effective knowledge management captures the collective body of information and intellect within an organization, treats the resultant knowledge base as a valued asset, and makes relevant parts of the knowledge base available to decisionmakers at all levels of the organization. Knowledge management is closely aligned with enterprise architecture management, because both focus on systematically identifying the information needs of the organization and describing the means for sharing this information among those who need it. The people brought together in the new department will have diverse skills and knowledge, and it will be critical for the department to build an effective knowledge management capability. Elements involved in institutionalizing this function include:

Deciding with whom (both internally and externally) to share knowledge;

Deciding what knowledge is to be shared, through performing a knowledge audit and creating a knowledge map;

Deciding how the knowledge is to be shared, through creating apprenticeship/mentoring programs and communities of practice for transferring tacit knowledge, identifying best practices and lessons learned, managing knowledge content, and evaluating methods for sharing knowledge; and

Sharing and using organizational knowledge, through obtaining sustained executive commitment, integrating the knowledge management function across the enterprise, embedding it in business models and communications strategies, and measuring performance and value.

The events of September 11th and the efforts of the Administration and Congress to protect the country from future terrorist attacks have generated enormous demands on resources in a short period of time. The FY2002 appropriations and the nearly simultaneous transmission of an emergency supplemental and the FY2003 budget request were followed shortly by a second FY2002 supplemental. This rapid growth in spending for homeland security has shifted budget priorities in ways that we are only beginning to understand. As Congress considers the resource implications of the proposed department, both budget and accountability issues need to be addressed. It will be important for both OMB and Congress to develop a process to track the budget authority and outlays associated with homeland security through the President's budget proposals, the congressional budget resolution, and the appropriations process. Such a tracking system is vital if Congress is to address total spending for homeland security and to ensure that the total allocations are in fact carried through in the subsequent authorization and appropriations process. In addition, DHS must also track spending for the non-homeland security missions of the department. As we have indicated, many important activities relevant to homeland security, such as the protection of nuclear power plants and drinking water, will continue to be housed in agencies outside the department and will require the new department to work collaboratively.
The proposed legislation addresses this challenge in several instances by authorizing the new department to transfer and/or control resources for some of these related programs. For instance, the department is given authority to set priorities for research on bioterrorism by the Department of Health and Human Services, but it is unclear how this will occur. Although consolidating activities in one department may produce savings over the longer term, there will be certain near-term transition costs associated with setting up the new agency, acquiring space, providing for new information systems, and other assorted administrative expenses. Some of these costs, such as developing new systems, may be one-time in nature, while others, such as the overhead necessary to administer the department, will be continuing. Congress may very well decide that these new costs should be absorbed from the appropriations of the programs and agencies being consolidated into the department. Indeed, the Administration's proposal appears to facilitate this by authorizing the Secretary to draw up to five percent of unobligated balances from accounts to be included in the new department, after notification to Congress. While these transfers may be sufficient to fund the transition, the costs of the transition should be transparent to Congress up front as it considers the proposed new department. The initial estimated funding for the new department is $37.7 billion. This estimate reportedly includes the total funds for both the homeland and non-homeland security missions of the incoming agencies and programs.

Agencies and programs migrating to the new department have different financial systems, as well as different financial management challenges. Further, the new department would have numerous financial relationships with other federal departments, as well as with state and local governments and the private sector. It will be essential that the department exercise very strong financial stewardship in managing these funds. It is important to re-emphasize that the department should be brought under the Chief Financial Officers (CFO) Act and related financial management statutes. A Chief Financial Officer, as provided by the CFO Act, would be a significant step toward ensuring the senior leadership necessary to cut across organizational boundaries, institutionalize sound financial systems and practices, and provide good internal controls and accountability for financial resources. Systems that produce reliable financial information will be critical to managing day-to-day operations and holding people accountable.

Sound acquisition management is central to accomplishing the department's mission. While the details are still emerging, the new department is expected to spend billions annually to acquire a broad range of products, technologies, and services from private-sector companies. Getting the most from this investment will depend on how well the department manages its acquisition activities. Our reports have shown that some of the government's largest procurement operations are not always well run. In fact, three agencies with major procurement operations – DOD, DOE, and NASA – have been on our high-risk list for the last 10 years.
To ensure successful acquisition outcomes and to effectively integrate the diverse organizational elements that will make up the new department, we believe the department needs to adopt a strategic perspective on its acquisition needs, including establishing a Chief Acquisition Officer. Key elements of a strategic approach are leadership, sound acquisition strategies, and a highly skilled workforce. Our acquisition best practices work shows that a procurement executive or chief acquisition officer plays a crucial role in implementing a strategic approach to acquisition. At the leading companies we visited, such officials were corporate executives who had authority to influence decisions on acquisitions, to implement needed structural, process, or role changes, and to provide the clout necessary to obtain buy-in and acceptance of reengineering and reform efforts. Good acquisition outcomes start with sound acquisition strategies. Before committing substantial resources, the department should look across all of its organizational elements to ensure that requirements are linked to mission needs and that costs and alternative solutions have been considered. Finally, having the right people with the right skills to manage acquisitions successfully is critical to achieving the department's mission. Many agencies are experiencing significant skill and experience imbalances, and this will be a particular leadership challenge for the acquisition function.

The administration's proposal would allow the department to deviate from the normal federal acquisition rules and laws. Certainly, there could be situations where it might be necessary to expedite or streamline procurement processes so that the department is not handicapped in its ability to protect American citizens against terrorism, and we support such flexibilities in those situations. However, it is not clear from our review of the administration's proposal exactly what flexibilities are being requested. Moreover, depending on how far-reaching such flexibilities are, we are concerned about whether the department will have an acquisition workforce with the skills and capabilities to execute the acquisition function outside of the normal procurement structure.

A risk assessment is central to risk management and involves the consideration of several factors. Generally, the risk assessment process is a deliberate, analytical approach to identifying which threats can exploit which vulnerabilities in an organization's specific assets. The factors to consider include analyzing vulnerabilities, identifying and characterizing the threat, assessing the value of the asset, identifying and costing countermeasures, and assessing risks. After these factors are considered, an organization can decide on actions to reduce the risk. It is very difficult to rank threats; it is more constructive to consider a range of threats and to review the vulnerabilities and criticality of assets when making decisions on allocating resources toward homeland security. As a nation, we must be able to weather a variety of threat scenarios with prudent planning and execution. Therefore, it is very important to ensure that the right resources are applied to the most appropriate areas based on a risk-based management approach.

In summary, I have discussed the reorganization of homeland security functions and some critical factors for success. However, the single most important element of a successful reorganization is the commitment of top leaders.
Top leadership involvement and clear lines of accountability for making management improvements are critical to overcoming an organization's natural resistance to change, marshalling the resources needed to improve management, and building and maintaining organization-wide commitment to new ways of doing business. Organizational cultures will not be transformed, and new visions and ways of doing business will not take root, without strong and sustained leadership. Strong and visionary leadership will be vital to creating a unified, focused organization, as opposed to a group of separate units under a single roof. Madam Chair, this concludes my written testimony. I would be pleased to respond to any questions that you or members of the Subcommittee may have at this time.
Since September 11, the President and Congress have taken aggressive steps to protect the nation, including creating an Office of Homeland Security (OHS); passing new laws, such as the USA Patriot Act and an emergency supplemental spending bill; establishing a new agency to improve transportation security; and working with federal, state, and local governments, private sector entities, non-governmental organizations and other countries to prevent future terrorist acts and to bring those individuals responsible to justice. More recently, Congress and the President have proposed greater consolidation and coordination of various agencies and activities. The President has proposed establishing a Department of Homeland Security (DHS) and has sent draft legislation to Congress. This testimony focuses on two major issues: (1) the need for reorganization and the principles and criteria to help evaluate what agencies and missions should be included in or left out of the new DHS and (2) issues related to the transition, cost, and implementation challenges of the new department.
Pakistan is central to U.S. efforts to disrupt, dismantle, and defeat al Qaeda and deny its resurgence in the Afghanistan-Pakistan region. The United States has sought to secure these interests through counterterrorism and counterinsurgency cooperation with Pakistan as well as through a long-term partnership anchored, in part, by civilian and military assistance. In fiscal years 2002 through 2012, the U.S. government provided the Pakistani government more than $26 billion in assistance and reimbursements toward these goals. To achieve U.S. goals, multiple U.S. agencies provide assistance to Pakistan. Table 1 summarizes example activities of key U.S. agencies providing this assistance.

Available agency data show that U.S. officials experience delays in the issuance of both visas for travel to Pakistan and visa extensions, which have affected the implementation and oversight of U.S. assistance to Pakistan. Agencies reported that these delays affect the implementation of U.S. programs in multiple ways – for example, creating staffing gaps for critical embassy positions and necessitating the cancellation of training to assist the Pakistani government in areas such as antiterrorism and counternarcotics.

Our analysis of available agency data shows that U.S. officials have experienced delays in obtaining Pakistani visas. According to the Pakistani embassy, and as reported by State, the embassy's policy is to issue visas for U.S. officials within 6 weeks of their application date. We obtained data from components of DOD, DOJ, State, and USAID on processing times for U.S. officials' applications for official or diplomatic Pakistani visas in fiscal years 2010 through 2012. Our analysis of these data shows that approximately 82 percent of visas for U.S. officials were issued within 6 weeks. However, about 18 percent of the visas took longer than 6 weeks to be issued, with approximately 3 percent taking 16 weeks or longer. See figure 3 for more information. Processing time for issued visas varies depending on the agency or component submitting applications. For instance, for visas issued by the Pakistani embassy, processing took more than 6 weeks for about 4 percent of applications submitted by USAID's Office of Afghanistan and Pakistan Affairs, compared with approximately 33 percent of applications submitted by State's Orientation and In-Processing Center.

Moreover, according to U.S. officials, an analysis of issued visas may underestimate the extent of visa delays. First, it may exclude cases in which a visa was not issued, including cases where (1) an individual withdrew a visa application because the visa was not received prior to the planned departure date or (2) a visa application that had taken longer than 6 weeks to process was still pending at the Pakistani embassy at the time of our analysis. For instance, from November 2009 to June 2012, State's Special Issuance Agency pulled about 180 passports – representing approximately 10 percent of the Pakistani visa applications that it submitted during this period – from the Pakistani embassy without visa issuance, including approximately 140 passports that the agency pulled after a visa was not issued prior to the individual's planned departure date. Further, an analysis of visa processing time may not include delays that certain DOD officials face in obtaining the non-objection certificates and letters of invitation required for their visa applications.
For instance, DOD officials told us that obtaining this documentation takes approximately 6 weeks and that visa applications for DOD personnel are submitted only after a non-objection certificate or letter of invitation has been obtained. Therefore, any visa delays that DOD personnel experience come in addition to the wait time for this documentation. DOD officials told us that in addition to waiting 4 to 6 weeks to obtain a Pakistani visa, some DOD travelers have waited months for the documentation that must accompany the visa application.

U.S. officials also experience delays in obtaining visa extensions after arrival in Pakistan. The U.S. embassy in Islamabad monitors the visa status of U.S. officials accredited to the U.S. mission in Pakistan and processes applications for any required visa extensions for these officials. According to State, visa extensions are granted by the Pakistani Ministry of Foreign Affairs. Between 2010 and 2012, the U.S. embassy processed applications for approximately 2,200 visa extensions for U.S. officials in Pakistan. Approximately 59 percent of these visa extensions took longer than 6 weeks to be issued, with approximately 5 percent taking 16 weeks or longer. See figure 4 for more information. Additionally, data from the U.S. embassy in Islamabad show that it never received approximately 50 visa extensions for which it had applied. U.S. officials told us that in some of these cases, the applicants experienced such a lengthy delay in receiving their visa extensions that they decided to leave Pakistan rather than overstay their initial visa while waiting for the extension.

In addition to reporting delays related to obtaining visas for U.S. officials, agencies noted challenges in obtaining visas for contractors assisting in implementing programs for U.S. agencies in Pakistan. Both State's Antiterrorism Assistance Program and DOJ's International Criminal Investigative Training Assistance Program rely on contractor staff to provide training to Pakistani law enforcement personnel, and both agencies noted delays in obtaining visas for these instructors. For example, State reported that it applied for approximately 40 instructor visas for its Antiterrorism Assistance Program between October and December 2012 but that no instructors received visas in time to provide instruction as planned. Similarly, officials of DOJ's International Criminal Investigative Training Assistance Program told us that they have experienced delays in obtaining visas for their contractor staff. According to State's Bureau of International Narcotics and Law Enforcement Affairs Office of Aviation for Pakistan, it has experienced delays in obtaining contractor visas, including one visa that it applied for in October 2010 and that was still pending as of January 2013. In addition, as of February 2013, State told us it had experienced visa wait times of longer than 6 weeks for approximately 30 facilities maintenance contractors overseeing reconstruction of the U.S. embassy in Islamabad.

The reasons for delays in processing Pakistani visas are not well understood. Officials from DOD, DOJ, State, USAID, and the U.S. embassy in Islamabad said that they receive little specific information from Pakistan on the reasons for visa delays. Officials from those agencies stated that factors in the bilateral relationship between the United States and Pakistan appear to affect the length of visa processing.
Agencies have identified visa delays as a risk to effective implementation of U.S. programs in Pakistan, noting that the delays cause staffing gaps, limit opportunities to train Pakistani security personnel, constrain oversight and monitoring of U.S. programs, and complicate program planning and implementation.

First, according to DOD, DHS, DOJ, State, and USAID officials, visa delays cause staffing gaps for positions at the U.S. embassy in Islamabad, including those providing security and law enforcement assistance to Pakistan. For example, DOD officials told us that visas for key positions have been significantly delayed. Specifically, DOD officials noted that the visa for the training officer for the International Military Education and Training program in Pakistan was delayed for several months, during which the position was vacant. Additionally, they told us that two individuals scheduled to staff the Defense Attaché Office waited approximately 8 months for their visas and were ultimately reassigned because of the delays. State officials also noted staffing gaps due to delays in obtaining visas for security staff and staff managing counternarcotics and law enforcement programs. According to State officials, visa delays particularly affect Regional Security Office and Marine Security Guard staff, who provide protection for the U.S. embassy in Islamabad. State noted that visas for approximately 40 Regional Security Officers and Marine Security Guards were significantly delayed, some for as long as 9 months. Officials told us that such delays can lead to staffing gaps and that these gaps cannot always be filled by obtaining personnel on temporary assignment. Moreover, because embassy staffing plans are designed to align the number of staff with U.S. foreign policy priorities, security concerns, and other constraints, staffing gaps can undermine this alignment. See table 3 for additional examples of staffing gaps at the U.S. embassy in Islamabad that have resulted from visa delays.

In addition, according to agency officials, visa delays and related staffing gaps have limited their opportunities to train Pakistani security personnel. The officials said that delays in obtaining visas for instructors scheduled to train Pakistani officials have caused agencies to postpone or cancel training in a variety of areas, including antiterrorism, counternarcotics and law enforcement, use and maintenance of military equipment, and countering improvised explosive devices. For example, in the first quarter of fiscal year 2013, State's Antiterrorism Assistance program canceled 14 of 31 classes on critical management topics, such as tactical, negotiation, and investigation skills in combating terrorism, because of delays in obtaining visas for instructors. In addition, State continued to operate and maintain its training facility in Pakistan for the Antiterrorism Assistance program, although the classrooms were empty because of the canceled training. Moreover, officials at State's Bureau of International Narcotics and Law Enforcement Affairs told us that visa delays have disrupted the delivery of law enforcement and rule-of-law assistance to Pakistan's criminal justice sector. According to these officials, the bureau has provided Pakistan with 17 aircraft, valued at $50 million, for counternarcotics assistance.
However, officials at the bureau's Office of Aviation for Pakistan told us that in June 2010 and November 2012, respectively, the office placed three C-208 aircraft and six Huey-II helicopters in storage because of a shortage of personnel in Pakistan that had resulted from visa delays. According to these officials, this personnel shortage limited the office's ability to train Pakistanis and to perform the aircraft inspections and repairs necessary for the proper use of the equipment. In addition, State officials noted that they had canceled police and rule-of-law training because of visa delays. DOD officials noted similar cancellations of counternarcotics training, and several agencies noted the cancellation of training on countering improvised explosive devices. DOJ International Criminal Investigative Training Assistance Program officials told us that delays in obtaining visas for instructors were so pervasive that the program would be forced to reduce training after March 2013 unless it received new visas for its instructor staff.

Moreover, visa delays have reportedly constrained oversight and monitoring of U.S. programs in Pakistan. Officials from the Offices of Inspector General for USAID, State, and DOD's U.S. Central Command stated that visa delays have disrupted inspections and audits. State and USAID officials also noted that visa delays create challenges for monitoring and evaluating program assistance, including antiterrorism and development programs. For instance, according to State officials, an assessment and evaluation of the Pakistan Antiterrorism Assistance program in June 2012 was delayed when several team members were unable to participate in the trip because their visas were not issued in a timely manner. DHS officials also told us that because of visa delays, DHS cannot conduct an audit of the inventory of supplies or replace outdated nonintrusive inspection technology used for the Secure Freight Initiative in Pakistan, which captures data on containers bound for the United States and alerts U.S. and Pakistani officials to security risks.

Further, even when visa delays do not lead to staffing gaps, they complicate program planning and implementation, according to agency officials. Regarding planning, DOD, State, and USAID officials noted that visa delays create challenges in planning travel to Pakistan, as it is not unusual for U.S. officials to receive visas very close to the day of their planned departure. Visa delays also slow program implementation. USAID officials stated that visa delays can affect programs in Pakistan by slowing the arrival of technical experts needed to assist with project design and implementation. Similarly, DOJ International Criminal Investigative Training Assistance Program officials told us that visa delays make it very challenging for them to plan courses and training schedules. Additionally, officials told us that the issuance of visas with travel restrictions can also create challenges and increase project costs. For instance, according to USAID officials, the various types of visas received by U.S. officials create challenges for staff once they are in Pakistan, as staff must constantly monitor their visa status and may have to leave Pakistan to reset their visas in compliance with Pakistani immigration regulations. For example, U.S. officials in Pakistan with visas that allow multiple entries for 1 year, with a maximum stay of 90 days at a time, must exit Pakistan every 90 days to reset their visas.
USAID's Office of Inspector General estimated that it has spent approximately $25,000 in additional travel costs because of delays in receiving visa extensions for staff traveling on visas that require a reset every 90 days. In addition, State officials told us that they have experienced challenges in obtaining 1-year multi-entry visas for staff providing oversight of construction on the U.S. embassy compound in Islamabad. According to State, the need for staff to reset their short-term visas limits the efficiency of embassy construction and adds to the cost of the project. For instance, State reported in May 2012 that these disruptions had added approximately $2 million to the project's overall cost.

Agencies have taken various steps to address Pakistani visa delays, but reporting to Congress does not provide comprehensive information on the risk of visa delays government-wide. The Enhanced Partnership with Pakistan Act of 2009 requires State, in consultation with DOD, to identify and report to Congress on risks to effective use and oversight of U.S. funds, including any shortfall in U.S. human resources, among other things. In addition, according to federal standards for internal control, analyzing information on identified risks could help agencies better manage such risks. According to officials, agencies have taken various steps to manage visa delays and their effects. For instance, State has conducted high-level discussions with the Pakistani government regarding visa delays, and agencies affected by Pakistani visa delays have shifted training to other countries. However, State's reporting to Congress does not provide comprehensive information on the risk of visa delays government-wide. State has reported visa delays as a challenge to the implementation of its programs, but its reports do not include information regarding the risks of visa delays to the human resources of other agencies, although components of DOD, DOJ, and USAID told us that they had experienced staffing gaps caused by visa delays. Reporting comprehensive information about the risks of visa delays could provide a more complete picture of the challenges to implementing U.S. assistance and better inform any potential diplomatic discussions between the United States and Pakistan regarding delays.

Agencies have taken steps to address the effects of delays on their operations, including reprogramming funds for other priority initiatives, shifting training to other countries, and tracking information on the status of obtaining Pakistani visas and visa extensions. See table 4 for more examples of steps that agencies reported having taken to address the effects of visa delays on their operations. In addition to taking steps to address the effects of visa delays, officials at State – the agency responsible for conducting diplomatic discussions with Pakistan – told us that they had engaged in high-level discussions with Pakistani officials in an attempt to expedite visa processing times for U.S. officials. State officials noted that mission leadership in Pakistan has raised the issue of visa delays, and particular staffing gaps at the U.S. embassy that have resulted from delays, with Pakistani ministerial personnel. The U.S. Secretary of State has also discussed visa delays with the Pakistani Foreign Minister to try to resolve the issue.
State officials told us State will continue to discuss these issues with Pakistan as warranted by events. While operations generally continue, agencies noted that their mitigation actions do not fully resolve the effects of visa delays and that they cannot implement programs as effectively or efficiently. Staff from the Bureau of International Narcotics and Law Enforcement Affairs said that although they take steps to deliver assistance despite visa delays, the difficulty in maintaining the continuous presence of in-country staff reduces readily available subject matter expertise, slows interaction with Pakistani officials, and lessens the ability to quickly increase staff to address emerging needs. Regarding efficiency, DOD staff said that shifting training to third-country locations raises costs significantly, including costs for airlifting Pakistani personnel and providing funds for insurance and other incidental needs, and results in the training of fewer Pakistani personnel. DOJ's Office of Overseas Prosecutorial Development, Assistance and Training staff similarly noted that although relocating training has had a positive outcome, it has incurred higher costs for the U.S. government.

In reporting to Congress, State has not provided comprehensive information on the risks of Pakistani visa delays government-wide. The Enhanced Partnership with Pakistan Act of 2009 requires State, in consultation with DOD, to identify and report on a semiannual basis to Congress about risks to effective use and oversight of U.S. funds to Pakistan, including any shortfall in U.S. human resources, among other things. In compliance with the act, State has produced semiannual monitoring reports describing the assistance provided to Pakistan during the preceding 180-day period. State's report for the period of March 2010 to December 2010 did not cite visa delays as a risk to its programs. In its reports covering January 2011 to November 2011 and December 2011 to June 2012, State broadly cited challenges to program implementation due to visa delays, and in the latter report, State included the example of disruptions to its antiterrorism training efforts in Pakistan caused by visa delays. However, State's reports do not include information regarding the risks of Pakistani visa delays affecting U.S. human resources for other State programs and for agencies other than State, although officials from components of DOD, USAID, and DOJ reported to us that such delays had caused staffing gaps during this time period. For instance, DOD experienced visa delays that created staffing gaps affecting its security assistance office, including a vacancy in the position of the officer responsible for coordinating military exchange programs, while DOJ experienced a staffing gap for its Resident Legal Advisor.

Information from DOD, USAID, and DOJ components could be included in State's reports, given that officials from these agencies told us that they track information related to visa delays. Officials of other agency components, such as DOJ's Drug Enforcement Administration and DHS's U.S. Immigration and Customs Enforcement, told us that they do not retain data about visa processing times, although they maintain information on their staff's ability to travel to Pakistan. In addition, since passage of the Enhanced Partnership with Pakistan Act in October 2009, Congress has expressed continued interest in receiving information on visa delays.
For instance, in the Consolidated Appropriations Act, 2012, Congress required State to certify requested information about the timeliness of issuance of Pakistani visas to U.S. officials before certain funding for Pakistan could be provided. State waived the certification requirements for fiscal year 2012, as allowed in the act. DOD's fiscal year 2013 appropriation requires a similar certification from the Secretary of Defense, in consultation with the Secretary of State, before funds may be reimbursed to Pakistan for support provided to U.S. military operations. Without comprehensive reporting about the risks of visa delays and related staffing gaps, State's reporting to Congress may not provide a complete picture of the challenges the United States faces in managing and overseeing U.S. assistance to Pakistan, and agencies may lack information that could help them manage such risks.

Although the United States invested more than $26 billion in fiscal years 2002 through 2012 to assist the government of Pakistan, U.S. officials applying for Pakistani visas continue to face delays that they have identified as disrupting their efforts to provide assistance. Despite identifying these disruptions as a risk, State does not report comprehensive information on the extent of visa delays across the U.S. government. Complete and consistent reporting of such information could help the United States diagnose problems related to visa delays, enhance planning, and improve decision making to address the effects of such delays. In addition, tracking information on the risks of visa delays could help State provide more complete information in response to congressional reporting requirements and may help to inform future diplomatic negotiations between the United States and Pakistan to resolve this issue.

To improve the information provided to Congress and to inform potential diplomatic discussions, we recommend that the Secretary of State consult with U.S. agencies engaged in providing assistance to Pakistan to obtain information on Pakistani visa delays and include this information in State's future reporting to Congress.

We provided a draft of this report to State, DOD, DHS, DOJ, and USAID for their review and comment. State and DOD provided written comments, which we have reprinted in appendixes II and III, respectively. State and DOJ also provided technical comments, which we incorporated as appropriate. DHS and USAID had no comments.

In commenting on our report, State partially concurred with our recommendation that it should consult with U.S. agencies engaged in providing assistance to Pakistan to obtain information on visa delays and include this information in its future reporting to Congress. State noted that interagency coordination regarding Pakistani visa applications can be difficult, and additional staff would be required for State to coordinate all U.S. government Pakistani visa applications. In addition, according to State, it is important to note that certain visa applicants, such as U.S. officials traveling to Pakistan on short-term assignments and military or security personnel, experience longer wait times than other applicants. State noted that reporting to Congress that differentiates between these applicants would be more effective. However, State stated that our report prompted State and Embassy Islamabad to improve coordination procedures to better track visa applications within State and, to the extent possible, throughout the interagency.
We maintain that reporting of more comprehensive information by State on the risks of visa delays could better inform Congress regarding the challenges of implementing U.S. assistance in Pakistan. While we acknowledge that interagency coordination can be challenging, officials from various agencies—including DOD, DHS, DOJ, and USAID—told us they track or maintain information related to visa delays, which could facilitate State's efforts to obtain and report such information. Moreover, our recommendation would not require State to "coordinate all U.S. government visa applications for Pakistan"; rather, it asks that State consult with other agencies to obtain information these agencies already collect regarding visa delays to Pakistan. In addition, we are encouraged that our report prompted State to develop new procedures to enhance its tracking of visa applications government-wide. We agree with State that certain visa applicants may experience longer wait times than other applicants, including, as we note in our report, those providing security and law enforcement assistance to Pakistan. We believe that our recommendation is consistent with State identifying and reporting on which visa applicants experience longer wait times, and that such reporting could better inform Congress.

In commenting on our report, DOD agreed with the report's findings and observations. DOD also noted that Pakistani officials experience delays obtaining visas to travel to the United States, which "feed the narrative that the Pakistanis treat us no differently than we treat them," undermining requests for process improvements. While DOD notes that visa delays experienced by Pakistani officials may affect processing of visas for U.S. officials, we could not verify whether this has been a contributing cause because the government of Pakistan did not respond to our requests to discuss visa delays.

We are sending copies of this report to the appropriate congressional committees, the Secretary of State, and officials at DOD, DHS, DOJ, and USAID. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-7331 or johnsoncm@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.

This report reviews issues related to visa delays, including their extent and implications. Specifically, we examined (1) the extent to which U.S. officials experience delays obtaining Pakistani visas and the effects of these delays and (2) steps U.S. agencies have taken to address Pakistani visa delays. Our work focused on Pakistani visa applications of the Departments of Defense (DOD), Homeland Security (DHS), Justice (DOJ), State (State), and the U.S. Agency for International Development (USAID). We met with officials from relevant components of these agencies, including DOD's Office of the Under Secretary of Defense for Policy, Office of the Defense Representative Pakistan, U.S. Central Command, and DOD Passport Matters; DHS's U.S. Customs and Border Protection and U.S.
Immigration and Customs Enforcement; DOJ's Drug Enforcement Administration, Federal Bureau of Investigation, International Criminal Investigative Training Assistance Program, and Office of Overseas Prosecutorial Development, Assistance and Training; State's Bureau of South and Central Asian Affairs, Bureau of Consular Affairs, Bureau of Diplomatic Security, Bureau of International Narcotics and Law Enforcement Affairs, and Office of Inspector General; and USAID's Office of Afghanistan and Pakistan Affairs and Office of Inspector General. We focused on these agencies and components because personnel of these agencies constituted 99 percent of U.S. officials stationed in Pakistan as of May 2012.

To evaluate the extent to which U.S. officials experience delays obtaining Pakistani visas, and the effects of these delays, we collected available data on processing times for official and diplomatic visa applications to Pakistan in fiscal years 2010 through 2012 from DOD's Office of the Defense Representative Pakistan, DOJ's Federal Bureau of Investigation, State's Special Issuance Agency and Orientation and In-Processing Center, and USAID's Office of Afghanistan and Pakistan Affairs. Not all components had data available for the entire period of fiscal years 2010 through 2012. Although DOJ's Federal Bureau of Investigation provided data for the entire period, DOD's Office of the Defense Representative Pakistan provided data for December 2009 through September 2012, State's Special Issuance Agency provided data for November 2009 through September 2012, State's Orientation and In-Processing Center provided data for March 2010 through September 2012, and USAID's Office of Afghanistan and Pakistan Affairs provided data for December 2010 through September 2012. We also obtained data for January 2010 through December 2012 from the U.S. embassy in Islamabad on processing times for visa extensions for U.S. officials in Pakistan.

We determined visa processing times by examining the date of a visa application and the date the visa was received by the agency. We defined a delay as a processing time exceeding 6 weeks, because, according to the embassy of Pakistan, the stated processing time for visas for U.S. officials is within 6 weeks of their application date. To determine the extent of delays, we grouped processing times into categories by week. The category "1 week" includes processing times of 0 to 7 days, the category "2 weeks" includes processing times of 8 to 14 days, and the category "3 weeks" includes processing times of 15 to 21 days. Categories continue sequentially in this manner until the "16+ week" category, which includes processing times of 106 to 440 days.

Additionally, we obtained data on visa processing times for contractors from DOJ's International Criminal Investigative Training Assistance Program and State's Bureau of International Narcotics and Law Enforcement Affairs. However, because contractors are not U.S. officials, we did not combine these data with the data on visa processing times for U.S. officials. Further, to discuss the effects of Pakistani visa delays on the delivery and oversight of U.S. assistance in Pakistan, including the implementation of such assistance by contractors, we examined agency planning, budget, and oversight documents discussing visa delays, including mission strategic resource plans and quarterly progress and oversight reports on the civilian assistance program in Pakistan from 2010 to 2012.
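For illustration only, the sketch below shows one way the weekly grouping described above could be implemented. It is not GAO's actual analysis code; the function names, record fields, and example dates are hypothetical, and it assumes each record carries a simple application date and issuance date.

```python
# Illustrative sketch only -- not GAO's analysis tooling. Assumes each visa
# record has an application date and an issuance date; names are hypothetical.
from datetime import date

SIX_WEEKS = 42  # the embassy of Pakistan's stated processing time, in days

def processing_days(applied: date, issued: date) -> int:
    # Processing time: span from the application date to receipt of the visa.
    # (Records with missing dates or issued < applied would be set aside for
    # follow-up, consistent with basic reasonableness checks, not categorized.)
    return (issued - applied).days

def is_delayed(days: int) -> bool:
    # A delay is defined as a processing time exceeding 6 weeks.
    return days > SIX_WEEKS

def week_category(days: int) -> str:
    # "1 week" = 0-7 days, "2 weeks" = 8-14 days, ..., "16+ weeks" = 106+ days.
    if days >= 106:
        return "16+ weeks"
    weeks = max(1, -(-days // 7))  # ceiling division; day 0 falls in "1 week"
    return f"{weeks} week{'s' if weeks > 1 else ''}"

# Example: a visa applied for on June 1, 2012, and issued August 30, 2012,
# took 90 days -- a delay, falling in the "13 weeks" category.
days = processing_days(date(2012, 6, 1), date(2012, 8, 30))
print(days, is_delayed(days), week_category(days))  # 90 True 13 weeks
```

Under this scheme, a visa issued on its application date still falls in the "1 week" category, mirroring the description above that the category covers processing times of 0 to 7 days.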
Additionally, we interviewed officials from DOD, DHS, DOJ, State, and USAID to discuss the effects of visa delays on program delivery and oversight. The government of Pakistan did not respond to our requests to discuss visa delays for U.S. officials.

To examine the steps U.S. agencies have taken to address Pakistani visa delays, including steps to address associated risks that they had identified, we reviewed relevant agency documents, including planning and budget documents and available data on visa processing times, and conducted interviews with knowledgeable officials. We compared these steps with risk assessment standards in our Standards for Internal Control in the Federal Government. We also examined semiannual monitoring reports that State had provided to Congress in compliance with the Enhanced Partnership with Pakistan Act of 2009.

To assess the reliability of visa processing data, we (1) interviewed agency officials responsible for compiling these data and (2) performed basic reasonableness checks of the data for obvious inconsistency errors and completeness. When we found discrepancies, we brought them to the attention of relevant agency officials and worked with officials to correct the discrepancies before conducting our analyses. According to agency officials, these data may not include all visa applications to Pakistan, because U.S. officials may not always involve agency travel offices in processing their applications. Furthermore, as we report, our analysis of issued visas may understate processing time because it excludes cases in which a visa was not issued, such as cases in which (1) an individual withdrew a visa application because the visa was not received prior to the individual's planned departure date or (2) a visa application that had taken longer than 6 weeks to process was still pending at the Pakistani embassy at the time of our analysis. In addition, our analysis covers only visa processing time and does not include wait times to obtain non-objection certificates or letters of invitation. Despite these limitations, we determined that the data were sufficiently reliable for the purpose of making broad statements about processing times for completed visas. We also make a recommendation to address existing limitations in reporting of visa delays across agencies.

We conducted this performance audit from August 2012 to May 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

In addition to the individual named above, Hynek Kalkus (Assistant Director), Lina Khan, Lisa Reijula, and Biza Repko made key contributions to this report. Ashley Alley, Jenny Chanley, Karen Deans, Carol E. Finkler, Rebecca Gambler, and Mary Moutsos provided additional support.

Counterterrorism: U.S. Agencies Face Challenges Countering the Use of Improvised Explosive Devices in the Afghanistan/Pakistan Region. GAO-12-907T. Washington, D.C.: July 12, 2012.
Combating Terrorism: State Should Enhance Its Performance Measures for Assessing Efforts in Pakistan to Counter Improvised Explosive Devices. GAO-12-614. Washington, D.C.: May 15, 2012.
Pakistan Assistance: Relatively Little of the $3 Billion in Requested Assistance Is Subject to State's Certification of Pakistan's Progress on Nonproliferation and Counterterrorism Issues. GAO-11-786R. Washington, D.C.: July 19, 2011.
Department of State's Report to Congress and U.S. Oversight of Civilian Assistance to Pakistan Can Be Further Enhanced. GAO-11-310R. Washington, D.C.: February 17, 2011.
Accountability for U.S. Equipment Provided to Pakistani Security Forces in the Western Frontier Needs to Be Improved. GAO-11-156R. Washington, D.C.: February 15, 2011.
Combating Terrorism: Planning and Documentation of U.S. Development Assistance in Pakistan's Federally Administered Tribal Areas Need to Be Improved. GAO-10-289. Washington, D.C.: April 15, 2010.
Afghanistan and Pakistan: Oversight of U.S. Interagency Efforts. GAO-09-1015T. Washington, D.C.: September 9, 2009.
Securing, Stabilizing, and Developing Pakistan's Border Area with Afghanistan: Key Issues for Congressional Oversight. GAO-09-263SP. Washington, D.C.: February 23, 2009.
Combating Terrorism: Increased Oversight and Accountability Needed over Pakistan Reimbursement Claims for Coalition Support Funds. GAO-08-806. Washington, D.C.: June 24, 2008.
Combating Terrorism: U.S. Oversight of Pakistan Reimbursement Claims for Coalition Support Funds. GAO-08-932T. Washington, D.C.: June 24, 2008.
Combating Terrorism: U.S. Efforts to Address the Terrorist Threat in Pakistan's Federally Administered Tribal Areas Require a Comprehensive Plan and Continued Oversight. GAO-08-820T. Washington, D.C.: May 20, 2008.
Preliminary Observations on the Use and Oversight of U.S. Coalition Support Funds Provided to Pakistan. GAO-08-735R. Washington, D.C.: May 6, 2008.
Combating Terrorism: The United States Lacks Comprehensive Plan to Destroy the Terrorist Threat and Close the Safe Haven in Pakistan's Federally Administered Tribal Areas. GAO-08-622. Washington, D.C.: April 17, 2008.
Pakistan is a key U.S. partner in the effort to combat terrorism and violent extremism. In fiscal years 2002 through 2012, Pakistan received more than $26 billion in U.S. funding. To travel to Pakistan to implement and oversee programs, U.S. officials are required to obtain a Pakistani visa and, depending on the length of their stay, may need to apply for a visa extension once in Pakistan. U.S. officials have expressed concerns about delays in obtaining Pakistani visas. Congress has also expressed interest in receiving information on Pakistani visa delays, such as requiring that State and DOD certify information regarding timely issuance of visas to officials before providing or reimbursing certain funding for Pakistan. GAO was asked to review issues related to visa delays. This report examines (1) the extent to which U.S. officials experience delays obtaining Pakistani visas and the effects of these delays and (2) steps U.S. agencies have taken to address Pakistani visa delays. GAO analyzed data on visa wait times, reviewed planning documents, and met with officials from DOD, DHS, DOJ, State, and USAID.

U.S. officials have experienced delays in obtaining Pakistani visas that disrupt the delivery and oversight of U.S. assistance to Pakistan. According to Pakistani Consular Services, and as confirmed by the Department of State (State), the goal of the embassy of Pakistan is to issue visas for U.S. officials within 6 weeks. GAO's analysis of data provided by State, the Departments of Defense (DOD) and Justice (DOJ), and the U.S. Agency for International Development (USAID) found that U.S. officials experience delays in the issuance of both visas to travel to Pakistan and visa extensions. For instance, GAO found that of about 4,000 issued visas, approximately 18 percent took more than 6 weeks, with approximately 3 percent taking 16 weeks or longer. Moreover, of approximately 2,200 visa extensions, about 59 percent took longer than 6 weeks to be issued, with approximately 5 percent taking 16 weeks or longer. U.S. officials stated that they receive little specific information from Pakistan on the reasons for visa delays, but they noted that visa delays disrupt the effective implementation and oversight of U.S. programs and efficient use of resources in Pakistan. Visa delays also have created staffing gaps for critical embassy positions, such as Regional Security Officers and Marine Security Guards, and have necessitated the cancellation of training to assist the Pakistani government in areas such as antiterrorism, counternarcotics, and law enforcement assistance.

Agencies have taken various steps to address Pakistani visa delays, but reporting to Congress does not provide comprehensive information on the risk of visa delays government-wide. The Enhanced Partnership with Pakistan Act of 2009 requires State to identify and report to Congress on a semiannual basis about risks to effective use and oversight of U.S. funds to Pakistan, such as any shortfall in U.S. human resources. In addition, federal standards for internal control state that once agencies identify a risk to their programs, they should collect and analyze information to allow them to develop better approaches to manage it. According to officials, agencies have taken various steps to manage visa delays and their effects.
For instance, State has conducted high-level discussions with the Pakistani government regarding visa delays and has reprogrammed $10 million budgeted for antiterrorism training courses in Pakistan that were canceled because of visa delays toward other priority initiatives. However, GAO found that State's reporting does not include comprehensive information on the risks of visa delays government-wide. State has reported to Congress that visa delays create challenges to the implementation of its programs in Pakistan. However, State's reports do not include information regarding the risks of visa delays to the human resources of other agencies, although components of DOD, DOJ, and USAID told GAO that they experience staffing gaps caused by visa delays. Reporting comprehensive information about the risks of visa delays could provide a more complete picture of the challenges that the United States faces in managing and overseeing U.S. assistance to Pakistan. More comprehensive reporting may also help to better inform any potential diplomatic discussions between the United States and Pakistan regarding visa delays.

GAO recommends that State consult with U.S. agencies engaged in providing assistance to Pakistan to obtain information on visa delays and include this information in its reporting to Congress. State partially concurred, citing challenges with interagency coordination, but noted that GAO's report has prompted State to improve its tracking of visa applications to Pakistan government-wide.
The Creekbed facility was built in 1937 as a German air force hospital. The U.S. military acquired it at the conclusion of World War II and used it as a hospital until the late 1990s. The facility was slated to revert to the German government in 2000. From 2000 to 2001, State conducted discussions with the German government to acquire the property. In July 2002, Creekbed was officially transferred from the German government to the State Department for a cost of $30.3 million. Since July 2002, OBO has been determining which renovations, including security and safety enhancements, will be necessary to prepare the facility to house the U.S. government's Consulate General in Frankfurt. The design and renovation cost for the facility is estimated at $49.8 million, bringing total project costs to an estimated $80.1 million. State estimates that, if Creekbed had not been available, acquiring a site and building a comparable facility to meet U.S. government needs in Frankfurt would have cost roughly $260 million.

The facility consists of 13 major interconnected buildings that will provide 325,000 square feet of usable office space. In addition, an 85,000-square-foot warehouse will be built on the property. The site also contains significant areas of land that can be used for construction and future expansion of operations if necessary. OBO stressed that the renovation will focus on building a perimeter wall, warehouse, and access controls and on performing basic renovation, such as painting and installing upgraded wiring. OBO does not plan to tear down walls, install air conditioning, or do other extensive work. Renovation of the facility is scheduled from September 2003 to March 2005. State projects that by mid-2005, Creekbed will be fully operational.

According to State's business plan to purchase the facility, the Creekbed project had four fundamental objectives. First, the renovated facility would provide secure office space that is a vast improvement over the security afforded by existing facilities in Frankfurt. Second, Creekbed would provide space for operations currently located at the Rhein Main Air Force Base, which the U.S. government has agreed to vacate in 2005 and return to the German government. Third, Creekbed would provide office space for staff currently working at the U.S. embassy in Berlin who will not have space in the new U.S. embassy building that is scheduled for construction. Finally, Creekbed has space to accommodate a number of regional staff from outside Germany who are assigned to embassies and consulates with security vulnerabilities.

In its business plan, State identified several agencies from outside Germany that would be considered for relocation to Frankfurt. According to State, the Consul General in Frankfurt, and officials at each of the agencies in Frankfurt that we visited, Frankfurt is considered a good location as a regional hub because of its location and transportation links. They also noted that many of the offices currently assigned to the U.S. consulate have regional responsibilities. Developing the Frankfurt facility as a regional center is consistent with recommendations of the Overseas Presence Advisory Panel calling for use of regional centers and relocation of personnel to reduce security vulnerabilities at overseas posts. It is also consistent with a rightsizing framework we developed to support decision-making on overseas staffing.
The framework encourages decisions to be based on a full consideration of the security, mission, and cost factors associated with each agency's presence and outlines rightsizing options, including regionalization of operations. OMB also cited this project as allowing U.S. agencies to consolidate in one central location appropriate administrative functions now performed at multiple posts around Europe and beyond. Furthermore, the House Conference Report for the Consolidated Appropriations Resolution 2003 stated that the conferees support "the Department's effort to initiate a consolidation, streamlining and regionalization of country and multi-regional staffing in Frankfurt, Germany." The report also said, "The success of this initiative will be measured largely by the staffing reductions made possible at less secure locations throughout Germany, Europe, Eurasia, Africa and the Near East."

State indicated it has renewed its efforts to identify staff from posts outside Germany who could be relocated to the new Frankfurt regional center. According to State, this process will consider rightsizing factors such as security, mission requirements, and costs, as well as possible changes in functions that would make operations more efficient. State's earlier efforts were prematurely halted in August/September 2002 because staffing planners mistakenly interpreted space planning estimates as indicating the regional center would be fully occupied. However, in May 2003, we analyzed State's staffing requirements for Creekbed in relation to the facility's capacity and found additional space was available. We briefed both State and OMB officials on the capacity issue. OMB urged State to reopen the staffing process and to consider relocating more regional staff to Frankfurt.

In May 2003, State announced that it had restarted a process to identify staff from posts outside Germany who could be relocated to take advantage of Creekbed's available office space and enhanced security. State is reassessing the facility's space plans and staffing projections for all agencies and is focusing on identifying which additional regional activities might be moved to the Frankfurt center, especially where this action would improve security for U.S. government personnel. State also indicated that it would pursue a rigorous rightsizing and regionalization strategy in staffing the Frankfurt facility. State has said that under its new effort, it will analyze security, mission, and cost factors associated with each agency's regional operations at posts in Europe, Eurasia, Africa, and the Near East. On June 12, 2003, State sent formal guidance to the ambassadors at each post, directing them to identify staff who might transfer to the regional center in Frankfurt. To help the posts identify positions for relocation, State plans to conduct a detailed, Web-based survey based on our rightsizing framework. State plans to have revised staffing estimates for Frankfurt at the end of 2003.

The Frankfurt facility will have a capacity of about 1,100 desk positions. The facility will have sufficient space to consolidate existing diplomatic operations in Frankfurt as well as bring in significant numbers of personnel from posts outside Germany to expand regional operations. Positions currently in Germany envisioned to relocate to the Frankfurt regional center include a total of about 900 personnel from the current Frankfurt consulate, offices at the Rhein Main Air Force Base, and the embassy in Berlin.
Based on current capacity estimates, there is also desk space for about 200 staff who could be relocated from other posts. To help address staffing decisions, State also plans to undertake what it characterizes as a "think outside the box" exercise by asking embassies to examine whether any functions in Europe or elsewhere can be reengineered to be more effective. Our rightsizing framework encourages decision makers to consider reengineering actions such as competitively sourcing support functions, regionalizing contract activities, and centralizing warehouse operations. This kind of reengineering, which could help reduce the costs of support functions and staffing requirements for embassies, should be weighed along with the options for relocating staff to regional centers.

Although State has renewed its process for staffing Creekbed, its comments on a draft of this report lead us to question State's commitment to the process. State's comments and our evaluation of them are discussed in more detail later in this report. Although substantial space exists for relocating staff from other posts, State documents indicate that the department may encounter some resistance among agencies identified to relocate. While some agencies and offices agree that relocation would improve their security, State anticipates that they will raise concerns about their relative ability to effectively carry out their mission from Frankfurt, the cost of relocating staff from other locations, the convenience of airline connections, and costs related to living and operating out of Germany. These issues indicate that State and other agencies will have to carefully weigh the security, mission, and cost trade-offs associated with staffing relocation decisions. In some cases, security issues may be so compelling that some staff will have to be relocated.

From September 2001 to August 2002, State tried to identify positions with regional responsibilities that could be relocated to Creekbed. Although State initially identified potential positions, State halted its efforts in August/September 2002. In September 2001, State initiated discussions with key agencies operating at its European posts and asked them to consider relocating to Frankfurt if it would be substantially more secure than their current facilities. This process was more formally articulated in a March 2002 State cable to 48 European and Eurasian posts having regional coverage, asking ambassadors to review their staffing with an eye toward relocating to Frankfurt staff whose primary responsibilities were regional. Although many of the posts were slow to respond, some listed possible candidates for relocation. For example, one post identified three agencies with a combined total of more than 50 staff members who the ambassador believed should be considered for relocation.

Although this effort initially identified positions for possible relocation, it was halted when planners in State's Bureau of European and Eurasian Affairs received a document from OBO in August 2002 stating that "the facility is at 100% occupancy" based on a projected staffing level of about 900 desks. OBO later explained that this document meant that the facility was filled to the requirements level of 900 positions but did not mean the facility was filled to capacity. OBO acknowledged that the wording of the document was confusing.
However, State officials told us that based on that document, the department concluded there would be no additional room in the facility for staff beyond the 900-desk staffing level. (The 900-desk projection only included staff currently in the Frankfurt consulate offices, staff currently at the Rhein Main Air Force Base, newly created staff positions, and staff "overflow" from the U.S. embassy in Berlin, Germany.) As a consequence, in August/September 2002, State stopped its efforts to relocate staff from posts outside Germany. For example, in September 2002, State's Under Secretary for Management sent a letter to the U.S. Agency for International Development, one of the key agencies initially identified by State as having staff potentially available for relocation from outside Germany, indicating that the Frankfurt facility would be fully occupied.

Beginning in March 2003, we performed a detailed analysis of State's staffing requirements for Creekbed in relation to the facility's capacity. We found that the facility had substantial additional capacity beyond the 900-desk level, affording opportunity for the relocation of personnel from posts outside Germany. Before visiting the Frankfurt facility in early May 2003, we interviewed the private contractor officials responsible for the space planning and concept design for Creekbed, who confirmed that there was space available for additional staff. While at the facility, we examined space allotted for two agencies and found the space significantly exceeded the number of positions slated to fill it. For example, one agency projected 28 office personnel for the facility but was allotted space for about 38 offices. Another agency also projected 28 office personnel but was allotted space for about 50 offices.

In addition, we found that there was potentially more office space available at Creekbed because some agencies did not conduct a rigorous staffing process before submitting their staff projections. During our fieldwork in Frankfurt, we reviewed the documented 2002 staffing projections with the agencies in Frankfurt that will be moving into Creekbed and found that some agencies disputed their earlier projections. Some agencies had overestimated their individual staffing requirements, which were eventually curtailed by their headquarters in Washington, D.C. We have previously reported that U.S. agencies do not take a systematic approach to determining long-term staffing needs for embassy buildings scheduled for construction. We discussed these issues with the Consul General and the facility manager in Frankfurt, who agreed that the facility had substantial space to accommodate staff from other posts.

When we completed our fieldwork in May 2003, we also discussed our observations with officials in State's Bureau of European and Eurasian Affairs, the Office of Management Policy, and OBO, and with OMB. They, too, agreed that there was additional space. State then announced that it was renewing its efforts to regionalize operations in Frankfurt. In a May 2003 letter to OMB, State's Under Secretary for Management said that the department was reopening the space plan for the facility and anticipated that Creekbed would accommodate significant additional positions. State indicated that it took this action because OMB urged it to do so. In a June 2003 cable to all posts, State said that it is considering which additional activities might be relocated to Creekbed. State emphasized that its renewed effort is part of its overall rightsizing strategy.
Successful staffing of the Frankfurt facility consistent with State's regionalization goals is a critical step in efforts to rightsize U.S. overseas operations. In fact, it may be the single most visible and concrete example of a rightsizing initiative by the U.S. government in the near term. We believe that the revised staffing plans for Creekbed will provide State a significant opportunity to work with other agencies to regionalize diplomatic operations in Europe and develop a more rational, secure, and cost-effective overseas presence. The facility has ample, available office and other space that, when fully renovated, will provide a secure alternative location for conducting regional operations at embassies and consulates with physical security deficiencies. Deciding which U.S. government positions will be relocated to the facility will require a careful consideration of the security, mission, and cost factors associated with agencies' presence at individual posts. In some situations, State may encounter agency resistance to relocation. However, security considerations may be so compelling that relocation of certain staff may be necessary. In other cases, State and other agencies will have to work hard to reach agreement on the relative importance of the security, mission, and cost factors associated with the relocation decision and how the factors should be weighed. More importantly, it will require a strong and continual commitment by State to the broader objective of rightsizing the U.S. overseas presence.

OMB and the Department of State provided written comments on a draft of this report (see apps. I and II). OMB said that it is working closely with State to develop a plan of action to appropriately staff the new facility, to assess whether staff could be shifted from their current overseas locations to Frankfurt, and to discuss potential moves to Frankfurt with headquarters staff at all agencies. OMB also expressed the hope that this facility will serve as an example of a best practice for the development of other regional centers around the world.

State said that OBO's estimate that the facility could accommodate about 1,100 desk positions represented a maximum theoretical capacity and that the actual capacity would probably be less. We subsequently asked OBO, which is State's expert on overseas real estate and facility issues, whether it was confident of its capacity estimate. OBO reiterated its estimate, stating that it has identified space in the facility for about 1,100 personnel. However, even if the capacity of the facility were slightly less, there would still be ample room to accommodate some staff currently assigned to other locations outside Germany.

State also noted that our report did not identify specific agencies or staff that we believe should be relocated to Frankfurt. State said this suggested that we do not believe that there are suitable candidates for relocation. This is not the case. As we noted in this report, State's business plan for the purchase of the facility indicated it has space to accommodate regional staff from outside Germany who are assigned to embassies with security vulnerabilities. Moreover, State's plan identified 73 staff from five agencies at posts outside Germany for potential relocation. As further noted in this report, State's subsequent efforts at its European and Eurasian posts identified suitable candidates for relocation, but that exercise was halted because State mistakenly believed that the facility did not have sufficient space.
Our work at the four posts outside Germany validated the existence of significant numbers of staff with regional responsibilities, many of whom were located in buildings with substandard security. We did not identify specific candidates for relocation in this report because State said that it was conducting a full assessment of staffing options for Frankfurt, and we did not want to preempt that assessment. However, in our briefings with State and OMB officials, we discussed our fieldwork observations and told them that there were many staff who could be considered for relocation. For example, there were at least 87 staff with regional responsibilities in Vienna and Budapest who were assigned to space with substandard security. Furthermore, we noted that in 2002, we had identified regional positions in Paris that could be considered for relocation to Frankfurt based on security, mission, and/or cost factors.

State also said that it believes, based on its follow-up to the 1999 Overseas Presence Advisory Panel report, that the U.S. government's overseas presence is already rightsized. We have previously pointed out the substantial weaknesses in the pilot studies that provided the basis for State's follow-up. State subsequently indicated that it intended to reinvigorate the rightsizing process consistent with the President's Management Agenda, OMB's directives, and our rightsizing framework.

In our view, State's comments are inconsistent with its (1) stated expectations that the Frankfurt project will achieve the department's key rightsizing and regionalization goals and (2) plans to conduct a full assessment of staffing options for the Frankfurt regional center. In addition, State's comments lead us to question whether the department seriously intends to implement its business plan for the Frankfurt center regarding relocating regional staff, as well as its commitment to the overall rightsizing process. We believe that State's actions regarding staffing of the facility warrant oversight. State also provided technical comments that we have incorporated into this report, as appropriate.

In view of State's comments on a draft of this report and the continued importance of rightsizing the overseas U.S. presence consistent with security, mission, and cost factors, the Congress may wish to direct the Secretary of State to submit a detailed staffing plan for the Frankfurt facility that specifically lists positions to be relocated to Frankfurt.

To determine State's process for creating staffing projections for the Frankfurt regional center, we reviewed documents and interviewed officials in State's Bureau of European and Eurasian Affairs, OBO, and the Office of Management Policy. We visited the current consulate facilities in Frankfurt and spoke with the Consul General and appropriate State officers about the current security status of their consulate buildings as well as the multiple projections of staff relocating to the facility. We spoke to representatives from agencies that will be moving to the Creekbed facility. We also toured the facilities at the Rhein Main Air Force Base that are scheduled to be relocated by June 2005 as well as the currently empty Frankfurt regional center facility.
In addition, we visited other posts in Europe—Paris, Rome, Budapest, and Vienna—to determine (1) the extent to which each has agencies and personnel performing regional functions that could be considered for relocation to Frankfurt based on the nature of their mission and/or their security vulnerability and (2) what actions these embassies had taken to identify staff who could be considered for relocation to the Frankfurt facility. Specifically, at these posts, we interviewed not only the agencies that were earlier identified by State or by their ambassadors as potential relocatees, but also officials from other agencies with regional responsibilities.

To determine the facility's capacity to accommodate staff from outside Germany, we interviewed the private contractor officials in Albany, New York, responsible for the initial feasibility design to discuss their space planning and concept design for the Frankfurt center. We also compared OBO's capacity estimates with staffing requirements for the facility. In addition, during our visit to Creekbed, we compared the size of office space allocated to two different agencies in Frankfurt with the number of people in those agencies. We also met with officials in OMB to obtain documentation on the plans for purchasing the facility and to discuss State's approach to staffing it. We conducted our work from February 2003 through August 2003 in accordance with generally accepted government auditing standards.

We are sending copies of this report to the Director of OMB and the Secretary of State. We are also sending copies of this report to other interested Members of Congress. Copies will be made available to others upon request. This report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4128. John Brummet, Janey Cohen, Lynn Moore, Ann M. Ulrich, and Joseph Zamoyta made key contributions to this report.
The State Department plans to spend at least $80 million to purchase and renovate a multibuilding facility in Frankfurt, Germany. The facility, known as Creekbed, is scheduled to open in mid-2005. The project is a key rightsizing initiative under the President's Management Agenda to reassess and reconfigure the staffing of the U.S. overseas presence. Creekbed is expected to achieve the department's major rightsizing and regionalization goals. The Office of Management and Budget expects the project to serve as a model for developing other regional centers. GAO was asked to determine whether State fully examined the potential for relocating regional staff from outside Germany to Creekbed. The Department of State indicated it is currently renewing earlier efforts to relocate staff from outside Germany to the new Frankfurt regional center. State said it would pursue a rigorous rightsizing and regionalization strategy in staffing the Frankfurt facility. State prematurely stopped its earlier efforts to relocate regional staff from other posts in August/September 2002 because staffing planners interpreted space planning estimates as indicating that the regional center would be fully occupied. However, according to GAO analysis, the facility was not full and significant additional space existed. After touring the facility and studying staffing requirements and space allocated for specific agencies, GAO found there was space available for additional staff. Successfully staffing the Frankfurt regional facility has the potential to optimize its use and achieve broader regionalization objectives.
Export promotion activities include efforts to raise awareness about exporting and to provide businesses with export counseling, training, and information on market opportunities; help connecting with potential buyers abroad; and help obtaining financing. Responsibility for export promotion is widely dispersed. Many federal and state agencies operate a wide variety of programs across the country and overseas that are intended, at least in part, to assist U.S. companies in entering foreign markets, or in expanding their existing presence in markets abroad. Some of the 20 TPCC member agencies directly assist small businesses to export overseas, including Commerce, SBA, the Export-Import Bank, the Departments of State and Agriculture, and the U.S. Trade Development Agency. This review of export promotion efforts focuses solely on Commerce and SBA because their activities in this area are similar to those of state governments.

The TPCC Secretariat is housed in Commerce's ITA and takes the lead in coordinating federal export promotion activities and implementing the NEI through an annual National Export Strategy. The ITA also manages CS as part of its Global Markets unit. In most states, CS is the primary government entity providing federal export promotion services to non-agricultural businesses. Commerce's mission includes strengthening the international economic position of the United States by facilitating global trade and opening up new markets for U.S. goods and services. CS's network of domestic and international trade professionals seeks to increase exports of goods and services from the United States. While CS's mission identifies small businesses as a particular focus of its export promotion efforts, CS assists companies of all sizes.

SBA's key roles in export promotion are to conduct outreach and provide training, counseling, and export financing for small businesses. Within SBA, the Office of International Trade has primary responsibility for export promotion efforts. The Office of International Trade field staff are mainly responsible for providing outreach, training, and technical assistance on SBA's export finance programs. In addition, one person in each of SBA's 68 District Offices is designated as a District International Trade Officer and provides basic export assistance as a collateral duty. The District International Trade Officers are managed by a separate SBA office, SBA's Office of Field Operations. SBA partially funds SBDCs, which are located primarily at colleges and universities as a cooperative effort among SBA, the academic community, the private sector, and state and local governments. Our January 2013 report on the SBA's role in export promotion noted that in addition to providing general business services, SBDCs may help businesses interested in exporting, particularly those that are new to exporting and need assistance with preparing their business to export. The report also noted that while most SBDCs provide export assistance as one of many business-development services, some SBDCs meet SBA's criteria to be designated as International Trade Centers that focus primarily on providing export assistance to businesses.

Every state government conducts some export promotion activities. State-level trade functions can be housed in various state government entities, including governors' offices, state departments of commerce, and state departments of economic development.
State trade offices often have both domestic and international staff; while domestic staff generally are state employees, international staff may work directly for the state trade office or may work as contractors. The federal government and state trade offices across the country work to promote exports, offering similar services to similar types of clients, primarily small businesses. When government agencies and state trade offices have similar goals, activities, strategies, or beneficiaries, their programs may overlap. This overlap may be beneficial for clients if federal and state offices collaborate to better address the demand for export promotion services. However, gaps in collaboration may result in an inefficient use of the limited resources dedicated to export promotion.

We found that although federal agencies and state trade offices overlap to varying degrees in their provision of export promotion services, their staffing levels, resource levels, and metrics for measuring program effectiveness often differ. Furthermore, we found that although increased collaboration may increase the efficiency and effectiveness of overlapping export promotion efforts, the extent of collaboration between federal agencies and individual state trade offices varies widely.

In seeking to promote exports, the federal government and state trade offices provide services that overlap to varying degrees. By definition, overlap occurs when programs have similar goals, engage in similar activities or strategies to achieve them, or target similar beneficiaries. Officials at both U.S. Export Assistance Centers (USEACs) and state trade offices that we visited told us that they are operating at capacity in terms of resources to meet the demand for their export promotion services. Consequently, officials at two USEACs and two state trade offices that we visited viewed the provision of similar services by different organizations as positive for beneficiaries of these services because the overlap meant that more resources were available to companies looking for help exporting. As we previously concluded, however, without enhanced collaboration, overlap may have a negative effect in that limited resources may not be used in the most effective and efficient manner. Furthermore, small business beneficiaries could be confused about how to access available services and unclear about who could best meet their needs.

Federal and state efforts to provide export promotion services share the same goal: to strengthen the economy and create jobs. The President's National Export Initiative (NEI) articulates this goal in very specific terms, aiming to double U.S. exports (based on 2009 export levels) by the end of 2014. Export promotion is also a component of many state strategies for economic development and job creation. U.S. exports have increased substantially since 2009, reaching record levels in 2013; nevertheless, they have fallen short of the levels needed to attain the NEI goal. Figure 1 illustrates progress toward this goal.

Commerce and SBA provide some of the same types of export promotion services, such as outreach, counseling and training, and trade leads, as most states do through their state trade offices. For example, as shown in table 1, we found this to be true in the five states we visited. In prior work, we established the following definitions for the four types of export promotion services shown in table 1:
Outreach. Outreach can include any activity in which agencies generally increase public awareness about exporting, including seeking to inform businesses or other partners about the export promotion services offered by the federal government. Both Commerce and SBA conduct outreach to businesses. For example, outreach includes Commerce staff attending a trade show and counseling clients. Similarly, Commerce or SBA staff may conduct outreach at seminars and training sessions, where they meet businesses to identify potential clients interested in exporting.

Counseling and training. Counseling is specific to the needs of each business and can cover a variety of topics relating to international trade and exporting, such as helping a business identify a target export market or discussing logistics for shipping exported goods. Commerce often refers businesses new to exporting to the SBDCs for export training. SBA also provides counseling and training, primarily through SBDCs.

Trade leads. Trade leads, also known as matchmaking, refers to the process of connecting businesses with overseas buyers. Commerce leverages its international network of Commercial Officers to help U.S. businesses set up appointments with potential buyers abroad and to notify businesses when foreign buyers are looking to import U.S. goods. Commerce also organizes trade missions for U.S. firms that want to explore and pursue export opportunities by meeting directly with potential clients in their markets. Typically, SBA does not provide trade leads but does host or co-sponsor events that link businesses to export management companies—intermediaries that represent a company's product overseas and reduce some of the risks associated with exports by managing the logistics of the process.

Financing. SBA assists small businesses with export financing through its Office of International Trade. Of the states we visited, Florida is the only state that offers export financing; state trade offices generally refer interested clients to SBA and the Export-Import Bank for export financing help. While Commerce is not involved in export financing, its Commercial Officers at USEACs often refer interested clients to SBA and the Export-Import Bank.

Like the state trade offices in the five states we visited, most of the 28 state trade offices that responded to a 2013 SIDO survey also provide export promotion outreach, counseling and training, and trade leads. While not all states provide all services, the survey results showed that state trade offices provide a range of services such as outreach, which can include providing marketing materials; counseling and training, which can include export counseling and export readiness training; and trade leads, which may result from participation in or coordination of trade shows. In our fieldwork, we found that the depth and breadth of services state trade offices provide vary depending on the availability of resources such as funding, staff, and overseas capacity. For example, the state of Florida's program provides export counseling, educational events, and international trade leads, including international trade missions and trade shows.
Similarly, the Minnesota Trade Office offers trade mission organization, export counseling, and training programs on the export process, including creating an export plan, mitigating risk, and understanding trade regulations. Virginia also offers a number of export promotion services, including counseling; training; trade missions; an array of services such as in-country market research provided by an extensive network of international consultants; and a 2-year business acceleration program for selected companies that aims to increase their export sales. While Commerce charges fees for certain services, including those that provide trade leads, some states provide their services for free, subsidize businesses' participation in trade missions and trade shows, or both. For example, the Philadelphia USEAC operated by Commerce provides export counseling, market research, and trade events for a fee. However, the Pennsylvania state trade office provides those same services, in addition to market-entry strategy development and technical support, at no cost to businesses; moreover, the state trade office awards some grants to businesses to pay for or partially subsidize some export-related activities. Because Commerce charges fees, businesses in some states sometimes opt to seek the same or similar services from the state trade office rather than from Commerce, according to one state trade office official. Both Commerce officials and a state official from one of the states that we visited noted that perceived quality of service also influences businesses' decisions in selecting a provider of export promotion services. In addition to providing similar services, the federal agencies offering export promotion services at the state level—Commerce and SBA—and most state trade offices primarily serve small businesses. According to Commerce, the majority of its customers are small businesses, although firms of any size may request its services. SBA, by law, offers services exclusively to small businesses. In 2012, 97 percent of the over 301,000 identified U.S. exporters were small and medium-sized companies. In the 2013 SIDO survey, most of the 22 state trade offices that responded reported that small businesses were their primary clients. Four of the five state trade offices that we visited told us they work primarily with small businesses, although three said they also serve some larger companies. In most states, federal agencies and state trade offices have different levels of domestic staff resources available for providing export-related assistance to companies. As of February 2014, there were 265 domestic Commerce CS staff in the 108 USEACs located in every state with the exception of Delaware and Wyoming. SBA also has staff providing export services in every state and partially funds more than 900 SBDCs across the country. Most states have trade offices that also provide export-related assistance to businesses. One state trade office reported only 2 staff, while another reported having 28 staff dedicated to international trade, according to the 2013 SIDO survey. As shown in figure 2, for the five state trade offices that we visited, staff numbers ranged from 5 full-time employees statewide to 30; the state export promotion employees in each state outnumbered the Commerce USEAC export promotion employees. Commerce has greater representation abroad than state trade offices, with 930 staff in offices in 72 countries.
In 15 of these countries, no state trade office maintains its own representation. An official from one state explained that, in serving the needs of small businesses, Commerce's international presence is more important to state trade offices than Commerce's domestic offices, because there is less overlap between states and the federal government overseas and because a number of states have no presence overseas and rely entirely on Commerce for those services. Nevertheless, many state trade offices have their own international representatives who provide businesses from those states with services similar to those offered by Commerce's Foreign Commercial Officers. This representation is provided by state employees, contractors hired as service providers, or both, working either from offices located in-country or on a regional basis from offices outside the country. While 38 state trade offices collectively have representation in 83 countries abroad, the number of countries in which individual state trade offices have representation varies. For example, Pennsylvania operates abroad relatively independently from Commerce through its own network of overseas contractors located in 22 countries, from which they serve 73 countries. Similarly, Virginia uses a combination of its own staff (in 2 countries) and overseas contractors, a network that reaches 57 countries. Conversely, Oregon relies almost exclusively on its own staff and contractors in the 4 countries where it has representation, but it relies heavily on Commerce in the countries where it has no formal representation. Figure 3 shows the top 10 countries in terms of state trade office representation, with China, Japan, and Mexico having the most. See appendix II for more details on Commerce's and the states' export promotion representation overseas. In fiscal year 2013, Commerce's ITA dedicated $267.5 million of its total budget to export promotion. SBA's Office of International Trade, which provides export financing and promotion services to small businesses, had fiscal year 2013 total program costs of approximately $9.8 million. Additionally, SBA's export-related loans amounted to approximately $1.2 billion in fiscal year 2013. The total amount of money spent on federal export promotion is unclear because comparable budget information for federal agencies involved in export promotion is not readily available, as we found in July 2013. State governments vary greatly in the budgetary resources they provide for export promotion. According to the 2013 SIDO survey, among the 14 state trade offices that responded to this question, annual office budgets ranged from $80,000 to $6.2 million, with 90 percent of the 14 states having budgets ranging from $420,000 to $1.75 million. The 5 state trade offices we visited had budgets for export promotion that ranged from $1.9 million to over $5 million annually for fiscal year 2013. In the 2013 SIDO survey, over half of the 23 states that responded to this question reported no significant changes in their overall budgets from the previous year. Organizations define performance metrics to reflect priorities, measure achievement, and create incentives for management and staff. Measuring performance allows organizations to track the progress they are making toward their goals and gives managers crucial information on which to base their organizational and management decisions.
Commerce primarily uses "export successes" to measure performance. When any Commerce trade-promotion activity successfully assists a U.S. company to export a product or service, Commerce staff document an export success that is verified by Commerce management. The total dollar value of Commerce export successes in each state varies greatly, with, for example, export successes from Alaska totaling $370,000 in 2012 and export successes from Washington exceeding $10 billion for that same year (see fig. 4). In the five states we visited, we found that the extent of collaboration between federal and state trade offices ranged from no collaboration in one state, to very little collaboration in another, to close collaboration between federal and state export promotion providers in three states. According to federal and state officials with whom we spoke, factors affecting their level of collaboration included physical proximity, level of communication, and resource levels. In addition, we note that state offices report to their own executive and legislative bodies, and state trade offices determine their own priorities and are not obligated to collaborate with federal agencies. States that do choose to collaborate with federal partners in trade promotion primarily interact with Commerce but also collaborate with SBA, both to varying degrees. Among the five states we visited, we found little to no collaboration with federal agencies in two states. One state Director described the state's relationship with Commerce export promotion staff in his state as cordial but involving virtually no interaction. In another state, we found minimal interaction between the state trade office and the state's USEACs. In both states, local federal officials agreed that they had a limited relationship with their state counterparts. Both these state trade offices are well-funded and provide export assistance services through their networks of in-state trade offices and extensive networks of international offices staffed by contracted service providers. The Directors of both state trade offices noted that in most cases they opt not to work with Commerce because their experience has been that Commerce response times are slow and the quality of its service is inconsistent. For example, one state Director said that his state provides export promotion assistance faster, better, and cheaper than Commerce. However, both states collaborate with Commerce overseas when their staff or contractors are not located in a country or are unable to meet the demand for services from exporters. Both states also minimally collaborate with SBA; one state refers new-to-export companies to SBDCs for training, while the other state refers companies that need export financing to SBA. In contrast, we found that staff in three of the five state trade offices we visited work closely with Commerce. Officials from these state trade offices explained that the services their offices provided complement Commerce's services, and they cited colocation or proximity of offices and good communication as factors that facilitate collaboration. According to Commerce's client management system, for fiscal year 2012, state trade offices or other state government offices or agencies were among Commerce's most frequent partners in achieving export successes.
In one state, Commerce and state officials characterized their working relationship as seamless and their services and activities as complementary. For example, the state organizes trade missions while Commerce helps with recruiting companies and arranges one-on-one meetings with prospective buyers. Similarly, officials told us that in some cases the state trade office provides funding to businesses that is used to purchase Commerce's services. This state trade office and Commerce also occasionally provide joint counseling to some companies, according to state trade office officials. In another state trade office, located in the same building as Commerce, the Director believed that a primary factor encouraging collaboration was that both offices had small staffs whose members knew each other well. According to state trade office officials, this state's trade staff and Commerce's staff provide joint counseling services and promote one another's services to businesses. This state trade office also collaborates with Commerce and SBA to deliver export-focused training intended to provide nonexporting companies with introductory information on the exporting process and available federal resources. A third state trade office described its relationship with Commerce as complementary and noted that the organizations collaborate on joint trade missions and activities that bring together businesses interested in exporting with potential buyers. State trade office officials noted that Commerce relies on the state for its export education programs and that the state works mostly with new-to-export companies. This state trade office refers to SBDCs only those companies that require very basic assistance, such as basic accounting information. The Director of the state trade office commented that any duplication in the services provided by Commerce and her office would not be a concern because the demand for export assistance in the state was so great. In two states we visited, SBA grants to state trade offices through the State Trade and Export Promotion (STEP) program have facilitated collaboration with Commerce and SBA. In those states, for instance, Commerce refers its clients to state trade offices for STEP funding. The state trade offices reported using STEP funds to expand their services or replace lost state funding and said the funds enabled them to pay for export-related activities such as attendance at trade shows and purchases of Commerce services. For example, Minnesota's Trade Office was using STEP funds to provide matching funds to Minnesota businesses for export-related activities and expenses, including trade missions. As another example, in partnering with a local SBDC, Florida's trade office was using STEP funds to help new-to-export businesses create strategic exporting plans. Three TPCC initiatives utilize networks of state and local governments and other partners to advance federal-state collaboration in promoting U.S. exports. These efforts have produced limited results, however, in part because they have not consistently implemented key collaboration practices. In prior work, we found that collaboration is enhanced when collaborating partners follow certain key practices, such as articulating common outcomes; agreeing on roles and responsibilities; monitoring, evaluating, and reporting on results; and leveraging resources. In states we visited, we found weaknesses in the implementation of Export Outreach Teams, a TPCC initiative that was to be co-led by Commerce and SBA.
Similarly, we found that TPCC's involvement in a Brookings Institution effort to engage metropolitan area economic development groups in export promotion has unknown implications for federal export promotion efforts and resources. Finally, an agreement between Commerce (the TPCC Chair) and a national group representing state trade offices expired without achieving its collaboration objectives and without Commerce addressing ways to share, within legal restrictions, more information about its clients with state trade offices. In prior work, we found that federal agencies face a range of challenges and barriers when they attempt to work collaboratively. Such challenges and barriers also exist—and sometimes to an even greater degree—when federal agencies partner with state and local governments, nonprofit organizations, and the private sector. However, our prior work has also identified practices that can help to enhance and sustain collaboration among agencies and thereby maximize performance and results, and we have recommended that agencies follow them. Among the key collaboration practices we have identified and recommended in prior work are the following:

Define and articulate a common outcome.

Identify and address needs by leveraging resources.

Develop a mechanism to monitor, evaluate, and report on results.

Agree on roles and responsibilities.

We have also found that there are key issues to consider when implementing collaborative mechanisms. These include leveraging resources, such as human, information technology, physical, and financial resources, to sustain a collaborative effort, and composing written guidance and agreements that document how the parties will collaborate, along with a way to update the guidance and agreements as needed. The enhanced federal-state collaboration in export promotion envisioned in the NEI calls for the implementation of some of these key collaboration practices. In its 2010 report to the President on the NEI, the Export Promotion Cabinet identified improved coordination with state government trade offices as a priority and called for federal agencies to make joint planning the standard procedure in all states that have export promotion programs. The Export Promotion Cabinet also stated that interagency coordination could be strengthened and that federal-state partnership could be particularly helpful in identifying potential exporters to meet the NEI goal. In addition, the subsequent National Export Strategies highlighted collaboration with state and local governments as a priority. In support of the NEI, the TPCC's two most recent National Export Strategies made enhancing federal-state collaboration on export promotion a priority. The 2011 and 2012 strategies called for increasing collaboration with state export promotion programs and metropolitan areas. Accordingly, the TPCC identified three initiatives intended to increase U.S. exports under the NEI and enhance collaboration between federal and state efforts:
(1) Export Outreach Teams, created to increase local awareness of export promotion resources and facilitate collaboration among local networks of export promotion service providers; (2) collaboration with the Brookings Institution to support its Global Cities Exchange (initially referred to as the Metropolitan Export Initiative), as part of TPCC efforts to expand export promotion activities in metropolitan areas; and (3) a memorandum of intent (MOI) with a national organization that represents state trade offices from across the country, as part of TPCC efforts to work with national organizations with common interests. The TPCC highlighted the first two initiatives in its 2011 and 2012 strategies, and TPCC officials told us that the MOI was important to federal-state collaboration. The TPCC promoted Export Outreach Teams as a mechanism for bringing together the various organizations that provide export promotion services in a locale to share information with small business service providers. The Export Promotion Cabinet's 2010 report to the President on the NEI and the TPCC's 2011 and 2012 National Export Strategies discuss the role of these teams in leveraging state and local resources to help small businesses export successfully. Export Outreach Teams were originally a pilot program launched by SBA in 2010 as a way to inform small business service providers about locally available export resources provided by the various federal agencies, state trade offices, and their partner organizations. Small business service providers include Small Business Development Centers, Women's Business Centers, SCORE offices, local economic development agencies, trade associations, and chambers of commerce. In 2012, the Export Promotion Cabinet and TPCC issued an interagency communiqué that expanded SBA's Export Outreach Teams across the country. In keeping with a key collaboration practice described above, the communiqué outlined two main purposes, or common outcomes, for the Export Outreach Teams: to increase local small business service providers' awareness of available international trade expertise and to enhance communication and collaboration. According to the interagency communiqué, Export Outreach Teams were to serve as a forum for collaboration by including export promotion agencies from all levels of government—federal, state, and local. As a first step, the communiqué called for team members, including staff from Commerce's CS, SBA, SBDCs, state trade offices, and small business service providers, to participate in workshops with activities meant to advance the program's objectives. One of those activities was the formation of referral protocols that would help to establish roles and responsibilities and to leverage available resources. These referral protocols were intended to help ensure that local businesses could find the right export promotion service provider for their particular needs. The interagency communiqué stated that Export Outreach Teams were to hold a workshop by September 30, 2013, and that the teams should convene quarterly thereafter. Figure 5 shows the expected outcomes, suggested membership, and planned activities of Export Outreach Teams.
The interagency communiqué called for both Commerce and SBA to be co-managers of the Export Outreach Teams, but SBA, whose role in federal export promotion efforts has grown in recent years, took the lead in implementing the initiative in its 68 SBA districts across the country. SBA provided detailed guidance to each District Office on how to conduct the initial team workshops described in the communiqué. SBA tracked the dates and numbers of attendees of the Export Outreach Teams' initial workshops through 2013 and reported that all but one district held its initial workshop by the September 30, 2013, deadline. In February 2014, SBA distributed to the District Offices a separate document to track the Export Outreach Teams' quarterly meetings; that document requested information from District Offices about the agencies represented, the activities planned, and the topics covered at the meetings. Despite TPCC and SBA efforts to provide guidance on achieving the objectives of the Export Outreach Teams, we found weaknesses in their implementation that limit their ability to enhance federal-state collaboration. First, the team memberships and activities were inconsistent with the guidance. In some instances, not all relevant participants were present at the workshop, while in other instances, the events reported as Export Outreach Team workshops did not have the purpose of facilitating interagency collaboration. For example, in one location we visited, state trade office officials were not aware that the local Export Outreach Team existed, and at another site, the Export Outreach Team had not included state trade office representatives in its initial workshop. According to an SBA document that tracked the attendees at the first meetings of all Export Outreach Teams, the participants in the meetings varied widely. Teams reported from 4 to 220 attendees at their initial workshops, and some meetings reported as Export Outreach Team events were larger conferences, presentations to small businesses, or other events not intended to serve as a forum for interagency and intergovernmental coordination. Officials at two locations we visited reported that presentations about SBA's export financing programs were Export Outreach Team workshops, and in one location we visited, the Export Outreach Team workshop was in fact a presentation to private companies, not the intended meeting to coordinate local export promotion resources. Second, we identified and examined two referral protocols developed by Export Outreach Teams; both lacked important details that would facilitate the referral process and improve communication and collaboration. Specifically, the protocols were not consistent with the key collaboration practice of agreeing on roles and responsibilities: they lacked information about federal, state, and local agency responsibilities and about the circumstances under which a business would be referred from one agency to another. Furthermore, SBA officials did not emphasize the importance of the referral protocols, stating instead that they were a desirable but not a necessary outcome of the teams. Referral protocols with clear roles and responsibilities can help federal and state officials determine when to refer businesses to each other for assistance and can help avoid delays and confusion on the part of businesses seeking export assistance.
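The monitoring implied by the communiqué's expectations, namely a timely initial workshop, quarterly meetings thereafter, and participation by key partners such as state trade offices, lends itself to simple automated checks. The following sketch illustrates what such checks could look like; it is purely illustrative, the data structure and organization labels are hypothetical, and the 120-day threshold is our assumption for "quarterly," not drawn from any actual SBA tracking system.

```python
from dataclasses import dataclass
from datetime import date

# The deadline comes from the interagency communiqué; the partner set reflects
# the communiqué's suggested membership, with labels that are hypothetical.
WORKSHOP_DEADLINE = date(2013, 9, 30)
KEY_PARTNERS = {"CS", "SBA", "SBDC", "state trade office"}

@dataclass
class TeamMeeting:
    district: str
    held_on: date
    attendees: set  # organizations represented at the meeting

def check_team(meetings):
    """Flag the weaknesses discussed above: a missing or late initial
    workshop, absent partner organizations, and long gaps between meetings."""
    if not meetings:
        return ["no workshop held"]
    issues = []
    meetings = sorted(meetings, key=lambda m: m.held_on)
    if meetings[0].held_on > WORKSHOP_DEADLINE:
        issues.append("initial workshop held after the September 30, 2013, deadline")
    for m in meetings:
        missing = KEY_PARTNERS - m.attendees
        if missing:
            issues.append(f"{m.held_on}: not represented: {sorted(missing)}")
    # Assume "quarterly" means no more than roughly 120 days between meetings.
    for prev, nxt in zip(meetings, meetings[1:]):
        if (nxt.held_on - prev.held_on).days > 120:
            issues.append(f"more than a quarter elapsed before {nxt.held_on}")
    return issues

# Example mirroring a weakness found in our fieldwork: an initial workshop
# held on time but without the state trade office at the table.
print(check_team([TeamMeeting("District A", date(2013, 9, 15),
                              {"CS", "SBA", "SBDC"})]))
```

A tracking document that captured these same fields (meeting dates and the organizations represented) would allow such checks to be run across all 68 districts.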
SBA recently created a document to track the Export Outreach Teams' quarterly meetings and, as of March 2014, plans to use it to collect and review information for managing the program. Without monitoring, evaluating, and reporting on results, SBA does not know whether the teams are reaching out to and including the appropriate organizations, such as state trade offices; whether the teams are meeting regularly as intended; and whether the teams are exploring ways to foster better collaboration through referral protocols or other means. Therefore, SBA has lacked pertinent information needed to make decisions in managing the program, including how to adjust the program to strengthen local networks of export promotion service providers so that businesses receive the best possible assistance. As a result, until the TPCC and SBA collect and review information on the program, they cannot ascertain the extent to which Export Outreach Teams are achieving their objectives to increase awareness of local export resources and enhance collaboration between federal and state agencies. TPCC officials sought to enhance collaboration among regional, state, and local economic development entities and to increase their involvement in export promotion through a Commerce collaboration with the Brookings Institution's Global Cities Exchange (hereafter referred to as Global Cities). Initially launched as the Metropolitan Export Initiative, Global Cities was established to engage metropolitan area governments and other local entities in collaboration with federal and state providers to increase exports and economic growth and to broaden global economic engagement. The TPCC's 2011 and 2012 National Export Strategies describe this initiative as a tool for federal collaboration with state and local governments, and the TPCC has identified it as a mechanism to broaden export promotion efforts to include new partners and help reach the goal of the NEI. According to the 2012 strategy, collaboration with metropolitan areas is particularly important because metropolitan areas account for most of the nation's exports and are home to each region's unique concentration of capital, investment, and innovation. According to the Brookings Institution, Global Cities will increase U.S. exports by leveraging the knowledge and connections of local economic development leaders to proactively identify firms and sectors with the greatest export potential, coordinate fragmented export assistance providers, and leverage limited resources for maximum benefit. As shown in figure 6, federal, state, and metropolitan agencies are to combine their varied missions, functions, and perspectives to achieve the Global Cities objective of enhancing each metropolitan area's exports. Other participating organizations include universities, chambers of commerce, and regional economic development organizations. In 2011, Global Cities started in Los Angeles, California; Minneapolis-St. Paul, Minnesota; Portland, Oregon; and Syracuse, New York. Another 16 locations had started the process as of December 2013. Ultimately, according to the Brookings Institution, Global Cities is to be implemented in a total of 28 metropolitan areas. Los Angeles, Minneapolis-St. Paul, Portland, and Syracuse have completed metropolitan export plans, each of which includes performance measures.
The export component of Global Cities' performance measures varies by location and includes the value of exports as a percentage of regional gross domestic product, the increase in the number of new-to-export firms, the increase in the number of markets to which local companies export, and the number of new firms entering the export supply chain. A Brookings Institution representative noted that measuring performance across the large and diverse group of organizations involved in Global Cities' efforts is a challenge and that Global Cities is hoping to establish coordinated services and develop success stories with specific companies. Commerce has provided resources to support Global Cities in the form of staff time and expertise at the local level, as specified in the memorandum of agreement (MOA) that Commerce signed with the Brookings Institution in September 2011. Consistent with key practices for collaboration, the MOA specified how Commerce would participate in the initiative in terms of roles and responsibilities and what resources Commerce would devote in four pilot locations. Subsequently, Commerce made several contributions to the program under the MOA, including providing data on global demand for U.S. exports in each metropolitan area. Furthermore, Commerce staff in each of the four pilot cities provided expertise to help develop metropolitan export plans, which outline strategies for increasing exports in each location. In some Global Cities locations—such as Minneapolis-St. Paul, Minnesota; Portland, Oregon; and Tampa Bay, Florida—Commerce officials have participated in the planning, development, and implementation of Global Cities projects, including developing a document describing local export-related resources, providing export-related training to economic development professionals, and assisting with a survey of companies to understand the perceived challenges they face related to exporting their goods and services. The MOA expired in April 2012, and Commerce and the Brookings Institution do not plan to renew the agreement. However, Commerce will continue to support Global Cities through its USEACs at the local level as the program expands to a planned 28 metropolitan areas. A Brookings Institution representative stated that the MOA was only intended for the initial stage of the program. Similarly, Commerce officials told us they did not think it was necessary to renew the agreement with the Brookings Institution because they consider their role as limited to advising when requested and helping to inform Global Cities participants in new metropolitan areas about federal export promotion programs and services. Furthermore, Commerce officials told us that, because the Global Cities projects are implemented locally and facilitated by the Brookings Institution nationally, Commerce does not need a process to track and report on the results of the initiative in each city. Brookings officials told us that their initiative was meant to increase interest in exporting, not to create new export promotion service providers. Thus, to the degree the initiative is successful, it will put new demands for export promotion services on federal and state export promotion agencies. Of the 20 currently established Global Cities locations, 6 are served by a Commerce office with a single employee.
In locations we visited, Commerce officials told us that a lack of staff resources was already a concern, which would make it challenging to respond to an increase in demand from local businesses for export promotion services. As mentioned previously, metropolitan areas participating in Global Cities plan to track their progress through performance metrics specified in their metropolitan export plans. However, to date, Commerce has not requested these data. Therefore, the impact of the initiative is still uncertain. Commerce has no plans to monitor results, which is inconsistent with key collaboration practices. With no system in place to monitor the results of the Global Cities initiative, Commerce lacks information needed to participate effectively because it cannot anticipate any increased demand for its export promotion services that the program may generate, or know when to shift resources to certain metropolitan areas with a greater potential for enhancing U.S. exports. Moreover, without such information, Commerce would be unable to identify the most successful Global Cities efforts and to draw lessons from them that could be applied more widely to enhance federal, state, and local collaboration to achieve NEI goals. The third initiative to enhance federal-state collaboration identified by TPCC officials aimed to improve collaboration with national organizations that have shared interests in expanding U.S. exports. This initiative focused on strengthening Commerce's relationship with the State International Development Organizations (SIDO), the only national organization devoted exclusively to supporting international trade development through state trade offices. TPCC agencies also work with other national nonprofit organizations, including the National Governors Association and the U.S. Conference of Mayors. For example, the National Governors Association is developing a means to track and share information on trade missions involving state governors. However, promoting trade is not the primary purpose of these two entities. SIDO's mission is to provide a forum for collaboration and sharing best practices, to advocate for states' international trade and development issues, and to support the goals of the NEI through coordination of federal and state resources. A SIDO official told us that while many states have good collaborative relationships with federal officials at the local level, there was less collaboration at the national level in Washington, D.C. Both Commerce and SIDO were hopeful that a formal agreement would encourage dialogue and improve collaboration. In September 2011, Commerce and SIDO signed a memorandum of intent (MOI) that articulated their commitment to work together to develop strategies and implement activities in three areas:

They agreed to enhance Commerce's partnership with state trade offices to coordinate and cooperate in the delivery of critical customer-focused services necessary to assist U.S. companies to successfully export their products and services and enter new foreign markets.

They agreed to identify opportunities and mechanisms for Commerce and the states, working through SIDO, to increase awareness among companies of available services, trade programs, and overseas events.

They agreed to create a formal consultation process between Commerce and SIDO to facilitate communication and coordination, exchange ideas, and identify joint activities to be carried out pursuant to a future agreement.
The MOI represented a concerted effort on the part of Commerce and SIDO to improve collaboration consistent with two key collaboration practices. First, the MOI was a written agreement documenting how SIDO and Commerce believe collaboration should occur. Second, the MOI defined the three common outcomes listed above. Nonetheless, the MOI did not specify roles and responsibilities; provide a mechanism to monitor, evaluate, and report on results; address how to leverage state and federal resources; or identify activities to achieve the three outcomes. One Commerce official commented that there was no commitment embedded in the agreement, such as a list of specific actions and who would be responsible for carrying them out. Similarly, one SIDO member commented that, while the agreement had gotten the parties together, it was too general to help them move forward and accomplish much. In contrast, a draft memorandum of understanding (MOU) that had preceded the MOI did call for specific actions, such as having Commerce and SIDO jointly organize a program of multistate trade missions to key markets worldwide and developing procedures to enable federal and state trade offices to share credit for successful export promotion efforts. According to SIDO members and Commerce officials, legal concerns about vetting such a specific agreement in the short time frames given prompted them to sign the MOI instead. Ultimately, the MOI, which was to be renewable annually, expired in September 2012 with limited results, according to SIDO officials. SIDO and Commerce officials reported diverging views about progress on achieving the three common outcomes. Regarding the first activity—coordinating and cooperating in the delivery of critical customer-focused services—SIDO officials noted a continued lack of joint planning. For example, several SIDO members had expressed frustration about being surprised by Commerce's announcements of new nationwide initiatives, including those for Export Outreach Teams and the precursor of Global Cities, the Metropolitan Export Initiative. Regarding the second activity—increasing awareness among companies of available services—SIDO officials stated that Commerce has not yet instituted a formal mechanism to notify state trade offices about upcoming events such as trade shows in time for state offices to assist or participate. Commerce officials, on the other hand, told us that Commerce resolved this issue by sending SIDO a long list of upcoming TPCC activities twice a year, as well as by posting regularly on Commerce's export.gov website. Commerce officials pointed out that states also need to share with Commerce their plans regarding such events, which they said does not always occur, particularly since the states started obtaining additional funds through the STEP program. For example, the governors of Kentucky and Wyoming held a trade show for their mining equipment companies geared toward sales to China at the same time as Commerce's similar efforts. Regarding the third activity, formal consultation, Commerce and SIDO created a consultation process through which, at the onset, they met quarterly, according to Commerce officials. However, according to SIDO officials, the meetings neither generated any actions nor produced an agreement, as called for in the MOI. Moreover, a 2013 survey of SIDO members indicated that most believed that, despite the NEI, federal-state collaboration in promoting exports had not improved in recent years.
According to Commerce and SIDO officials, several factors contributed to the limited collaboration stemming from the MOI. These officials agreed that collaboration was a joint responsibility, and a SIDO official said that they had not met with Commerce since April 2013. A SIDO official also said that recent personnel changes on both sides made it more challenging to maintain regular consultations. Furthermore, SIDO officials said that they were now giving more priority to working with SBA and Congress to maintain funding for STEP. Nevertheless, closer adherence to key collaboration practices could have helped the MOI enhance collaboration. Agreements to collaborate that define common outcomes but do not identify activities to achieve them or specify roles and responsibilities lack a roadmap to motivate parties to move forward. In addition, when such agreements lack a mechanism to monitor, evaluate, and report on results, they make it more difficult for the partners in collaboration to hold one another, and themselves, accountable for fulfilling their commitments. SIDO officials called sharing credit in the tracking of export promotion successes their number one priority for enhancing federal-state collaboration, which, according to those officials, is why they sought to address this issue in the MOI. Sharing information is consistent with the key collaboration practice of leveraging resources, including information technology, to support a common outcome. However, Commerce pointed out that it is required by law to restrict the extent to which it can share information (e.g., clients' new export markets, export sales, customer satisfaction, services obtained), including information related to Commerce's export successes. State trade office officials in two of the five states we visited also mentioned sharing client information as important to federal-state collaboration. For instance, an official from one state stated that sharing client lists with Commerce, and vice versa, ensured that their customers obtained the best service to meet their needs. Officials from a third state cited three specific issues related to sharing client information: First, they said they had no way of knowing whether their referrals of clients to Commerce for Gold Key services resulted in export sales, leading to underreporting of the state trade office's results and limiting its ability to justify additional export promotion resources. Second, they said that it is more difficult to advise companies effectively without knowing what information the companies may have already obtained from Commerce. Third, the officials said that they and Commerce had to distribute separate evaluation forms after jointly sponsored training events, resulting in underreporting because participants generally did not want to fill out two forms. Commerce USEAC officials in three of the states we visited said that they were prohibited by law from sharing information on client services with state trade offices. A USEAC official in one of the three states explained that Commerce must keep a confidential relationship with clients so that clients will report to them their export successes, an important performance measure. A USEAC official in one state noted, however, that sharing export success information with his state trade office counterpart made sense because they used similar metrics. The official commented that the ability to share export successes with the states would encourage leveraging of resources through joint efforts.
In addition, this same USEAC official reiterated that underreporting occurs because the state trade office and Commerce must distribute separate evaluation forms to clients that attend their joint trade shows and trade missions and most clients will not take the time to complete both forms. According to an official from Commerce's Office of General Counsel, Commerce is legally barred by the federal Trade Secrets Act from sharing any information that would cause "substantial competitive harm" if released. The official commented that, as a general rule, companies are reluctant to release information to the government and that Commerce must obtain "affirmative permission" from each company in order to share nonpublic information with the states or other federal agencies. She stated that the company must give permission in advance, in writing, through some formal process that would have to be captured in a database, with strict controls. She also stated that sharing client information with the state trade offices is particularly challenging because, unlike information Commerce shares with other federal agencies, information that Commerce discloses to a state government is automatically considered releasable to the public under the Freedom of Information Act. However, she noted that Commerce officials can share any information already publicly available, or information that would clearly not cause substantial competitive harm. She stated that examples of such publicly available information would include a press release about a company attending a trade mission or information in a company's annual report. Finally, the official reiterated that Commerce can share any information with the states if a company gives formal written permission in advance. Commerce officials from ITA stated that they are now exploring ways for clients to voluntarily give permission for CS to release their information but warned of several challenges. They noted that CS currently lacks the technical capability to record a client's advance permission to share proprietary information, except in the case of fee-based services. In this regard, a Commerce official estimated that 15 percent of CS services are fee based. Finally, a Commerce official suggested that Commerce could aggregate the value of export successes by state. However, this would not indicate whether the state actually contributed to the export, thus preventing the state from documenting and sharing credit. Commerce has circulated guidance, in the form of a slide presentation that officials said was sent to CS offices, about the need to protect confidential commercial information consistent with limitations on disclosure in the Trade Secrets Act and the Privacy Act. According to an official from Commerce's Office of General Counsel, Commerce had provided no official general guidance to its employees on what client information may be shared with other federal agencies, the states, or both, and in what situations. Instead, this official stated that Commerce lawyers assess information-sharing requests on a case-by-case basis and offer informal guidance. She noted that currently Commerce has no way to obtain formal permission from companies with regard to sharing information.
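The "formal process ... captured in a database, with strict controls" that the Commerce official described does not yet exist, so the following is only a minimal sketch of how advance, written permission could be recorded and checked before any client information is released. All class, field, and company names are hypothetical, and a real system would additionally need audit logging and access controls.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass(frozen=True)
class ConsentRecord:
    """A client's advance, written permission to share specific categories
    of information (field names are hypothetical)."""
    company: str
    categories: frozenset            # e.g., {"export_success", "new_markets"}
    granted_on: date
    expires_on: Optional[date] = None
    authorization_ref: str = ""      # pointer to the signed written permission

class ConsentRegistry:
    """Answers one question: is a proposed disclosure covered by a client's
    advance permission? Absent a matching record, the default is to deny."""
    def __init__(self):
        self._records = []

    def record(self, consent: ConsentRecord):
        self._records.append(consent)

    def may_share(self, company: str, category: str, on: date) -> bool:
        for c in self._records:
            if c.company == company and category in c.categories:
                if c.expires_on is None or on <= c.expires_on:
                    return True
        return False

# Example: checking whether an export success may be shared with a state
# trade office. "Acme Exports" is a hypothetical client.
registry = ConsentRegistry()
registry.record(ConsentRecord(
    company="Acme Exports",
    categories=frozenset({"export_success"}),
    granted_on=date(2014, 1, 15),
    authorization_ref="AUTH-2014-001",
))
print(registry.may_share("Acme Exports", "export_success", date(2014, 3, 1)))  # True
print(registry.may_share("Acme Exports", "new_markets", date(2014, 3, 1)))     # False
```

The deny-by-default check mirrors the legal posture described above: nothing is shared unless formal written permission covering that category of information is already on file.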
However, without some formal, general guidance from Commerce on the types of information that can be shared consistent with legal restrictions, and without some effort by Commerce to obtain permission from its clients to share their information, federal and state export promotion providers have less opportunity to leverage resources, such as sharing clients and credit for export successes, and thus less incentive to collaborate. Federal and state governments share an interest in helping small businesses export in order to promote economic development and create good-paying jobs. Given the overlap of federal and state export promotion efforts and the current environment of constrained government resources, enhancing federal-state collaboration is important to help ensure that export promotion programs operate as efficiently and effectively as possible. It is also important for minimizing any potential barriers to small businesses seeking export promotion assistance. Nevertheless, collaboration is a two-way street, and the nature and extent of the federal and state relationship varies widely by state, depending on the unique factors in each state. The three recent TPCC initiatives to improve federal collaboration with states at the national and local levels recognize the varying nature of the collaborative relationship. Unfortunately, the federal initiatives have had limited success and only partially incorporated key collaboration practices. We found weaknesses in the implementation of Export Outreach Teams that limit their ability to achieve their objectives and enhance federal-state collaboration. Collaborative agreements with partners on the other two initiatives have expired with unknown or limited results. Nevertheless, these initiatives have demonstrated that fruitful opportunities exist to improve local networks of export promotion service providers, to expand activities to better include metropolitan area economic development agencies, and to work with national organizations representing state and local governments. Renewed effort by the TPCC agencies to implement these initiatives with greater attention to key collaboration practices can help improve the support available to small businesses hoping to export, as well as bolster federal efforts to achieve national export goals. To improve federal-state collaboration in providing export promotion services in accordance with the National Export Initiative and the Export Enhancement Act of 1992, we recommend that the Secretary of Commerce, as Chair of the TPCC, take the following three actions:

Improve implementation of the Export Outreach Teams to better achieve their intended outcomes. This could include taking steps, including better monitoring, to ensure that key local participants are invited, that meetings are held as expected, and that the Export Outreach Teams seek to both increase awareness of available export resources and enhance interagency and intergovernmental collaboration.

Take steps consistent with key practices for collaboration to enhance TPCC agencies' partnering on export promotion with nonfederal entities, such as SIDO and Global Cities. This could include reassessing and strengthening the TPCC's intergovernmental partnerships by clarifying expected outcomes, defining roles and responsibilities, monitoring results, and planning resource needs.

Take steps consistent with key practices to enhance, where possible, federal information sharing with state trade offices on Commerce's export promotion activities.
This could include more formal guidance to Commerce staff regarding the circumstances, in light of legal restrictions, in which information can be shared with state trade offices and other nonfederal entities, and exploring ways for clients to give permission to release information useful to such nonfederal entities. We provided a draft of this report to Commerce and SBA for review and comment. In its written comments on the draft, which are reprinted in appendix III, Commerce concurred with our overall assessment that collaboration can be enhanced through strategic management. Commerce stated in its letter that our analysis was helpful in pointing out areas where TPCC agencies can enhance their relationship with state and local government partners. Commerce also noted that while our analysis of the status of federal cooperation with state trade promotion entities raises important issues, to make broader program decisions, Commerce would need to obtain input from a wider variety of state trade promotion entities beyond the five state programs that were the focus of our fieldwork. Commerce stated its intention to obtain comprehensive data on the overall federal relationship with state trade promotion entities and that, once it obtained that additional data, it would work to identify and implement strategies to enhance TPCC agencies' collaboration with state entities on trade promotion. Commerce also described steps that are ongoing or planned to address each of our three recommendations. Consistent with our recommendation on Export Outreach Teams, Commerce noted that it will help SBA implement its reform plan to improve the Export Outreach Teams, including defining representative entities at the local level that should be included in the teams. Commerce also noted that it is working with the Brookings Institution on raising awareness of best practices under the Global Cities Exchange program and working with Brookings and participating cities to assess the program's impact. Commerce stated its intention to monitor the impacts of Global Cities through data collected by the Census Bureau on the number of exporters in metropolitan areas. Finally, Commerce noted that it has begun transmitting to SIDO information on federal activities, such as trade missions, earlier in the planning cycle; is working with SIDO to further analyze what makes for good federal-state cooperation; and plans to hold quarterly calls with state trade promotion entities on mutually agreed-on topics. Commerce also stated in its letter that our analysis of the Export Outreach Teams paralleled that of SBA in its review of the new program, and that SBA is taking a number of steps that align with our recommendation, such as defining representative entities at the local level that should be included in the Export Outreach Teams. In addition, on May 1, 2014, the SBA Deputy Associate Administrator for International Trade provided us with comments on the draft in an e-mail, stating that our analysis and conclusions regarding its implementation of Export Outreach Teams were largely accurate and that SBA will allow the implementation process to unfold over the next year, after which it will review and assess any need for improvements, in line with our recommendations. SBA also clarified that the Export Outreach Teams were a TPCC initiative co-led by Commerce and SBA, as reflected in the mandate in the original TPCC communiqué; we incorporated this change.
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 14 days from the report date. At that time, we will send copies to the Secretary of Commerce, the SBA Administrator, and other interested parties or congressional committees. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8612 or at gianopoulosk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. In this report, we examine (1) the main characteristics of federal and state export promotion efforts, including their collaboration, and (2) the extent to which the Trade Promotion Coordinating Committee (TPCC) has advanced collaboration between state and federal efforts to promote U.S. exports. While some of the 20 TPCC member agencies directly assist small businesses to export overseas, this review focuses solely on the Department of Commerce (Commerce) and the Small Business Administration (SBA). This is because these two agencies, more than the other TPCC agencies, generally provide export promotion services comparable to those that state trade offices provide. We conducted our work in Washington, D.C., and through fieldwork in five states: Florida, Minnesota, Oregon, Pennsylvania, and Virginia. We chose these states based upon the following criteria: presence of a District International Trade officer, SBA District Office, and Commerce U.S. Export Assistance Center in the metropolitan areas we planned to visit; initiation of Export Outreach Teams; state trade office presence overseas; participation in Global Cities Exchange; extent of collaboration with federal export promotion providers, according to Commerce officials in Washington, D.C.; and state trade office staff size (a mix of large and small). We chose these locations to better understand and test federal initiatives to advance collaboration with state trade offices at the local level; these five locations allowed us to assess federal implementation efforts overall but are not representative of the situations in each of the individual 50 states. To determine the main characteristics of federal and state export promotion efforts, we compared their sizes, services, types of clients, and performance measures using information collected from interviews and site visits to offices of Commerce's Commercial Service (CS) and state trade offices in five states (Florida, Minnesota, Oregon, Pennsylvania, and Virginia). State trade office Directors and their staff provided detailed information regarding their office locations, staffing numbers, services provided, clients served, annual budgets, and the metrics used to measure their programs for fiscal years 2012 and 2013. To assess the reliability of these data, we created a standard set of questions that was sent to each of the five states. We cross-checked these data with similar fields in the SIDO 2012 and 2013 survey data. We also collected information on SBA services, clients, and resources from documents and interviews with SBA officials. We analyzed data collected by the State International Development Organizations (SIDO) in its annual member surveys for 2012 and 2013.
To assess the reliability of the survey data from SIDO, we interviewed the SIDO representative responsible for developing and implementing the survey, performed a formal review of the survey questionnaire for methodological quality, and performed data testing. All variables from the survey in this report were determined to be sufficiently reliable for the purposes of this engagement. To calculate export successes and partnerships with state trade offices associated with export successes, we analyzed data from Commerce's Client Tracking System (CTS). CTS, an operational database used by field specialists and local office management, tracks fee-based and non-fee-based activities and is the principal database used for tracking "export successes"—the primary performance measure for CS. We conducted analysis of CTS data for fiscal year 2012 that identified export successes by state and types of partnerships. On the basis of interviews with knowledgeable agency officials and our assessment of the data for missing data, outliers, and obvious errors, we concluded that all data elements we assessed in the export successes data provided to us by Commerce were sufficiently reliable for the purpose of this report.
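The reliability assessment described above (screening extract records for missing data, outliers, and obvious errors before totaling export successes by state and partnership type) can be illustrated with a short sketch. This is not GAO's actual analysis code: the CTS extract format, column names, and outlier threshold are all hypothetical.

```python
import csv
from collections import defaultdict

# Hypothetical column names for a CTS extract; the real schema is not public.
REQUIRED_FIELDS = ["state", "partner_type", "export_success_value"]

def screen_records(rows):
    """Split records into usable and flagged sets using the basic checks
    described above: missing fields, obvious errors, and outliers."""
    usable, flagged = [], []
    for row in rows:
        if any(not row.get(f, "").strip() for f in REQUIRED_FIELDS):
            flagged.append((row, "missing data"))
            continue
        try:
            value = float(row["export_success_value"])
        except ValueError:
            flagged.append((row, "obvious error: non-numeric value"))
            continue
        # The $15 billion ceiling is an illustrative outlier threshold.
        if value < 0 or value > 15e9:
            flagged.append((row, "outlier"))
            continue
        usable.append({**row, "export_success_value": value})
    return usable, flagged

def totals_by_state_and_partner(records):
    """Total the dollar value of export successes by state and partner type."""
    totals = defaultdict(float)
    for r in records:
        totals[(r["state"], r["partner_type"])] += r["export_success_value"]
    return dict(totals)

if __name__ == "__main__":
    with open("cts_fy2012_extract.csv", newline="") as f:  # hypothetical file
        usable, flagged = screen_records(list(csv.DictReader(f)))
    print(f"{len(flagged)} records flagged for follow-up with agency officials")
    for (state, partner), total in sorted(totals_by_state_and_partner(usable).items()):
        print(f"{state} / {partner}: ${total:,.0f}")
```

Flagged records would then be resolved through the interviews with knowledgeable agency officials described above rather than being silently dropped.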
These documents include a TPCC interagency communiqué formalizing the Export Outreach Teams and the subsequent program guidance issued by SBA, data collected about the first Export Outreach Team meetings, formal agreements between Commerce and the Brookings Institution and between Commerce and SIDO, and documents from the Brookings Institution that describe the goals of the Global Cities Exchange. We also interviewed representatives from the Brookings Institution and SIDO to obtain their perspectives and discuss relevant ongoing changes to their programs and other efforts. In examining the Commerce-SIDO MOU, we reviewed the Trade Secrets Act and pertinent legal opinions regarding the act’s applicability, and we consulted with a representative of Commerce’s Office of General Counsel regarding that office’s views on how the act affects Commerce’s ability to share information with other federal, state, and local agencies. Finally, we met with a number of national organizations that represent the interests of local government entities to obtain their perspectives and ascertain their roles in promoting U.S. exports. The organizations we contacted include the National Governors Association, the United States Conference of Mayors, the National Association of Counties, and the National League of Cities. We also reviewed GAO’s guidance regarding good practices for coordinating and managing multiagency initiatives as described in other GAO reports, including those discussing implementation of the Government Performance and Results Act (GPRA) of 1993 and the GPRA Modernization Act of 2010.

In the states we visited, we interviewed staff at state trade offices and federal officials from Commerce and the SBA district offices. On the basis of recommendations from federal or state officials in those states, we met with other local entities, including trade associations, economic development organizations, Small Business Development Centers (SBDCs), representatives from District Export Councils, local chambers of commerce, and the mayors’ offices in Miami, Florida; Minneapolis, Minnesota; and Portland, Oregon, to obtain information about local approaches to interagency collaboration and about implementation of the three TPCC initiatives. We then discussed what we found on our site visits with officials from the TPCC, Commerce, and SBA at headquarters to obtain their input.

We conducted this performance audit from February 2013 to May 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Figure 7 identifies countries in which the Department of Commerce (Commerce) and state trade offices have representation overseas. Commerce representation consists of U.S. Commercial Service staff located in offices overseas. State trade office representation includes state employees located in offices in-country and contractors hired as service providers, who provide services to a country from offices located in-country, on a regional basis from offices not located in-country, or both.

Kimberly Gianopoulos, (202) 512-8612, or gianopoulosk@gao.gov.
In addition to the contact named above, Adam Cowles (Assistant Director), Qahira El’Amin, Nina Pfeiffer, and Cristina Ruggiero made key contributions to this report. Gezahegne Bekele, David Dayton, David Dornisch, Etana Finkler, and Ernie Jackson provided technical assistance.
The 2010 National Export Initiative calls for the federal government to coordinate more with state and local governments and other public and private partners on export promotion. Recently, the TPCC identified three key initiatives to enhance collaboration among federal, state, and other partners. Congress requested that GAO review federal and state collaboration in export promotion. This report examines (1) the main characteristics of federal and state export promotion efforts, including their collaboration, and (2) the extent to which the TPCC has advanced collaboration between state and federal efforts. GAO analyzed federal and state documents and data from 2012 and 2013; interviewed officials from federal, state, and other export promotion organizations; and visited federal and state trade offices and other relevant organizations in five states selected as a nongeneralizable sample based on their participation in the TPCC initiatives and other factors.

Federal and state governments share a common interest in promoting exports as a tool for economic growth and job creation. Both provide similar and overlapping export promotion services to similar clients, but their staffing, budgetary resources, and ways of measuring performance vary. Located across the country, Department of Commerce (Commerce), Small Business Administration (SBA), and state trade offices provide outreach, counseling and training, and trade leads, mostly to small businesses. In some states, state trade offices have more domestic staff than Commerce offices do. However, Commerce provides more overall coverage abroad, with offices in 72 countries, 15 of which have no state trade office representation. In the five states GAO visited, federal and state collaboration on export promotion varied from working closely in the same location to not collaborating at all, depending on factors unique to each state.

The federal interagency Trade Promotion Coordinating Committee (TPCC) has three initiatives designed to advance federal-state collaboration in promoting U.S. exports by strengthening and expanding networks of state and local governments and other partners. Results of these efforts have been limited, however, in part because their implementation has not consistently followed key collaboration practices. In prior work, GAO found that collaboration is generally enhanced by following key practices, such as articulating common outcomes; agreeing on roles and responsibilities; monitoring, evaluating, and reporting on results; and coordinating resource planning. In the states it visited, GAO found weaknesses in the implementation of Export Outreach Teams, a TPCC initiative. For example, in some cases, activities were missing key participants and were inconsistent with the activities’ objectives, in part because SBA is not fully monitoring implementation of the teams across its 68 district offices. Similarly, GAO found that TPCC’s involvement in a Brookings Institution initiative to engage metropolitan areas in export promotion has unknown implications for federal export promotion efforts and resources because Commerce lacks a means to monitor the initiative’s results. Finally, an agreement between Commerce (the TPCC chair) and a national group representing state trade offices expired without achieving its collaboration objective or enhancing the sharing of client information that would allow states to share credit with Commerce for helping companies make export sales.
According to Commerce, by law it cannot release its clients’ confidential commercial information, and its policy is to make determinations on releasing information case by case, but it does not provide formal guidance to staff on what information sharing is allowable. GAO recommends that the TPCC take steps consistent with key practices for collaboration to (1) improve implementation of the Export Outreach Teams to better achieve their intended objectives; (2) enhance TPCC agencies’ collaboration on export promotion with nonfederal entities; and (3) enhance federal information sharing with state trade offices, where possible, on Commerce’s export promotion activities, for example, by providing formal guidance to staff on allowable information sharing. Commerce and SBA agreed with GAO’s recommendations.
Military construction appropriations fund the planning, design, construction, alteration, and improvement of military facilities worldwide. The military construction appropriation request for fiscal year 2008 included approximately $21.3 billion for military construction and family housing, of which nearly $1.2 billion (5.6 percent) was designated for specific overseas locations, mostly comprising enduring installations, and not for new and emerging requirements outside existing basing structures. As of fiscal year 2006, DOD had 3,731 installations, with 766 located overseas.

In recent years, DOD has been undergoing a transformation to develop a defense strategy and force structure capable of meeting changing global threats. As part of its transformation, DOD has been reexamining overseas basing requirements to allow for greater U.S. military flexibility to combat conventional and asymmetric threats worldwide. In September 2001, DOD issued its Quadrennial Defense Review Report, which addressed, among other issues, reorienting the U.S. military global posture. The report called for developing a permanent basing system that provides greater flexibility for U.S. forces in critical areas of the world as well as providing temporary access to facilities in foreign countries that enable U.S. forces to train and operate in the absence of permanent ranges and bases. In August 2004, President Bush announced what has been described as the most comprehensive restructuring of U.S. military forces overseas since the end of the Korean War. The initiative is intended to close bases no longer needed to meet Cold War threats, as well as bring home many U.S. forces while stationing more flexible, deployable capabilities in strategic locations around the world. The integrated global presence and basing strategy is the culmination of various DOD studies, including the overseas basing and requirements study, the overseas presence study, and the U.S. global posture study.

As a part of DOD’s global realignment, in 2004 the United States and Japan began a series of sustained security consultations aimed at strengthening the U.S.-Japan security alliance to better address today’s rapidly changing global security environment. DOD’s Defense Policy Review Initiative established a framework for the future U.S. force structure in Japan, designed to reduce the burden on local Japanese communities and create a continuing presence for U.S. forces by relocating units to other areas, including Guam, while repositioning U.S. forces to respond better to regional crises. This initiative also includes a significant reduction and reorganization of the Marine Corps posture on Okinawa, Japan, to include relocating 8,000 marines and their estimated 9,000 dependents to Guam. More than 10,000 marines and their dependents will remain stationed in Okinawa after this relocation. The initiatives also include the relocation of Carrier Air Wing Five from Atsugi Naval Air Facility to Iwakuni Marine Corps Air Station, Japan; the replacement of the U.S. Marine Corps Futenma Air Station, Japan; transformation of Army headquarters at Camp Zama, Japan; deployment of a nuclear-powered aircraft carrier at Yokosuka Naval Base, Japan; deployment of a transportable ballistic missile defense radar system; relocation of training activities; land returns; and shared use of facilities.
Guam is the westernmost territory of the United States and is strategically located in the Pacific Ocean approximately 3,810 miles southwest of Honolulu, Hawaii; 1,600 miles east of Manila, the Philippines; and 1,560 miles southeast of Tokyo, Japan (see fig. 1). Given its strategic location, Guam is an integral part of DOD’s logistical support system and serves as an important forward operational hub for a mix of military mission requirements. According to DOD, Guam provides strategic flexibility, freedom of action, and prompt global action for the Global War on Terrorism, peace and wartime engagement, and crisis response. About 29 percent of the island’s land is controlled by DOD (see fig. 2), 52 percent is privately owned, and 19 percent is under the supervision of the Government of Guam.

In 2003, the Senate Appropriations Committee expressed concern that the overseas basing structure had not been updated to reflect the new realities of the post-Cold War world. The committee has also expressed concern about the use of military construction budget authority for projects at bases that may soon be obsolete because of changes being considered in overseas presence and basing. Consequently, in Senate Report 108-82, the Senate Appropriations Committee directed DOD to prepare detailed, comprehensive master plans for the changing infrastructure requirements for U.S. military facilities in each of its overseas regional commands. According to the Senate report, at a minimum, the plans are to identify precise facility requirements and the status of properties being returned to host nations. In addition, the report stated that the plans should identify funding requirements and the division of funding responsibilities between the United States and cognizant host nations. The Senate report also directed DOD to provide congressional defense committees a report on the status and implementation of those plans with each yearly military construction budget submission through fiscal year 2008. Subsequently, the House conference report accompanying the fiscal year 2004 military construction appropriation bill also directed the department to prepare comprehensive master plans with yearly updates through fiscal year 2009. The first report was due with the fiscal year 2005 military construction budget submission and is to be updated each succeeding year to reflect changes to the plans involving specific construction projects being added, canceled, or modified, or funding for those projects being redirected to other needs, along with the justification for such changes. The Senate report also directed GAO to monitor the comprehensive master plans being developed and implemented for the overseas regional commands and to provide the congressional defense committees with a report each year giving an assessment of the plans.

As initiatives for expanding U.S. military presence on Guam began to emerge, the Senate Appropriations Committee noted the ambitiousness of the military construction program and the need for a well-developed master plan to efficiently use the available land and infrastructure. In July 2006, the committee recommended deferral of two military construction projects at Andersen Air Force Base that were included in the President’s budget request until such time as they could be incorporated into a master plan for Guam and viewed in that context.
To that end, the committee directed the Secretary of Defense to submit to the appropriations committees a master plan for Guam by December 29, 2006, and a report accounting for the United States’ share of this construction program to project-level detail and the year in which each project is expected to be funded. The Senate report also directed GAO to review DOD’s master planning effort for Guam as part of its annual review of DOD’s overseas master plans.

Within DOD, the Under Secretary of Defense for Acquisition, Technology, and Logistics was tasked to prepare the detailed, comprehensive master plans. In turn, the Under Secretary assigned the overseas regional combatant commands responsibility for preparing comprehensive master plans for their areas of responsibility. As shown in figure 3, PACOM coordinates East Asia and South Asia; EUCOM coordinates much of sub-Saharan Africa and Europe, as well as the Indian Ocean islands off the coast of southeast Africa; and CENTCOM coordinates efforts in the Middle East, the Horn of Africa, and Central Asia. Not shown are Northern Command, which coordinates activities in North America, and Southern Command, which coordinates activities in South America, Central America, and the Caribbean. We did not include Northern and Southern Commands in our review because they have significantly fewer facilities outside of the United States than the other regional commands in the Pacific, Europe, and Central Asia. There are also four functional unified combatant commands that are assigned worldwide functional responsibilities not bounded by geography: Special Operations Command, Strategic Command, Joint Forces Command, and Transportation Command.

Initial implementation details for the movement of U.S. Marines to Guam and associated military construction projects took place under the leadership of PACOM. In August 2006, OSD directed the Navy to establish the Joint Guam Program Office (JGPO) to facilitate, manage, and execute requirements associated with the rebasing of Marine Corps assets from Okinawa to Guam, including master planning efforts. The office’s responsibilities include integration of operational support requirements; development, program, and budget synchronization; oversight of the construction; and coordination of government and business activities. Specifically, JGPO was tasked to lead the coordinated planning efforts among the DOD components and other stakeholders to consolidate, optimize, and integrate the existing DOD infrastructure capabilities on Guam. The office is expected to work closely with the Government of Japan and the local Guam government, other federal agencies, and Congress in order to manage this comprehensive effort and to develop a master plan. At the time of our review, JGPO and the Department of the Interior had formed a federal interagency task force to coordinate efforts to address issues relating to commerce, transportation, the environment, and other areas. JGPO falls under the direct oversight of the Assistant Secretary of the Navy for Installations and Environment.

In our prior work, we found that while DOD’s master plans generally exceeded the reporting requirements established by Congress, opportunities existed for the plans to provide more complete, clear, and consistent information and to present a more definitive picture of future requirements.
In 2006, we reported that the master plans did not always explain how their implementation could be affected by other relevant and related defense plans and activities because there was no requirement for them to do so. However, without such explanations and linkage, it was difficult to determine the extent to which the master plans were coordinated and synchronized with other defense plans and activities and the effects these other activities would have on the master plans in terms of infrastructure and funding requirements. We also reported that while the plans addressed a number of challenges that DOD faced in implementation—such as uncertainties with host nation relations and environmental concerns—PACOM’s plan did not address training limitations in South Korea and Japan. We explained that some of these challenges could have a significant effect on infrastructure and funding requirements and that, because the plans did not always describe such challenges and their potential effects, Congress lacked the complete picture it needed to evaluate the annual military construction funding request.

In 2005, we reported that without more complete, clear, and consistent reporting of various items—host nation agreements and funding levels, including special bilateral agreements; U.S. funding levels and sources in addition to military construction funds; environmental remediation and restoration issues; population levels; and facility requirements and funding levels for Hawaii, Guam, U.S. territories, and other insular areas in the Pacific—across the master plans, Congress and other users did not have the best data available to facilitate their annual review and oversight. Also, we reported that without detailed information on individual construction projects and the anticipated strategic end state of the commands’ overseas basing infrastructure, Congress did not have the best available and consistent data on which to track progress and changes from year to year and between commands. In 2004, before DOD issued its initial overseas master plans, we reported that various factors, such as residual property value, environmental remediation, and the availability of multiple U.S. funding sources, could affect the funding of U.S. infrastructure overseas as well as the implementation of the plans once they were issued. At that time, we recommended that the overseas regional commands address these and other factors in the development of their plans.

The fiscal year 2008 master plans, which provide infrastructure requirements at U.S. military facilities in each of the overseas regional commands’ areas of responsibility, reflect changes—including recent decisions in U.S. overseas defense basing strategies and requirements—and they generally address the challenges that DOD faces in implementing the plans as well as our prior recommendations for improving the plans. The plans generally incorporate key changes associated with the continuing evolution of U.S. overseas basing strategies and provide a more comprehensive description of the challenges DOD faces in implementing the plans than in previous years. But while this year’s plans provide information responding to most of our prior recommendations, they do not address residual value—that is, the value of property being turned over to the host nation based on its reuse of the property.
Furthermore, PACOM’s master plan does not describe the challenges the Air Force faces in training in South Korea, although it does describe for the first time the challenges associated with training limitations in Japan.

This year’s master plans incorporated key changes—including some very recent changes—associated with the continuing evolution of U.S. overseas basing strategies and requirements. In the 2008 master plans, OSD recognized that further changes will result as it continues to implement the global defense posture decisions. The department reported that as the overseas political and military environment and strategic landscape further evolve, global defense posture plans will continue to mature to address new priorities. Specifically, several changes identified in the overseas master plans included updated information involving realignment initiatives in South Korea and Japan, DOD’s efforts to establish missile defense in Eastern Europe, and the creation of U.S. Africa Command.

PACOM’s master plan discussed the progress of dynamic realignment initiatives, which will relocate military personnel and facilities in South Korea and Japan. For example, last year PACOM reported that the U.S. and Japanese governments had established an interim agreement in October 2005 involving the realignment of U.S. forces in Japan. This year, PACOM updated this information by indicating that final implementation documents were approved in May 2006. In addition, PACOM described the importance of relocating 8,000 marines and their dependents from Okinawa to Guam, returning additional land to Japan, and retaining a forward Marine Corps command and control capability to ensure a balanced, flexible contingency response capacity within the Asia-Pacific region. With respect to South Korea, PACOM provided information updating the status of the Land Partnership Plan and the Yongsan Relocation Plan, including a list of U.S. military camps and sites returned to the Government of South Korea, and describing the results of the October 2006 meeting between the Secretary of Defense and South Korea’s Minister of Defense.

As a part of DOD’s efforts to establish a U.S. presence in Eastern Europe through a network of forward operating sites and cooperative security locations, EUCOM’s master plan stated that the United States signed individual agreements with the governments of Romania and of Bulgaria in 2005 and 2006, respectively, which will allow DOD access to facilities and training sites. EUCOM also provided additional details, such as the mission, planned capabilities, equipment and aircraft, and population. Furthermore, EUCOM provided a status of ongoing transformation realignments in its area of responsibility, including listing the return of facilities to host nations, changes to its basing categories, and the rationale for these realignments. The master plans also described recent efforts to proceed with formal negotiations with the governments of Poland and the Czech Republic on establishing missile defense sites.

This year, DOD forecasted changes for next year’s master plans involving the development of a new command responsible for Africa, which is expected to be established by September 30, 2008. The President announced in February 2007 that the U.S. military will establish a new, separate U.S. Africa Command to enhance security cooperation, extend humanitarian assistance, and build partnership capacity on the African continent. At the time of our review, U.S.
involvement in Africa was shared among three combatant commands. PACOM was responsible for Madagascar, the Seychelles, and the Indian Ocean area off the African coast. EUCOM was responsible for the largest swath of the continent: North Africa; West Africa, including the Gulf of Guinea; and central and southern Africa. CENTCOM covered the Horn of Africa, including Somalia, Ethiopia, Eritrea, Kenya, Djibouti, and Sudan. There are 13 cooperative security locations throughout Africa that historically have been identified in the EUCOM master plan. The new U.S. Africa Command eventually will encompass the entire continent of Africa except for Egypt, which will continue to fall under CENTCOM’s area of responsibility. Discussions were ongoing on the possible headquarters location and the kinds of military forces that would be assigned to the command.

This year, the changes identified in the plans provided useful information on evolving costs and facility requirements in overseas basing. In addition, the commands continue to focus first on the mission and then on the infrastructure requirements needed to support the mission. For example, in CENTCOM’s master plan, the description of each forward operating site focuses first on the mission and then on requirements, identifying the type of mission the site has (such as providing logistical support), the unit that it could host, and its role in the region (such as supporting the war against terrorism or strengthening capabilities for rapid and flexible response in the central Asian states), as well as the equipment and facilities required to support the mission at the site. All of the commands provide similar information for their main operating bases, forward operating sites, and cooperative security locations. Even with the department’s effort to update the plans as changes occurred and decisions were made, the evolution of U.S. overseas defense basing strategies and requirements continues. Accordingly, OSD and the regional commands will be faced with more changes in the future, and the changes occurring after this year’s plans were submitted to Congress will have to be reflected in next year’s plans.

The fiscal year 2008 master plans discussed a number of challenges that DOD faces in implementing the plans, such as uncertainties with host nation relations and environmental concerns. In our prior reports, we explained how these challenges could affect infrastructure and funding requirements and that, because the prior plans did not always describe such challenges and their potential effects, Congress lacked the complete picture it needed to evaluate the annual military construction funding requests. This year, the plans provided a much more comprehensive description of challenges and their potential effects on implementation.

All of the regional commands describe to varying degrees the status of recent negotiations and agreements with host nations in their fiscal year 2008 master plans. In our review of the overseas master plans in 2005, we found that none of the commands fully explained the status of or challenges to finalizing host nation agreements, and we recommended that the commands briefly explain the status of negotiations with host nations to provide more complete and clearer plans. These agreements depend largely on the political environment and economic conditions in host nations and can affect the extent of host nation support—access to facilities or funding—to U.S. forces.
Accordingly, the resulting agreements may increase or decrease U.S.-funded costs for future infrastructure changes. For example, this year:

PACOM’s master plan updated information on the results of the Defense Policy Review Initiative, including the importance of certain initiatives, such as the replacement of the Marine Corps Air Station Futenma in hopes that it will lessen the effect of military aviation operations on the local citizens of Japan. In addition, U.S. Forces Japan identified decreasing funds for the Japanese facilities improvement program, historically the source of major construction on U.S. bases in Japan. U.S. Forces Japan anticipates that the Government of Japan will continue to decrease these funds on the basis that, in addition to this program and other forms of host nation support (i.e., utilities and the Japanese labor force), the Government of Japan is also responsible for providing funding for the Defense Policy Review Initiative. Potential Government of Japan financial constraints may result in U.S. facilities in Japan requiring more financial support from the U.S. government than in the past. In addition, U.S. Forces Korea (USFK) provided details on current realignment efforts, including the Government of South Korea’s approval of the Land Partnership Plan and Yongsan Relocation Plan and efforts to coordinate the transfer of U.S.-vacated bases. The plan also discussed USFK’s efforts to work with South Korea to complete the transition of wartime operational control from the United States to South Korea in the future.

EUCOM’s master plan identified specific information on efforts to close or return installations, such as Naval Air Station Keflavik, Iceland; Naval Support Activity La Maddalena, Italy; selected sites in Germany, Belgium, and Turkey; and several classified locations in the region. The plan also recognized that current U.S. basing may not adequately support either strategic changes in an expanding North Atlantic Treaty Organization Alliance or the requirements of a rapidly changing area of responsibility while seeking to preserve assets with enduring value to its missions, goals, and national interests. EUCOM also explained that its transformation execution depends on host nation negotiations, political-military considerations, base realignment and closure, and fiscal limitations.

CENTCOM’s master plan discussed efforts to solicit host nation contributions and the amount of coordination and support that is needed from DOD, the State Department, and Congress. The plan discussed the challenge of ongoing operations in Iraq and Afghanistan and CENTCOM’s intention to sustain long-term access to locations across its area of responsibility. The plan also reflected land return actions in Kuwait and Uzbekistan and changes to base category designations, such as consolidation of a cooperative security location into a forward operating site, both of which support surge capability for ground force support.

All of the commands addressed the extent of their environmental challenges in this year’s overseas master plans. In contrast, during our review of the overseas master plans in 2005, none of the commands identified environmental remediation and restoration issues. This year, PACOM provided information on remediation actions taken by USFK before returning installations to South Korea, such as skimming fuel from groundwater at five camps. Last year, USFK also discussed its efforts to coordinate with the Government of South Korea on remediation of vacated U.S.
bases; officials expect these efforts will accelerate the return of vacated facilities and areas to the Government of South Korea and the relocation of U.S. forces from Seoul and other locations. This year, EUCOM identified areas for cleanup, groundwater investigation, and monitoring and discussed contamination at one site that did not present an unacceptable risk to human health or the mission. Last year, CENTCOM did not report any environmental issues. Though a senior CENTCOM official said that there were no environmental issues last year in the command’s area of responsibility, this year CENTCOM’s master plan identified funding requirements for a wastewater treatment plant and a water treatment and distribution system at Bagram Airfield, Afghanistan, in hopes of avoiding potential environmental problems. The extent to which the commands provide information on environmental activities gives the users of the plans the ability to compare and comprehend how costs have varied and how these costs may affect planned U.S. funding levels.

The fiscal year 2008 overseas master plans have been updated to reflect our prior recommendations for improving the plans, though they do not address the issue of residual value as we recommended in 2004. To improve the overseas master plans and address our recommendations from last year, OSD provided additional guidance on October 12, 2006, to the regional commands in preparing this year’s plans. As a result, the fiscal year 2008 master plans identify how implementation of the plans could be affected by other relevant and related defense plans and activities. For example, PACOM’s force structure plans are linked to the military buildup on Guam, and CENTCOM’s increased troop strength and facilities in Iraq and Afghanistan are linked to ongoing operations. In addition, the commands generally provided more detailed information on a variety of key areas, such as precise facility requirements and costs, time frames for an end state, base categories, host nation funding levels, and effects of other defense activities. For example:

Facility requirements and costs. This year, all of the regional commands identified their precise facility requirements and costs for fiscal year 2008 and for fiscal years 2009 through 2013, and reported estimated facility sustainment costs for fiscal year 2008. In addition, CENTCOM provided information on supplemental appropriations for facilities and projects at Bagram Airfield, Afghanistan.

Base categories. This year, all of the commands categorized their installations into the applicable base categories of main operating base, forward operating site, and cooperative security location, which provided users a clearer picture of the infrastructure plans and requirements at these sites. The commands also supplemented the information on base categories with detailed data on the installations’ capabilities, overall mission, population, and types of equipment and facilities located at each site. For example, CENTCOM and EUCOM also identified adjustments to the base categories, such as redesignating a main operating base as a forward operating site or consolidating two cooperative security locations into one. EUCOM also provided specific details on sites no longer considered cooperative security locations in Bulgaria, Romania, and Poland, such as sites with no operational importance and a commercial facility readily available for military use that did not require U.S. investment or presence.

End state date.
This year, all of the commands identified a common strategic end state date of 2013, which marks the last fiscal year of the construction time frame. The strategic end state date of 2013 provides users a more complete and clearer basis for tracking progress in meeting the command infrastructure objectives for their areas of responsibility. Previously, OSD had given the commands discretion to choose an end date from 2011 to 2015.

Host nation funding levels. This year, all of the commands reported host nation funding levels at the project level for fiscal year 2008 and at the aggregate level for fiscal years 2009 through 2013, which provided users a better basis to determine the extent to which U.S. funding is needed for facility requirements. Also, PACOM identified host nation funding for its bilateral agreements in South Korea, such as the Land Partnership Plan and the Yongsan Relocation Plan. On the other hand, PACOM did not identify specific host nation funding from the Defense Policy Review Initiative—while the Government of Japan’s share for the Guam relocation is $6.1 billion, the Government of Japan has not made an official, public estimate of the costs for several major realignments within Japan. In relation to this initiative, however, the command did identify the need for U.S. military construction funds to support realignment costs not paid by the Japanese government. EUCOM provided information on North Atlantic Treaty Organization contributions and discussed a burden-sharing arrangement with the Government of Norway. CENTCOM also provided host nation estimates and explained that its efforts to attain host nation funding were ongoing.

Effects of other defense activities. This year, all of the commands described the effects of other defense activities on implementation of their master plans. Last year, only PACOM’s plan gave some indication of how its implementation could be affected by another activity—the potential decrease in traditional Japanese construction funding that would help Japan offset its Defense Policy Review Initiative costs, such as those associated with the relocation of U.S. Marines to Guam. This year, PACOM discussed this topic as well as the progress of bilateral negotiations with Japan and the challenges associated with this realignment. Last year, EUCOM’s master plan did not explain the potential effect of implementing base realignment and closure recommendations on the movement of troops from Germany to bases in the United States, commonly called overseas rebasing. EUCOM and Army officials told us that any delay in the implementation of base realignment and closure recommendations would cause them to delay the movement of Army servicemembers and their families if facilities were not available at receiving installations in the United States. This would delay the closings of Army installations in Europe and increase the costs of operating those installations while they remain open. This year, the overseas master plan identified that the base realignment and closure recommendations supported overseas restructuring and that EUCOM’s transformation depends on this effort. Last year, CENTCOM’s master plan made only general references to operations in Iraq and did not fully explain the potential effects of such operations on other installations and facility requirements outside of Iraq in its area of responsibility.
This year, CENTCOM officials emphasized that the infrastructure requirements in their master plan directly supported and responded to ongoing operations in Iraq and Afghanistan, in terms of increased troop strength and the associated facility requirements in theater. In addition, CENTCOM’s plan identified how future basing and infrastructure will be defined by ongoing contingencies and global defense posture.

While the overseas master plans have continued to evolve and have provided more comprehensive data every year since 2004, two key topics continue to be omitted from the plans. First, the master plans do not address the issue of residual value—the value of property being turned over to the host nation based on its reuse of the property. As we reported last year, residual value was excluded from OSD’s guidance because it is based on the reuse of property being turned over to the host nation, which is limited for most categories of military facilities and is often reduced by actual or anticipated environmental remediation costs. Consequently, as we have noted in the past, DOD officials believe that residual value cannot be readily predicted and therefore should not be assumed in the master plans. However, since these issues vary by host nation and may not be clear to all users of the plans, we continue to believe that OSD should require commands, at a minimum, to explain the issues with obtaining residual value in each host nation and report the implications for U.S. funding requirements. Also, the U.S. government has received approximately $592 million since 1989 in residual value and payment-in-kind compensation from property returns in EUCOM’s area of responsibility, and EUCOM continues to aggressively seek compensation for U.S. capital improvements at installations returned to host nations. As EUCOM continues to return facilities in Germany, Italy, and Iceland, this figure may increase. Accordingly, we continue to believe that residual value should be addressed in the master plans.

Second, while PACOM’s master plan provided details on other challenges, it did not describe the challenges the command faces in addressing training limitations for the Seventh Air Force in South Korea, although senior officials told us that these limitations could cause the United States to pursue alternatives, such as training in other locations, downsizing, or relocating, which could affect overseas basing plans. Specifically, we found that the PACOM master plan did not point out that the Seventh Air Force in South Korea may be unable to maintain combat capability in the long term because of a lack of adequate air-to-surface ranges, according to senior Air Force and USFK officials. For decades, the Government of South Korea has attempted to relocate the Koon-Ni range, which had served as the primary air-to-ground range for the Seventh Air Force. The air and ground range management of the Koon-Ni training range was transferred to the Government of South Korea, which closed the range in August 2005. While there is an agreement with the Government of South Korea to train at other ranges, according to senior Air Force and USFK officials, the other ranges do not provide the electronic scoring capabilities necessary to meet the Air Force’s air-to-surface training requirements, and there is difficulty in scheduling these ranges. As a result, the Air Force has been using ranges in Japan and Alaska to meet its training requirements, which results in additional transportation costs to the U.S. government.
In May 2007, officials said that some progress had been made in addressing the Air Force’s training challenges in South Korea and that they expected the needed upgrades to be completed by mid-2007. In contrast, the PACOM plan described the training limitations involving bombing and live fire training ranges and the effects of airspace access restrictions in Japan on C-130 training. In addition, the plan discusses how noise and land use sensitivities and maneuver area limitations in Okinawa require U.S. forces to deploy to other Pacific Rim locations to supplement their training, which results in additional transportation requirements and costs. The plan also discussed efforts by U.S. Forces Japan and the Government of Japan to engage in bilateral discussions to address training shortfalls and explore solutions.

In our prior reports, we recommended that the overseas regional commands address residual value issues and that PACOM explain how it plans to address existing training limitations. We believe that identifying these issues would make Congress aware of potential challenges to obtaining residual value and to training U.S. forces in South Korea, which may affect facility requirements and funding in that country. Even though our prior recommendations have not been fully addressed, we continue to believe that they have merit and that Congress would benefit from disclosure of this information.

DOD’s planning effort for the buildup of military forces and infrastructure on Guam is in its initial stages, with many key decisions and challenges yet to be addressed. While the Guam Integrated Military Development Plan provides information on the projected military population, units, and infrastructure that may be needed for the Guam realignments, it lacks specific information and is not intended to be a master plan. Additional time is needed for DOD to address several challenges before JGPO can develop a Guam master plan. First, the required environmental impact statement—which will take up to 3 years to complete, according to DOD documents and officials—was initiated on March 7, 2007. According to DOD officials, the results of this environmental impact statement will influence many of the key decisions on the exact location, size, and makeup of the military infrastructure development on Guam. Second, the exact size and makeup of the forces to be moved to Guam have not yet been identified. Third, DOD officials said that additional time is needed to fully address the challenges related to funding uncertainties, operational requirements, and Guam’s unique economic and infrastructure requirements. At the same time, DOD has not established a comprehensive and routine process to keep Congress informed of its progress in dealing with these issues and the overall status of implementing the military buildup on Guam.

While the Guam Integrated Military Development Plan provides the best available information on the projected military population, units, and infrastructure that may be needed for future Guam realignments, DOD officials told us that their planning effort was still in its initial phases, with many key decisions and challenges yet to be addressed. In July 2006, PACOM issued its Guam development plan—a notional document describing the future development of the military services on Guam over the next decade and beyond.
The plan is based upon a notional force structure that was used to generate land and facility requirements for basing, operations, logistics, training, and quality of life involving the Marine Corps, Army, Air Force, Navy, and Special Operations Forces in Guam. DOD officials told us that the plan was not a master plan because it did not include specific information on facility requirements, associated costs, and a timeline for specific actions, and it was not intended to meet the requirement to provide a master plan to both congressional appropriations committees by December 2006. In addition, the development plan does not direct individual service programming actions or provide for specific funding requirements. According to DOD documents and officials, additional detailed service and joint planning will be required to identify specific facility, infrastructure, and funding requirements and to address the challenges associated with the military buildup.

Among the challenges to be addressed before JGPO can develop a Guam master plan is completion of the required environmental impact statement. According to DOD officials, the results of the environmental impact statement—which could take up to 3 years to complete—will affect many of the key decisions on the exact location, size, and makeup of the military infrastructure development. On March 7, 2007, the Navy issued a public notice of intent to prepare an environmental impact statement pursuant to the requirements of the National Environmental Policy Act of 1969 (NEPA), as implemented by the Council on Environmental Quality regulations, and Executive Order 12114. The notice of intent in the Federal Register states that the environmental impact statement will:

Examine the potential environmental effects associated with relocating Marine Corps command, air, ground, and logistics units (which comprise approximately 8,000 marines and their estimated 9,000 dependents) from Okinawa to Guam. The environmental impact statement will examine potential effects from activities associated with the Marine Corps units’ relocation, including operations, training, and infrastructure changes.

Examine the Navy’s plan to enhance the infrastructure, logistic capabilities, and pier/waterfront facilities to support transient nuclear aircraft carrier berthing at Naval Base Guam. The environmental impact statement will examine potential effects of the waterfront improvements associated with the proposed transient berthing.

Evaluate placing a ballistic missile defense task force (approximately 630 servicemembers and 950 family members) in Guam. The environmental impact statement will examine potential effects from activities associated with the task force, including operations, training, and infrastructure changes.

DOD officials recognize that the results of this environmental assessment process may affect the development and timing of DOD’s plan for Guam. Under NEPA and the regulations for implementing NEPA established by the Council on Environmental Quality, an environmental impact statement must include a purpose and need statement, a description of all reasonable project alternatives and their associated environmental impacts (including a “no action” alternative), a description of the environment of the area to be affected or created by the alternatives being considered, and an analysis of the environmental impacts of the proposed action and each alternative. Further, accurate scientific analysis, expert agency comments, and public scrutiny are essential to implementing NEPA.
For example, federal agencies such as DOD are required to ensure the professional integrity, including scientific integrity, of the discussions and analyses in the environmental impact statement. Additionally, after preparing a draft environmental impact statement, federal agencies such as DOD are required to obtain the comments of any federal agency that has jurisdiction by law or certain special expertise and to request the comments of appropriate state and local agencies, Native American tribes, and any agency that has requested that it receive such statements. Until an agency issues a final environmental impact statement and record of decision, it generally may not take any action concerning the proposal that would either have an adverse environmental impact or limit the choice of reasonable alternatives. DOD officials stated that performing these alternative site analyses and cumulative effects analyses will delay the Guam master plan’s completion. Based on the expected completion of the environmental impact statement, according to JGPO officials, the master plan may not be completed until fiscal year 2009.

The exact size and makeup of the forces to move to Guam and the housing, operational, quality of life, and services support infrastructure required are not yet fully known and are expected to be identified and assessed during the parallel environmental analysis and the individual service and joint planning processes. While DOD identified some Marine Corps units for relocation as a part of realignment initiatives, assessments are still under way to determine the optimal mix of units in Guam and in Okinawa. The following Marine Corps units have been identified for relocation to Guam: Third Marine Expeditionary Force Command Element, Third Marine Division Headquarters, Third Marine Logistics Group Headquarters, 1st Marine Air Wing Headquarters, and 12th Marine Regiment Headquarters. The Marine Corps forces remaining on Okinawa will consist of Marine Air-Ground Task Force elements, such as command, ground, aviation, and combat service support, as well as a base support capability. Approximately 10,000 marines plus their dependents are expected to remain on the island of Okinawa following the realignment of forces to Guam. While these broad estimates provide a baseline, according to officials we visited, the Marine Corps is still determining the specific mix of units and capabilities needed to meet mission requirements on both Guam and Okinawa. The mix of units is significant because, according to Marine Corps officials, the functional and base support requirements will be based on the type, size, and number of units that relocate to Guam. This determination will define the training and facility requirements, such as barracks, family housing, schools, and other infrastructure. In response to the Marine Corps’ ongoing assessment, a JGPO official said that the office was initiating a master plan that provides for flexible infrastructure able to accommodate whatever types of military units may relocate to Guam. However, in the absence of information on the number and mix of forces, it will be difficult to provide an accurate assessment of the specific facility requirements needed to support the Guam realignment actions.

DOD is also still determining the requirements for berthing a transient aircraft carrier and the exact size and mix of the Army missile defense task force, along with the associated infrastructure requirements.
In the future, the Navy plans to periodically berth an aircraft carrier in Guam, and the support facilities needed for this ship are still being determined. According to Navy officials, a new carrier pier with additional capabilities will need to be constructed in order to accommodate this plan. Additionally, most of the aircraft from the aircraft carrier will require temporary beddown at Andersen Air Force Base, which may create additional facility requirements. The Army is also planning to base a ballistic missile defense task force in Guam, though the size and mix of this task force, as well as its infrastructure requirements, are still being determined.

DOD faces several significant challenges associated with its master planning effort for Guam, including funding requirements, operational challenges, and community impacts that could adversely affect the development and implementation of the master plan. Funding requirements for the military buildup on Guam are not yet fully identified and may be difficult to meet given other priorities and existing funding constraints, according to DOD officials. DOD agencies, such as the Defense Logistics Agency and the DOD Education Activity, that will help support the services’ influx of personnel, missions, and equipment to Guam will likely incur additional costs that are not yet included in the current DOD $13 billion cost estimate for the military buildup on Guam. According to DOD officials, this cost estimate includes the costs to move Marine Corps forces from Okinawa to Guam, to construct a Navy pier for a transient aircraft carrier, and to station an Army ballistic missile defense task force. However, it does not include the costs of other defense agencies to support the additional military personnel and dependents on Guam. According to JGPO, these costs will eventually be identified once further information is available on the master plan.

Within the current DOD $13 billion cost estimate, the Marine Corps move from Okinawa to Guam is estimated to cost about $10.3 billion. Of this amount, the Government of Japan has agreed to contribute about $6.1 billion to develop facilities and infrastructure on Guam. Nearly half of Japan’s contribution, or $2.8 billion, is expected to be direct contributions, while the remaining $3.3 billion will consist of investment incentives for family housing and on-base infrastructure, such as utilities, which over time could be recouped by Japan in the form of rent or service charges. For example, the Government of Japan will finance construction of family housing units in Guam, but these construction costs will be reimbursed by payments from the servicemembers’ housing allowances using U.S. funds. Furthermore, the Government of Japan’s funds will not be made available until it has agreed to specific infrastructure plans for Guam. In addition, DOD officials recognize that the failure or delay of one plan outlined in the Defense Policy Review Initiative may affect another, since various planning variables need to fall into place in order for the initiative to move forward.
For example, DOD officials expect that if the Futenma replacement facility in Okinawa (a facility intended to replace the Marine Corps Air Station Futenma and estimated to cost from $4 billion to $5 billion) is not built, the Marine Corps relocation to Guam may be delayed. DOD officials view the success of the Futenma replacement facility as a key objective of the initiative that will need to be completed in order for other realignment actions to take place. The Government of Japan may encounter challenges in funding its share of the Marine Corps move considering Japan’s other national priorities and its commitments associated with funding several other major realignments of U.S. forces in Japan under the Defense Policy Review Initiative. At the time of our review, the Japanese legislature had approved $228 million for planning and initial construction funds for force posture realignments, including efforts for project planning in Guam, and had authorized the Japan Bank for International Cooperation to invest in businesses for Guam development.

DOD officials also expressed concern regarding the department’s ability to obtain a continuous flow of funds adequate to pay its share of the current $13 billion cost estimate for the military buildup on Guam in light of ongoing operations and funding constraints and challenges. These officials said that obtaining funding for the military buildup on Guam at currently estimated levels will be difficult because of the pressures the department faces in funding other defense priorities and activities, including the ongoing operations in Iraq and Afghanistan and procurement costs for weapons systems. Also, there are other costs, not included in the $13 billion estimate, that are associated with the Marine Corps’ move to Guam and that will increase overall costs. Historically, the Government of Japan has paid a large portion of the operation and maintenance costs of the Marine Corps in Okinawa in the form of host nation support; after the move, these costs will be borne solely by DOD. For example, the DOD Inspector General reported that the relocation to Guam will increase the Marine Corps’ annual funding requirements by $465 million for operations and maintenance costs currently borne by the Government of Japan and for the costs of the additional strategic lift needed after the move. Additional costs will be incurred from building facilities that will house equipment and aircraft during inclement weather, and there may be additional incidental maintenance costs as a result of damage from typhoons and seismic shocks. Guam is located in an area of the Pacific commonly referred to as Typhoon Alley, where on average 31 tropical storms develop annually. Guam’s proximity to the Mariana Trench also exposes the island to earthquakes throughout the region. Marine Corps officials stated that in estimating Guam facility development costs, DOD took into account the additional costs of constructing to Guam’s typhoon and seismic standards—including concrete and structural reinforcement and the provision of backup and redundant utility systems. Estimated costs to build infrastructure in Guam are based on the DOD Facilities Pricing Guide. The area cost factors identify Guam as one of the more expensive locations for military construction in comparison with other locations in the United States and its territories.
Specifically, construction costs for Guam are 2.64 times the baseline average presented in the DOD Facilities Pricing Guide. The area cost factor is used by planners to adjust average historical facility costs to a specific project location, taking into consideration the costs of construction material, labor, and equipment, along with factors such as weather, climate, seismic conditions, mobilization, overhead and profit, labor availability, and labor productivity for each area. In addition, Marine Corps officials expect there will be periodic facility repair costs as a result of damage from typhoons and seismic shocks. Several operational challenges, such as providing appropriate mobility support and training capabilities to meet Marine Corps requirements, have not been fully addressed. For example, according to Marine Corps Forces, Pacific, officials, the Marine Corps in Guam will depend on strategic military sealift and airlift to reach destinations in Asia that will be farther away than was the case when the units were based in Okinawa. The Marine Corps depends on strategic lift for its operational and training-related movement needs, including transportation of forces and equipment. For example, in a contingency operation that requires sealift, the ships may have to deploy from Sasebo, Japan, or another location to pick up personnel and equipment in Guam and then proceed to the area of responsibility where the contingency is taking place. According to Marine Corps officials, amphibious shipping capability and airlift capacity are needed in Guam, which may include expanding staging facilities and systems support for both sealift and airlift. The Marine Corps estimated the additional costs for strategic lift operating from Guam to be nearly $88 million annually. Existing training facilities and ranges on Guam are not sufficient to meet the training requirements of the projected Marine Corps force. A DOD analysis of training opportunities in Guam concluded that no ranges on Guam are suitable for the needs of the projected Marine Corps force because they are inadequate in size or unavailable. The services are in the process of conducting a training study that includes Guam and the Commonwealth of the Northern Mariana Islands to assess the options for training in the region. Marine Corps Forces, Pacific, officials stated that live-fire artillery training, amphibious landings, and tracked vehicle operations will be challenging because of the limited size of the training areas available on the Northern Mariana Islands and the associated environmental concerns. Still, they are optimistic that the study, which will address environmental limitations, facility requirements, real estate requirements, and estimated costs, will result in the identification and development of new training areas. The effects of the increase in military forces, in terms of population and military infrastructure, on Guam's unique economic and infrastructure requirements have not been fully addressed. The current population of Guam is estimated to be 171,000, and the projected increase in the military population would raise it by about 15 percent. The active duty military personnel and dependent population in Guam is estimated at 14,195, and it is expected to increase to 39,130—an increase of 176 percent (see table 1).
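Two of the figures above, the 2.64 area cost factor and the projected population growth, can be verified with simple arithmetic. The following minimal Python sketch works through both; the baseline facility cost is a hypothetical figure used only for illustration.

    # Illustrative arithmetic only; the baseline facility cost is hypothetical.

    # Area cost factor: planners adjust average historical facility costs to a
    # location by multiplying by its factor (2.64 for Guam, per the DOD
    # Facilities Pricing Guide).
    AREA_COST_FACTOR = 2.64
    baseline_cost = 10_000_000  # hypothetical baseline facility cost, in dollars
    print(f"Guam-adjusted cost: ${baseline_cost * AREA_COST_FACTOR:,.0f}")

    # Population growth: active duty personnel and dependents (see table 1).
    current_military = 14_195
    projected_military = 39_130
    island_population = 171_000

    increase = projected_military - current_military
    print(f"Military population increase: {increase / current_military:.0%}")        # ~176%
    print(f"Share of current island population: {increase / island_population:.1%}")  # ~14.6%

As the last line shows, the roughly 25,000-person increase equals about 14.6 percent of the current island population before counting DOD civilians, contractors, or transient personnel.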
The population could also swell further because DOD's estimates do not include DOD civilians, contractors, or Navy transient personnel from an aircraft carrier. According to Navy officials, transient personnel from an aircraft carrier could add as many as 5,000 personnel on Guam during a port call. The sum of these increases is expected to have significant effects on Guam's unique economic and infrastructure requirements. For example: Construction capacity. As a result of Guam realignment actions, the construction demands for infrastructure will exceed the availability of local contract labor on the island, though the extent to which the local Guam community and foreign workers can meet this increase has yet to be determined. Historically, construction capacity on Guam has been approximately $800 million per year, compared with the more than $3 billion per year projected to be needed to meet the fiscal year 2014 completion date for realignment actions. Preliminary analysis indicates that 15,000 to 20,000 workers will be required to support the development on Guam. Consequently, the increased demand for workers may require workforce training for the local population and possibly the use of foreign workers. Foreign workers would have to enter the United States on temporary nonagricultural worker visas, which are capped at 66,000 per year, and DOD officials have already indicated that visa waivers might be needed to mitigate this annual limitation. Other challenges associated with an increase of foreign workers in Guam include providing support facilities and services, such as housing and medical care, for these workers, as well as possible social tensions between the local population and foreign workers because of job competition. Public infrastructure. The effects of the increased demand on Guam's roads, port capabilities, and utility services—such as electrical generation, wastewater treatment, and solid waste disposal—have not been fully addressed. DOD and Guam officials recognize that the island's infrastructure is inadequate to meet the projected demand and will require significant funding to address these needs. For example, the Government of Guam has estimated that it will cost about $2.6 billion to improve the local infrastructure to accommodate forecasted military and civilian growth on the island and that federal assistance is needed to meet these requirements. DOD officials and the Guam Integrated Military Development Plan identified several infrastructure areas in need of improvement: (1) the two major roads in Guam are in poor condition, and when ordnance (ammunition and explosives) is unloaded from ships for the Air Force now and for the Marine Corps in the future, it must be transported on one of these major roads, which runs through highly populated areas; (2) the Government of Guam plans a number of projects to upgrade the capability and efficiency of Guam's port facilities, totaling about $155 million, of which only $56 million was funded at the time of our review; (3) the utilities transmission lines are antiquated, the system is not reliable, and voltage and frequency fluctuations are common; (4) the wastewater treatment facilities have a long history of failing and are near capacity; and (5) the solid waste landfills have a number of unresolved issues related to discharge of pollutants and are near capacity.
Although the Government of Japan has agreed to provide $700 million for utilities infrastructure on DOD bases in Guam, this funding is neither intended nor sufficient to improve the infrastructure throughout the island. Future DOD operations may be constrained on Guam if improvements are not made to Guam's infrastructure. DOD land use on Guam. DOD officials initially told Guam officials that they could implement force structure plans with currently held land, although they are now reviewing the possibility of using additional land to prevent future encroachment. For example, the Guam Integrated Military Development Plan considered both existing and former DOD land areas for potential use to accommodate realignment actions. In terms of existing land, DOD owned about 40,000 acres of land in Guam at the time of this review—approximately 29 percent of the island. Former DOD land areas have previously been a part of the base realignment and closure process or released to the Government of Guam. There are political sensitivities to using former DOD land areas, since local community officials in Guam are concerned about the community's reaction to DOD's possible expansion of land holdings on the island. Funding uncertainties, operational challenges, and community impacts may not only affect the development of the Guam master plan but also increase costs for the U.S. government. Until DOD provides further information on how these challenges will be resolved, it will not know the precise costs of the Guam realignment plans to the U.S. government. DOD has begun efforts to create a successful partnership and coordinate with other federal departments and agencies, the Government of Guam, and other organizations, which are important in addressing Guam's unique economic and infrastructure requirements. At the same time, DOD has not established a comprehensive and routine process to keep Congress informed of its progress in dealing with these issues and the overall status of implementing the military buildup on Guam. In the absence of information on how these challenges will be addressed, Congress is not in a position to help ensure the best application of limited federal funds and the leveraging of all available options for supporting the military buildup on Guam. As U.S. overseas defense basing strategies and requirements continue to evolve, so do the department's master plans. The plans continue to improve each year by providing more complete, clear, and consistent information and descriptions of the challenges DOD faces overseas. However, we have previously recommended that overseas regional commands address the extent to which they are seeking residual value compensation for U.S. capital improvements at installations returned to host nations and that PACOM explain how it plans to address existing training limitations that may affect infrastructure and funding requirements. We believe that identifying these issues would give Congress an awareness of the potential challenges of recouping residual value from host nations and of training U.S. forces in South Korea, which may affect facility requirements and funding in these countries. We continue to believe that these recommendations have merit and that Congress would benefit from disclosure of this information. In July 2006, the Senate report accompanying the fiscal year 2007 military construction appropriation bill directed DOD to provide a master plan on the military buildup in Guam. DOD needs several more years to complete a master plan.
Completion of a Guam master plan depends on the outcome of the environmental impact assessments and statement, which could take up to 3 years to complete; on decisions that finalize the exact size and makeup of the forces to be moved to Guam; and on efforts that address challenges associated with the military buildup, including funding, operational requirements, and local economic and infrastructure needs. DOD's planning efforts for Guam are evolving, and up-to-date information on facility requirements and associated costs would be useful for funding decisions and assessments of all available options to assist DOD, federal departments and agencies, the Government of Guam, and other organizations in addressing the challenges associated with the military buildup. We are not recommending executive action. However, to further facilitate annual review and oversight by Congress and other users of the overseas master plans, Congress should consider requiring the Secretary of Defense to ensure that (1) future overseas master plans address the extent to which the regional commands are seeking residual value compensation for U.S. capital improvements at installations returned to host nations and (2) future PACOM plans address existing training limitations in its area of responsibility and the potential effects of those limitations on infrastructure and funding requirements. To help ensure the best application of limited federal funds and the leveraging of all available options for supporting the military buildup on Guam until DOD prepares a master plan, Congress should consider requiring the Secretary of Defense to report periodically to all the defense committees on the status of DOD's planning efforts for Guam, including DOD's efforts to complete its environmental impact statement, identify the exact size and makeup of the forces to be moved to Guam and the associated infrastructure required, and address the various challenges associated with the military buildup. In comments on a draft of this report, the Deputy Under Secretary of Defense for Installations and Environment responded that congressional action is not necessary. In commenting on our matter for congressional consideration to require that future overseas master plans address the extent to which commands are seeking residual value compensation, the Deputy Under Secretary of Defense stated that DOD already provides status reports on its residual value negotiations to the Committees on Appropriations and Armed Services and that prior legislation outlines reporting requirements on the closure of foreign military installations worldwide, with specific reporting requirements throughout the residual value negotiation process. While we were aware of these reporting requirements, the reports do not provide users of the master plans with the kind of information needed to address their concerns about the status of residual value negotiations or the implications for U.S. funding. Our recommendation to Congress is grounded in the fact that residual value issues vary by host nation, that the implications for U.S. funding vary accordingly, and that these implications thus may not be clear to all users of the plans. We continue to believe that the Secretary of Defense should require commands to explain the issues with obtaining residual value from each host nation and report the implications for U.S. funding.
In commenting on our matter for congressional consideration that future PACOM plans address training limitations in its area of responsibility, the Deputy Under Secretary of Defense responded that the department agrees that validated training requirements that are affected by force posture transformation plans should be addressed in overseas master plans. He further stated that nonprogrammed and nonvalidated training limitations experienced by service components were not appropriate for inclusion and would not be addressed in the overseas commands' risk assessments for their master plans. While we are not aware of any nonprogrammed and nonvalidated training limitations, our report discusses only those training limitations raised by senior command officials during our review. We assume that if there is a need to distinguish between nonvalidated and validated training limitations, OSD and the overseas commands would work together to identify those validated limitations that should be addressed in their master plans. In addition, last year OSD included in its guidance a requirement for the combatant commands to identify and discuss risks to their master plans as well as steps taken to mitigate those risks, including validated training requirements and limitations. In response to this guidance, U.S. Forces Japan provided information on training limitations, while U.S. Forces Korea omitted this information from its overseas master plan. This inconsistency led to our recommendation that Congress require such reporting, and we continue to believe that this information is necessary to provide a complete picture of the potential effects on infrastructure and funding requirements in South Korea. In commenting on our matter for congressional consideration that the Secretary of Defense report periodically to all the defense committees on the status of DOD's planning efforts for Guam, the Deputy Under Secretary of Defense responded that the Guam master plan is scheduled to be completed in 2008, at which time a copy will be provided to the congressional defense committees. It should be noted that Senate Report 109-286 directed DOD to submit a master plan for the military buildup on Guam by December 2006; however, DOD did not submit the plan, for several reasons that we discuss in this report. Moreover, because the master plan cannot be completed until the environmental impact statement is completed, a process that could take until 2009, Congress may not see the master plan for at least another 2 years. Also, DOD faces a variety of funding challenges, operational challenges, and community impacts that may both affect the development and timing of the Guam master plan and increase costs for the U.S. government. Thus, in the interim before receiving a master plan, congressional oversight could be enhanced by Congress periodically receiving an update on the planning efforts in Guam, including DOD's efforts to complete its environmental impact statement, identify the exact size and makeup of the forces to be moved to Guam and the associated infrastructure required, and address the various challenges associated with the military buildup. The Deputy Under Secretary of Defense's comments are reprinted in appendix II. DOD also provided technical comments on a draft of this report, which we incorporated where appropriate. We are sending copies of this report to the Secretaries of Defense, the Army, the Navy, and the Air Force; the Commandant of the Marine Corps; and the Director, Office of Management and Budget.
Copies will be made available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4523 or leporeb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. The GAO staff members who made key contributions to this report are listed in appendix III. To determine the extent to which the fiscal year 2008 overseas master plans have changed since last year, and the extent to which the plans address the challenges faced by the Department of Defense (DOD) during implementation, we compared the reporting requirements in the congressional mandate and the Office of the Secretary of Defense (OSD) guidance, which incorporated our prior recommendations. In order to identify improvements to the overseas master plan, we compared and contrasted the fiscal year 2007 and 2008 plans. We assessed the quantity and quality of one plan’s responses for each of the data elements, including details on base categories, host nation funding levels, facility requirements and costs, environmental remediation issues, and other issues affecting the implementation of the plans, and compared them to equivalent responses in other plans; formed conclusions as to the completeness, clarity, and consistency of the latest plan’s responses; and generated observations and recommendations for improving the plans. We also discussed with DOD officials our observations and recommendations, specific reporting requirements, and whether improvements in the guidance and reporting were needed. We also interviewed cognizant officials from DOD about the various changes and challenges that were identified within the plans. We met with officials from OSD and each of the following commands and agencies: U.S. Pacific Command (PACOM); U.S. Army Pacific; Commander, U.S. Pacific Fleet; U.S. Marine Corps Forces, Pacific; U.S. Pacific Air Forces; U.S. Forces Korea; U.S. Eighth Army; Seventh Air Force; Commander, Naval Forces Korea; U.S. Army Corps of Engineers, Far East District; DOD Education Activity; U.S. Forces Korea Status of Forces Agreement Office; U.S. Forces Japan; U.S. Army Japan; U.S. Air Forces Japan; Commander, Naval Forces Japan; U.S. Marine Forces Japan; Naval Facilities Engineering Command-Pacific, Japan; U.S. European Command; U.S. Army Europe; Commander, U.S. Naval Forces Europe; Naval Facilities Engineering Command-Italy; U.S. Air Force Europe; Army Installation Management Agency, Europe Regional Office; U.S. Central Command; and Special Operations Command. In general, we discussed the reporting requirements contained in OSD’s guidance, host nation agreements and funding levels, U.S. funding levels and sources, environmental remediation and restoration issues, property returns to host nations, and training requirements. We also analyzed available reports, documents, policies, directives, international agreements, guidance, and media articles to keep abreast of ongoing changes in overseas defense basing strategies and requirements. 
To see firsthand the condition of facilities and the status of selected construction projects, we visited and toured facilities at Camp Schwab, Camp Kinser, Camp Foster, Torii Station, Camp Zama, Yokosuka Naval Base, and Yokota Air Base, Japan; Camp Humphreys and Kunsan Air Force Base, South Korea; and Aviano Air Base, Caserma Ederle, Dal Molin, and Naval Support Activity La Maddalena, Italy. To determine the status of DOD's planning effort for the buildup of forces and infrastructure on Guam, we met with officials from OSD, the Navy, PACOM, and the Joint Guam Program Office (JGPO). In general, we discussed the development of a Guam master plan and the Integrated Military Development Plan with PACOM and JGPO officials. We also met with officials from U.S. Pacific Fleet; U.S. Marine Corps Forces, Pacific; U.S. Marine Forces Japan; Third Marine Expeditionary Force; U.S. Forces Japan; U.S. Army Pacific; and Pacific Air Forces to discuss the various factors that can affect U.S. infrastructure requirements and costs associated with the buildup in Guam. We visited Naval Base Guam and Andersen Air Force Base in Guam to see the installations and future military construction sites firsthand. We also reviewed DOD's military construction budgets for fiscal years 2007 and 2008, as well as those planned for future years, to identify U.S. funding levels and sources planned for the military buildup in Guam. To identify challenges associated with the buildup in this planning effort, we met with the aforementioned DOD officials and other interested parties in Guam, including the Governor, legislative leaders, the Chamber of Commerce, the Civil Military Task Force, the Guam Women's Group, and the Office of the Delegate from Guam to the U.S. House of Representatives. We did not evaluate concerns raised by the officials, but we reviewed relevant federal laws and discussed them with DOD officials. We also analyzed available reports, documents, international agreements, and media articles to keep abreast of ongoing activities in Guam pertaining to challenges that may affect DOD's development and implementation of a master plan. While we met with Special Operations Command officials, that command's planning efforts were not required to be included in the master plans prepared in response to the congressional mandates. In addition, we did not include Southern and Northern Commands in our analysis because these commands have significantly fewer facilities overseas than the other regional commands in the Pacific, Europe, and Central Asia. We conducted our review from September 2006 through July 2007 in accordance with generally accepted government auditing standards. In addition to the contact named above, Mark Little, Assistant Director; Nelsie Alcoser; Kate Lenane; Erika Prochaska; Roger Tomlinson; and Cheryl Weissman made major contributions to this report.
Over the next several years, implementation of the Department of Defense's (DOD) Integrated Global Presence and Basing Strategy will result in the realignment of U.S. forces and the construction of new facilities costing billions of dollars at installations overseas. The Senate and House reports accompanying the fiscal year 2004 military construction appropriation bill directed GAO to monitor DOD's overseas master plans and to provide congressional defense committees with assessments each year. The Senate report accompanying the fiscal year 2007 military construction appropriation bill directed GAO to review DOD's master planning effort for Guam as part of these annual reviews. This report, first, examines how the overseas plans have changed and the extent to which they address the challenges faced by DOD and, second, assesses the status of DOD's planning effort and the challenges associated with the buildup of military forces and infrastructure on Guam. The fiscal year 2008 overseas master plans, which provide infrastructure requirements at U.S. military facilities in each of the overseas regional commands' areas of responsibility, have been updated to reflect U.S. overseas defense basing strategies and requirements as well as GAO's prior recommendations for improving the plans. The plans also address DOD's challenges to a greater extent than they did in previous years. However, two areas continue to be of concern. First, the master plans do not address the issue of residual value--that is, the value of property being turned over to the host nation based on its reuse of the property. Although DOD officials believe that residual value cannot be readily predicted and therefore should not be in the master plans, compensation received for U.S. capital improvements at installations returned to host nations could affect U.S. funding requirements for overseas construction. Second, the master plan for PACOM, which provides details on the command's training limitations in Japan and several other challenges, does not provide details regarding training limitations for the Air Force in South Korea; such limitations could cause the United States to pursue alternatives, such as training in other locations, downsizing, or relocating, that could affect overseas basing plans. Without addressing the residual value issue and providing details on these training challenges, DOD cannot provide Congress a comprehensive view enabling it to make informed decisions regarding funding. GAO has previously recommended that overseas regional commands address residual value issues and that PACOM explain how it plans to address existing training limitations. Because these recommendations have not been fully addressed, GAO considers them to be open and believes that they still have merit. DOD's planning effort for the buildup of military forces and infrastructure on Guam is in its initial stages, with many key decisions and challenges yet to be addressed. Among the challenges to be addressed is completing the required environmental impact statement, initiated in March 2007. According to DOD officials, this statement and the associated record of decision could take up to 3 years to complete and will affect many of the key decisions on the exact location, size, and makeup of the military infrastructure development--decisions needed to develop a master plan for the military buildup on Guam.
DOD and the services are still determining the exact size and makeup of the forces to be moved to Guam, information that is needed to identify the housing, operational, quality of life, and services support infrastructure required for the Marine Corps realignment and the other services' buildup. DOD officials said that additional time is needed to fully address other challenges associated with the Guam military buildup, including funding requirements, operational requirements, and community impacts. Until the environmental assessment and initial planning efforts are completed, Congress will need to be kept abreast of developments and challenges affecting infrastructure and funding so that it can make appropriate funding and oversight decisions.
In December 1994, the Secretary of Housing and Urban Development announced a plan to reinvent the Department to transition it from a "lumbering bureaucracy to a streamlined partner with state and local governments." With the streamlining, the Secretary expects HUD to reduce its staffing from about 11,900 to 7,500 by the year 2000. In March 1995, the Secretary laid out the envisioned changes for HUD in a plan entitled HUD Reinvention: From Blueprint To Action. The plan was subsequently updated in January 1996. The HUD reinvention plan acknowledges that FHA is behind the times technologically and increasingly ill-equipped to manage its business. The plan notes that FHA needs to streamline operations and acquire state-of-the-art technology and information systems to transform itself into a results-oriented, financially accountable operation. One of the mandates of the plan is to reduce FHA staffing from about 6,000 to 2,500. As part of the downsizing, FHA's Office of Single Family Housing is planning to reduce its staff from a 1994 level of 2,700 to 1,150 by the year 2000. The mission of FHA's Office of Single Family Housing is to expand and maintain affordable home ownership opportunities for those who are unserved or underserved by the private market. Single family housing carries out its mission by insuring private lenders against losses on single family home loans. FHA's insurance operations target borrowers such as first-time home buyers, low-income and moderate-income buyers with little cash for down payments, residents of inner cities and rural areas with inadequate access to credit, minority and immigrant borrowers, and middle-income families in high cost areas. At the end of fiscal year 1995, FHA had insurance outstanding valued at about $350 billion on mortgages for 6.5 million single family homes. FHA processed an average of about 1 million applications for mortgage insurance and disposed of properties acquired from borrower defaults on over 50,000 loans annually during fiscal years 1994 and 1995. Single family housing operations consist primarily of four functions: loan processing, quality assurance, loss mitigation and loan servicing, and real property maintenance and disposition. The following summarizes these basic functions. Loan processing: FHA records data on loans originated by FHA-approved lenders, issues insurance certificates, and conducts underwriting reviews of loan documentation. FHA-approved lenders perform the underwriting tasks necessary to determine whether loans meet FHA's insurance guidelines. Quality assurance: FHA reviews selected loans to ensure that approved lenders are originating loans in accordance with FHA's guidelines. Loss mitigation and loan servicing: FHA attempts to resolve delinquencies to minimize losses that can result if borrowers default on loans. FHA's loss mitigation efforts have generally involved (1) placing delinquent loans in the mortgage assignment program, which offered reduced or suspended payments for up to 3 years to allow borrowers to recover from temporary hardships, (2) offering alternative default resolution actions such as refinancing, or (3) using preforeclosure sales of homes. FHA services loans that are in the mortgage assignment program, which includes collecting monthly payments, paying property taxes, and maintaining accounting records. Property maintenance and disposition: FHA acquires properties through voluntary conveyances by borrowers or through foreclosures.
FHA inspects and secures the properties, performs necessary repairs, and sells the properties. The functions performed by FHA generally parallel those performed by other organizations in the single family mortgage industry, such as Fannie Mae, Freddie Mac, and large private mortgage insurance corporations. However, the functions FHA performs differ from those of the other organizations because of differing business objectives. Fannie Mae and Freddie Mac are government-sponsored, privately owned enterprises that purchase mortgages from lenders and (1) hold them as investments in their portfolios or (2) sell securities that are backed by mortgage pools. Therefore, in addition to the functions FHA performs, Fannie Mae and Freddie Mac also establish purchase prices for mortgages, negotiate purchase contracts, and market mortgage-backed securities. Private mortgage insurers perform the same functions as FHA and also perform loan underwriting for a significant portion of the loans they insure. Similar to FHA, private mortgage insurers also accept loans underwritten by lenders to whom they have delegated the authority to initiate insurance. FHA performs some functions that are unique in the mortgage industry. For example, FHA sells houses from its real property inventory to interested nonprofit organizations, states, and local governments, and FHA works with local community development officials in their efforts to increase home ownership opportunities. Because of its mission, FHA accepts higher levels of risk on many of the mortgages it insures. FHA covers 100 percent of losses on the mortgages it insures, whereas Fannie Mae and Freddie Mac share losses with mortgage insurers and private insurers share losses with mortgage lenders. FHA also insures higher risk mortgages because it accepts higher loan-to-value and borrower debt-to-income ratios than the private mortgage insurers. In addition, FHA has a proportionally higher volume of defaults that it must manage and a higher volume of real property maintenance and disposition activities. Historically, FHA's single family housing operations have had significant management control problems in originating insured loans, resolving delinquencies, managing assigned mortgages, and managing property maintenance and disposition activities. Information system weaknesses have been cited as a contributing factor in many of FHA's management control weaknesses. For example, independent audit reports have cited FHA systems that collect delinquency data and track default resolution actions as inadequate to support oversight responsibilities and as factors contributing to inadequate loss mitigation efforts. Similarly, FHA's information systems have not adequately supported the tracking and monitoring of collection and foreclosure actions on loans in the mortgage assignment program. In addition, the lack of information system support for controlling and accounting for properties assigned to real estate brokers for property disposition was cited as a major cause of the highly publicized HUD scandals in the 1980s. According to HUD's Federal Managers' Financial Integrity Act (FMFIA) compliance reports and independent auditors' reports for fiscal years 1994 and 1995, FHA has corrected system weaknesses in the mortgage assignment and property disposition areas but is still developing systems to support delinquency monitoring and resolution.
FHA plans to use its existing information technology capabilities to facilitate some streamlining and staff reduction initiatives, while other initiatives will require new information technology applications. FHA plans to achieve the majority of the single family housing staff reductions by reducing its field staff performing loan processing from about 600 to 310, loss mitigation and loan servicing from 600 to 90, and real property maintenance and disposition from 750 to 75. FHA also plans to reduce its single family housing headquarters staff from about 200 to 85. Some of these reductions will be offset by increases in field staff performing quality assurance, marketing and outreach, legal, and administrative support functions. While the staff levels for each function are not final, single family housing officials expect to reach the projected 1,150 staffing target. The planned reduction of loan processing staff is to be achieved by expanding the use of existing electronic data transfer capabilities, which reduces the amount of data entry performed by FHA staff, and by consolidating operations into fewer locations. New information system support will be needed for FHA's planned changes to loss mitigation and disposition operations. To reduce its loan processing staff, FHA is (1) expanding the use of its electronic data transfer capabilities so that fewer staff are needed to enter data into systems from paper documents and (2) using its information systems to support the consolidation of operations from 81 offices to 5 offices. FHA established its electronic data transfer capabilities for loan processing and made them available to lenders in 1991. In fiscal year 1995, lenders submitted about 35 percent of loan data electronically. To take further advantage of this capability, FHA plans to ask lenders to increase the use of electronic transfers to deliver loan data. In 1994, FHA began consolidating loan processing operations into fewer offices to increase efficiency. According to officials responsible for single family loan processing operations, variations in workloads have resulted in idle time for loan processing staff at some field offices, while staff at other field offices have been overloaded and processing has been backlogged. Consolidating the work in fewer locations helps eliminate the variations in workload and increase the efficiency of operations, thus reducing the number of staff needed to perform the work. FHA is consolidating into its Denver office the loan processing workload that had been performed in 17 field offices. The Denver office is using 42 staff for the loan processing work that was performed by an estimated 96 staff before the consolidation. FHA officials in charge of the pilot attributed the increased efficiency to consolidating the work at one site and increasing the use of electronic data transfer to submit loan data to FHA. According to Denver project officials and documentation, FHA persuaded lenders to increase their use of electronic data transfer from less than 40 percent of all submissions before the consolidation to about 90 percent after consolidation. They also said that loans submitted electronically can be processed in about one-third the time it takes to process loans submitted in paper form. When lenders electronically transfer the loan data, the loan processing staff need only check that data against the paper forms submitted by the lender.
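These figures imply a simple workload model: if an electronic submission takes about one-third the staff time of a paper one, raising the electronic share from under 40 percent to about 90 percent sharply cuts the average time per loan. The minimal Python sketch below works through that arithmetic; the 60-minute paper processing time is a hypothetical figure used only for illustration.

    # Hypothetical illustration of the Denver pilot's workload arithmetic.
    # Assumption: a paper submission takes 60 staff minutes (illustrative);
    # the report indicates an electronic one takes about one-third of that.
    PAPER_MINUTES = 60.0
    ELECTRONIC_MINUTES = PAPER_MINUTES / 3.0

    def avg_minutes_per_loan(electronic_share):
        """Average staff minutes per loan for a given electronic-submission share."""
        return (electronic_share * ELECTRONIC_MINUTES
                + (1.0 - electronic_share) * PAPER_MINUTES)

    before = avg_minutes_per_loan(0.40)  # just under 40 percent electronic
    after = avg_minutes_per_loan(0.90)   # about 90 percent electronic
    print(f"Before: {before:.0f} min/loan; after: {after:.0f} min/loan")
    print(f"Staff-time reduction: {1 - after / before:.0%}")

Under these assumptions, electronic submission alone accounts for roughly a 45 percent reduction in staff time per loan, consistent with, though smaller than, the 96-to-42 staff reduction observed in Denver; the remainder presumably reflects the efficiencies of consolidating the work at one site.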
When data are not submitted electronically, staff must enter the loan data from the paper forms into FHA's information systems. With the initial consolidation and processing changes, the Denver office's loan processing times were reduced from between 5 and 8 days to an average of 2 days, according to FHA. At the time of our review, the Denver consolidation was substantially complete. In April 1996, FHA announced that it would start consolidating loan processing operations in 32 field offices in eastern states into 2 offices—Philadelphia and Atlanta. FHA plans to complete these consolidations in 1997 and start consolidating the remaining offices in 1998. For loss mitigation, FHA plans to phase out its staff-intensive mortgage assignment program—which is expected to reduce loan servicing staff from about 600 to 40—and implement a new default resolution program that is to be performed by 50 staff. The new processes are to be supported by a new system that will employ electronic data transfers for lender reporting of actions to resolve mortgage payment delinquencies. FHA's mortgage assignment program was cited in independent auditors' reports on FHA financial statements and HUD's FMFIA compliance reports for fiscal years 1994 and 1995 as a material management control weakness because of extensive losses from uncollected payments. Under the mortgage assignment program, FHA (1) pays the lender for defaulted loans, (2) offers the borrowers reduced or suspended payments for up to 3 years to help them overcome temporary hardships, and (3) services the loans while they are in this program. In October 1995, we reported that while the program helps borrowers avoid immediate foreclosure, in the longer term about 52 percent of the borrowers eventually lose their homes through foreclosure. We also reported that FHA's losses will total about $1.5 billion more than they would have in the absence of the program. As part of HUD's fiscal year 1996 appropriations act, the Congress included a provision directing FHA to stop accepting delinquent loans into the mortgage assignment program and providing FHA with increased flexibility to use loss mitigation alternatives. In addition to not accepting loans into the program, FHA is selling mortgages from the program portfolio to reduce the workload associated with servicing them. FHA's current processes and system have also been labor intensive because lenders report delinquency data on paper documents that require manual handling and data entry. In addition, the automated system can produce only reports that list data for each lender; it does not summarize data concerning the timeliness of actions, the alternatives selected, or the results of resolution actions. To improve efficiency, FHA modified its system to accommodate electronic data transfers of the delinquency data from lenders and issued instructions requiring all lenders to submit delinquency reports electronically by the end of 1997. FHA also plans to develop a new system to track and analyze lenders' use of available loss mitigation alternatives to resolve mortgage delinquencies. FHA is considering using one or more of three alternatives to replace the current property maintenance and disposition operations and reduce staff.
These alternative approaches include (1) using contractors to maintain and dispose of properties, (2) forming and using joint ventures with other organizations (similar to using contractors, except that the partner will have an investment in the venture) to maintain and dispose of properties, and (3) selling the defaulted mortgages rather than acquiring the properties. FHA officials responsible for property maintenance and disposition said that FHA will need new information technology support to track and manage the new operations regardless of which alternative is chosen. FHA is testing the use of contractors to perform property maintenance and disposition activities for three field offices and has contracted for feasibility studies of the other two alternatives. FHA plans to complete its analyses of the studies in mid-1997 and decide which of the alternative approaches it will use. FHA's planned information technology initiatives are similar to those undertaken by other mortgage industry organizations to increase productivity. Additional efficiency and effectiveness improvements may be possible if FHA incorporates other information systems capabilities used by these organizations. The mortgage industry organizations we visited have been using electronic data transfer extensively to eliminate or reduce the manual processes associated with the receipt and processing of data from paper documents. For example, Fannie Mae and Freddie Mac have had lenders submit loan data electronically for more than 2 years. These organizations have also consolidated their loan processing, loss mitigation, and property disposition operations to increase efficiency and improve the consistency of operations and management controls. As a result of the shift to electronic data transfer and the consolidation of operations, officials of these organizations stated that they achieved productivity improvements of up to 250 percent for the loan processing function. FHA may be able to achieve greater efficiency and effectiveness if it adopts the automated capabilities used by the other mortgage industry organizations. These capabilities include (1) the ability to electronically analyze loan data to ensure that loans meet underwriting guidelines and (2) the use of computer models to automatically focus quality assurance activities on areas with the most vulnerability, select the most promising default prevention alternatives for delinquent loans, and analyze repair and marketing data to identify options that will minimize losses and provide the greatest returns on property repair and disposition activities. In addition, officials of the mortgage industry organizations we visited told us that they achieved further staff efficiencies through extensive use of graphical user interfaces, integration with other systems, and telecommunications to facilitate data acquisition and correspondence. According to information provided by Freddie Mac and Fannie Mae officials, these organizations are able to process similar loan volumes with about 20 percent of the staff planned for FHA loan processing operations because (1) all essential data for delegated loan underwriting are submitted electronically rather than in paper form, (2) their systems electronically perform all edit checks and comparisons against underwriting criteria, and (3) their systems use mortgage scoring models to automatically identify loans with the greatest risk of default for underwriting and other quality assurance purposes.
Conversely, FHA requires lenders to submit paper files that staff use to check data submitted electronically, enter data not submitted electronically, and perform compliance checks. According to loan processing staff at the Denver pilot site, working with the paper documents consumes over 90 percent of the processing time. The remaining time is used to deal with exceptions, such as notifying lenders of missing or incorrect data. Since Freddie Mac's and Fannie Mae's systems have automated edits and compliance checks, their staffs need to work only with exception cases. Freddie Mac's and Fannie Mae's systems also use mortgage scoring models to electronically perform underwriting reviews that FHA performs manually with the paper documents in the loan files. Freddie Mac, Fannie Mae, and private mortgage insurers use other models in their systems that have increased staff productivity. These include models that electronically analyze data to help them select (1) the most promising default prevention alternatives for delinquent loans and (2) the repair and marketing options that minimize losses and provide the greatest returns. For example, officials of one organization stated that by using a model to determine whether repairs would increase sales proceeds, they realized $40 million of returns on $15 million of repair investments last year. Officials of another organization said their models have helped to reduce real property disposition losses by about $13,000 for each home. Officials from Fannie Mae, Freddie Mac, and the private mortgage insurers also cited efficiency improvements through the use of graphical user interfaces, integration with other systems so that needed data are readily available, and telecommunications to facilitate the transfer of data from other databases and the transmission of business correspondence. In the real property maintenance and disposition function, for example, one organization reported a 50-percent increase in the productivity of workers when its new system was implemented. According to officials, the new system's graphical user interfaces enabled workers to quickly, easily, and electronically extract data from other systems, analyze investment options, and prepare and send correspondence by facsimile or electronic mail. The Deputy Assistant Secretary for Single Family Housing told us that FHA (1) recognizes the potential for using information technology to further improve the efficiency and effectiveness of operations and (2) intends to incorporate the best available technologies and move to a paperless work environment. However, the official added that FHA faces several challenges in making these information system improvements. For example, FHA officials stated they must deal with budget and procurement limits and with the lack of the skilled managers and technical staff needed to quickly develop and implement the needed information systems. In this regard, as part of its efforts to improve operations, FHA officials told us that they are considering using the expertise of other organizations. For example, FHA recently entered into an agreement with Freddie Mac to use a modified version of Freddie Mac's mortgage scoring system for loan origination. This system helps speed the lenders' loan origination process and reduce their costs by using mortgage scoring models to more efficiently and effectively analyze risks associated with borrower credit and loan characteristics.
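To make the mortgage scoring concept concrete, the following is a minimal, hypothetical Python sketch. It is not Freddie Mac's actual model; the logistic form, the weights, and the two risk factors used here (loan-to-value and debt-to-income ratios, characteristics this report notes are associated with higher risk) are invented for illustration, and a real scoring system would fit its parameters to historical loan performance data.

    import math

    def mortgage_risk_score(loan_to_value, debt_to_income):
        """Toy logistic scoring model returning an estimated default risk in [0, 1].
        The intercept and weights are hypothetical, not fitted values."""
        linear = -6.0 + 4.0 * loan_to_value + 6.0 * debt_to_income
        return 1.0 / (1.0 + math.exp(-linear))

    # Flag the riskiest loans for manual underwriting review, as the automated
    # systems described above are said to do.
    loans = [
        {"id": "A", "loan_to_value": 0.80, "debt_to_income": 0.28},
        {"id": "B", "loan_to_value": 0.97, "debt_to_income": 0.41},
    ]
    for loan in loans:
        score = mortgage_risk_score(loan["loan_to_value"], loan["debt_to_income"])
        action = "refer for manual review" if score > 0.5 else "accept"
        print(f"Loan {loan['id']}: estimated risk {score:.2f} -> {action}")

In a production system, the review threshold would itself be a policy choice, balancing the cost of manual underwriting against the losses avoided by catching risky loans.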
The Freddie Mac system is being modified to reflect FHA's underwriting criteria and historical experience with insured mortgages. Freddie Mac and FHA are testing the system to determine whether lenders can achieve similar benefits for FHA mortgages without adversely affecting applicants who would otherwise qualify for FHA insurance. FHA has also established a process for approving lenders' use of other automated loan origination systems. A strong system of management controls and adequate information and financial management systems are key ingredients in helping federal officials manage operations and control risks. For many years, single family housing has had significant management control problems in its loan origination, delinquency resolution, and property disposition activities. Information system weaknesses have been cited in FMFIA compliance reports and independent audit reports as contributing factors in the last two management control weaknesses. FHA has been taking corrective actions to address these control weaknesses as part of its ongoing efforts to improve management controls. Some of these actions include the use of information technology. Because FHA is still in the planning stages for its streamlining initiatives, sufficient information is not available at this time to assess the impact that streamlining actions will have on management controls. Appendix II describes the status of efforts to address control weaknesses. Office of Single Family Housing officials recognize that FHA needs to invest in information technology to achieve the efficiency and effectiveness of leading mortgage organizations. In making future decisions on technology acquisitions, the agency can incorporate the technology investment framework established by the new Information Technology Management Reform Act of 1996 (ITMRA), which is based on industry best practices. Some of FHA's information technology needs are described in single family housing's 1995 Information Strategy Plan. The plan discusses FHA's current information technology environment and shortfalls and proposes investments to provide improved management controls, expanded capabilities to analyze existing data for evaluating performance and setting policy, and expanded capabilities to automate all critical functions with state-of-the-art technology. The plan was developed using a widely accepted approach to identify needed information technology improvements, including (1) an analysis of the goals and objectives specified in the Office of Single Family Housing's Business Strategy Plan and (2) a survey of information systems users to identify weaknesses and opportunities to automate tasks and enhance efficiency or effectiveness. In formulating the streamlining plans, the Deputy Assistant Secretary for Single Family Housing and the directors of some program areas contacted officials of Freddie Mac, Fannie Mae, and selected private insurers to discuss how their operations differ from FHA's operations. These streamlining efforts include planning operational changes and information technology applications. The efforts have not included the data collection and analysis needed to benchmark system support in terms of costs and performance or to calculate the benefits, costs, and potential return on investment of the information technology investments.
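To illustrate what such a calculation could look like, here is a minimal Python sketch that ranks candidate investments by a simple risk-adjusted return on investment, a selection criterion of the kind ITMRA specifies. All project names, dollar figures, and probabilities are hypothetical.

    # Hypothetical sketch: ranking IT investment alternatives by a simple
    # risk-adjusted return on investment. All figures are invented.
    projects = [
        # (name, expected annual benefit ($), cost ($), probability of success)
        ("Electronic data transfer expansion", 4_000_000, 1_500_000, 0.90),
        ("Automated underwriting models",      6_000_000, 3_000_000, 0.70),
        ("Property disposition models",        2_500_000, 1_000_000, 0.80),
    ]

    def risk_adjusted_roi(benefit, cost, p_success):
        """Discount the expected benefit by the probability of success, then
        express the net return as a fraction of cost."""
        return (benefit * p_success - cost) / cost

    for name, benefit, cost, p in sorted(
            projects, key=lambda x: risk_adjusted_roi(x[1], x[2], x[3]), reverse=True):
        print(f"{name}: risk-adjusted ROI {risk_adjusted_roi(benefit, cost, p):.0%}")

A fuller analysis would also weigh the qualitative criteria that the act requires alongside these quantitative measures.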
As FHA continues its planning effort and begins sorting through its investment alternatives, effective implementation of the recently enacted ITMRA could help FHA maximize the value of its investments. Although the act was not in effect at the time FHA selected and began implementing its current initiatives, the act provides an analytical framework that will be helpful as FHA continues to streamline its operations and make improvements using information technology. The act specifies that where comparable processes and organizations exist in the public or private sectors, the agency head is to quantitatively benchmark agency process performance against such processes in terms of cost, speed, productivity, and quality of outputs and outcomes. ITMRA also requires agency heads to (1) analyze mission-related processes before making investments and (2) implement a process for maximizing the value and assessing and managing the risks of their information technology investments. The process, among other things, is to provide for the use of minimum investment selection criteria, including risk-adjusted return on investment, and specific quantitative and qualitative criteria for comparing and prioritizing alternative information systems projects. In addition to the act, the Office of Management and Budget’s information technology investment guide, issued in November 1995, establishes key elements of the investment process for agencies to follow in selecting, controlling, and evaluating their information technology investments. According to HUD’s Office of Information Technology, the Department plans to have its Technology Investment Board ensure that the investment provisions of ITMRA are implemented. HUD established the Board in fiscal year 1994 to evaluate, rank, and select proposed information technology investments for all HUD components, including FHA. The Board’s charter has been recently revised to charge it with following ITMRA capital planning and performance-based management requirements, including determining whether the functions supported by the proposed investments should be performed by the private sector or another agency. HUD plans to incorporate ITMRA investment requirements, including quantified benefit and risk management criteria, into its strategic investment process. FHA is planning to streamline its single family housing operations to increase efficiency and meet mandated staff reductions. Information technology figures prominently in the plans to support and enable the operational changes that are being contemplated. Thus far, the planned actions are consistent with, but are not as extensive as, efficiency improvement actions taken by leading mortgage industry organizations. However, the streamlining efforts are still in the early stages and, as these efforts continue, FHA will be making decisions on specific operational changes, information technology applications, and management controls that will determine the efficiency and effectiveness of operations and the achievement of staff reduction goals. In doing so, it can use the recently enacted Information Technology Management Reform Act of 1996 to establish an effective framework for making these information technology decisions. On September 13, 1996, we discussed a draft of this report with officials from FHA’s Office of Single Family Housing. In general, the officials agreed with the facts and conclusions. 
FHA officials suggested some clarifications to our report, and we have incorporated the suggested changes where appropriate. We are sending copies of this report to the Ranking Minority Members of your Subcommittees; interested congressional committees; the Secretary of Housing and Urban Development; the Assistant Secretary for Housing-Federal Housing Commissioner; the Director, Office of Management and Budget; and other interested parties. We will also make copies of this report available to others on request. Please call me at (202) 512-6240 if you or your staffs have further questions. Major contributors to this report are listed in appendix III. As requested by the Chairs of the Subcommittee on Government Management, Information and Technology and the Subcommittee on Human Resources and Intergovernmental Affairs of the House Committee on Government Reform and Oversight, our objectives were to determine (1) how FHA plans to use information technology to support the streamlining of single family housing operations and reduce staff, (2) whether FHA's planned initiatives are similar to those undertaken by leading mortgage organizations to increase productivity, and (3) what FHA is doing to ensure that information technology initiatives will maintain or improve management controls over single family housing operations. To determine how FHA plans to use information technology to streamline single family housing operations, we identified specific reinvention initiatives planned to reduce staff, obtained an explanation of how information technology will be used for each of the reinvention initiatives, and determined the basis for the estimated staff reductions from the new uses of information technology or innovative practices. We obtained and reviewed HUD's plan entitled HUD Reinvention: From Blueprint To Action and the January 1996 update to the plan. To identify how FHA plans to use information technology in its streamlining initiatives, we (1) obtained a briefing from the Deputy Assistant Secretary for Single Family Housing, (2) interviewed officials in each functional area, and (3) obtained and analyzed documentation on planned streamlining initiatives, including single family housing's July 1995 Business Strategy Plan and September 1995 Information Strategy Plan. We also reviewed provisions in HUD's fiscal year 1996 appropriations authorization that allowed changes to FHA's mortgage assignment program and loss mitigation operations. In addition, we reviewed and analyzed proposed regulations and instructions to lenders on new operating procedures. As part of our work to determine whether the planned information technology initiatives can help achieve the projected staff reductions and efficiencies, we analyzed information from FHA's pilot test of consolidated loan processing operations, identified information technology applications and systems used by other mortgage industry organizations, and compared FHA's reinvented processes and systems to those of the other mortgage organizations. For the consolidated loan processing operations that were pilot tested in FHA's Denver field office, we interviewed FHA officials, reviewed documentation on operating procedures and workload data, and observed processes and systems in operation.
We interviewed officials at Fannie Mae, Freddie Mac, and the two largest private mortgage insurers in the United States—Mortgage Guaranty Insurance Corporation and GE Capital Mortgage Insurance—observed operations, and obtained documentation on the processes and systems used in their single family mortgage operations. We did not verify data provided by officials of these organizations concerning staff numbers, workload, productivity, and savings produced by information technology investments. We analyzed and performed general comparisons of FHA's planned operating procedures, information systems, and staffing levels to those of the other mortgage organizations. The comparisons were performed to identify major differences and did not include detailed analyses of work processes. To ascertain what FHA has done to ensure that information technology initiatives will maintain or improve management controls over single family housing operations, we reviewed plans for proposed operations and systems to determine how they specifically address reported control weaknesses. To identify reported control weaknesses, we reviewed and analyzed HUD's Federal Managers' Financial Integrity Act (FMFIA) compliance reports for fiscal years 1994 and 1995, independent auditors' reports on FHA financial statements for fiscal years 1994 and 1995, and the HUD Inspector General's reports on single family housing operations. We also interviewed FHA officials to obtain their views on how information technology initiatives will address management control weaknesses. We visited FHA's Office of Single Family Housing in Washington, D.C.; FHA's field office in Denver, Colorado; Fannie Mae in Washington, D.C.; Freddie Mac in McLean, Virginia; Mortgage Guaranty Insurance Corporation in Milwaukee, Wisconsin; and GE Capital Mortgage Insurance in Raleigh, North Carolina, and Memphis, Tennessee. We performed our work between December 1995 and August 1996 in accordance with generally accepted government auditing standards. We requested comments from the Secretary of Housing and Urban Development or his designee. On September 13, 1996, we discussed the facts and conclusions in our report with cognizant HUD officials. Their comments are discussed in the "Agency Comments" section of this report. HUD has experienced long-standing deficiencies in its internal controls and information and financial management systems. Specifically, the Office of Single Family Housing has had significant management control weaknesses in loan origination, delinquency resolution, and property disposition. While planned single family housing initiatives may help resolve management control weaknesses, insufficient information is available to assess them because detailed operating procedures and system designs have not yet been developed. In 1992, we reported inadequate oversight of loan origination and underwriting activities as a material management control weakness. The problems included fraudulent activities of borrowers, real estate agents, and lenders; approval of loans exceeding the statutory loan limit; inadequate assessment of applicants' repayment ability; and inflated appraisals. FHA experienced high losses in the single family mortgage program because of improper loan origination activities. HUD's FMFIA compliance report and independent auditor's report for fiscal year 1995 discuss FHA's actions to correct the loan origination and underwriting management control weakness. According to the FMFIA report, the control weakness has been corrected but not yet validated.
The corrective actions include standardizing the monitoring of lenders' loan underwriting practices and establishing a mechanism to follow up on and track sanctions imposed on lenders that do not adhere to FHA underwriting requirements. FHA is also planning to expand staff in the Quality Assurance Division to enhance loan origination oversight as part of its streamlining efforts. FHA's plans also include a proposal for a data warehouse system to make lender data available to support underwriting and quality assurance operations. In its fiscal year 1995 report, the independent auditor recommended that FHA continue and accelerate these initiatives to address the control weaknesses. Because FHA's initiatives to correct its loan origination weaknesses—including the design of the data warehouse system—are still being planned, sufficient information is not available to assess their impact on management controls. In HUD's FMFIA compliance reports and independent auditors' reports for fiscal years 1994 and 1995, default monitoring and loss prevention are identified as material management control weaknesses. The FMFIA reports stated that FHA did not emphasize working with borrowers to cure defaults and delinquencies and that many lenders did not report on the default status of borrowers. Contributing to these weaknesses was an inadequate information system for collecting delinquency data and tracking default resolution actions. The lack of management controls has resulted in high default and foreclosure rates and a large inventory of defaulted loans. Industry experience indicates that effective monitoring of delinquent mortgages and early intervention help borrowers experiencing financial hardships and help reduce losses. To correct the default monitoring and loss prevention management control weaknesses, FHA is (1) assessing penalties against lenders who are negligent in reporting defaulted mortgage loans and (2) enhancing the Single Family Default Monitoring System to track lender and servicer use of mitigation tools and to provide default rates and other information for evaluating and providing feedback to lenders and servicers. Coupled with these actions, FHA established the Office of Loss Mitigation in 1995 and is implementing new loss mitigation alternatives. In assessing FHA's efforts to improve loss mitigation operations, the independent auditor's report on FHA's fiscal year 1995 financial statements stated that use of the new loss mitigation alternatives should help FHA reduce claims and losses. However, the report also stated that FHA does not yet have the appropriate tools to monitor the use of the loss mitigation programs and their costs. According to an official responsible for loss mitigation operations, FHA is developing the detailed operating procedures and the design and requirements for new systems to support these operations. Since these plans have not yet been developed, it is too early to assess whether the actions will strengthen management controls. In 1992, we reported the disposition of single family foreclosed properties as a material management control weakness that resulted in financial losses. These losses and problems were part of the highly publicized HUD scandals. Among the factors contributing to the management control weakness were (1) inadequate oversight of property management, collection of sales proceeds, and services provided by third parties and (2) inadequate information system support of the disposition process.
In HUD’s fiscal year 1994 FMFIA compliance report, the property disposition material weakness was listed as corrected. The corrective actions included implementation of an information system to manage the property disposition process. This issue was not identified as a control weakness in the independent auditor’s report for fiscal year 1994. Although the control weakness is now considered to be corrected, it is important to continue adequate management control over this area after it is streamlined. As discussed earlier, FHA is considering which one or more of three streamlining alternatives it will use to perform real property maintenance and disposition and foreclosed mortgage disposition activities. FHA’s decision will impact on the management controls and information systems support requirements. Until decisions are made and detailed plans are prepared, sufficient information is not available to assess how the changes will impact on management controls. Bennet Severson, Senior Evaluator Joe Sikich, Information Systems Analyst The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. VISA and MasterCard credit cards are accepted, also. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office P.O. Box 6015 Gaithersburg, MD 20884-6015 Room 1100 700 4th St. NW (corner of 4th and G Sts. NW) U.S. General Accounting Office Washington, DC Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed the Federal Housing Administration's (FHA) streamlining plans, focusing on: (1) FHA plans to use information technology to support the streamlining of single-family housing operations and reduce staff; (2) similarities between FHA initiatives and those undertaken by leading mortgage organizations to increase productivity; and (3) FHA efforts to ensure that technology initiatives will maintain or improve management controls over single-family housing operations. GAO found that: (1) FHA plans to use existing information technology capabilities to facilitate some streamlining and staff reduction initiatives, while other initiatives will require new information technology applications; (2) FHA plans to reduce single family housing staff from its 1994 level of about 2,700 to 1,150 in the year 2000 by: (a) expanding the use of existing electronic data transfer capabilities and using information systems to support the consolidation of loan processing operations from 81 offices to 5 offices; (b) implementing new loss mitigation processes that will be supported with a new information system; and (c) using information technology to support new processes associated with conducting real property maintenance and disposition operations or selling defaulted mortgage notes rather than foreclosing on properties; (3) FHA plans to incorporate information technology initiatives that are similar to, but not as extensive as, those used by other mortgage industry organizations to improve productivity; (4) further improvements may be achieved if FHA adopts other automated capabilities used by these organizations; (5) some of FHA's planned changes may help resolve management control weaknesses or maintain adequate controls for loan origination, loss mitigation, and property disposition; (6) however, GAO was unable to assess the impact of the planned changes because FHA has not yet made all of the decisions, developed the detailed operating procedures, or identified the information systems requirements that will be needed to implement the planned initiatives and management controls; (7) FHA officials recognize that additional information technology investments are needed to achieve the efficiency and effectiveness of other mortgage organizations; (8) however, they added that they must deal with budget and procurement limits and technical skills shortfalls to make needed improvements; (9) in this regard, FHA is considering using the expertise of other organizations; (10) in making future technology acquisitions, FHA can take advantage of the recently enacted Information Technology Management Reform Act of 1996, which establishes a framework for information technology decisionmaking and implementation based on best industry practices.
Congress enacted the earned income credit (EIC) in 1975 with the goal of offsetting the Social Security taxes paid by the working poor and creating a greater work incentive for low-income taxpayers. According to data cited in the task force report, an estimated 4.3 million individuals were lifted out of poverty in 1998 by the EIC, including 2.3 million children. The EIC is a refundable tax credit, meaning that qualifying working taxpayers may receive a refund greater than the amount of income tax paid during the year. Taxpayers can qualify for the credit in one of two ways: with a "qualifying child" or by "income only," if they do not have a qualifying child. For example, for tax year 2002, the amount of EIC that could be claimed with a qualifying child or children ranged from $0 to $4,140. EIC payments have a phase-in range in which higher incomes yield higher EIC amounts, a plateau phase in which EIC amounts remain the same even as income rises, and a phase-out range in which higher incomes yield lower EIC amounts. EIC requirements for tax year 2002 include rules for everyone, additional rules for taxpayers with qualifying children, and additional rules for taxpayers without qualifying children, as shown in table 1. IRS has periodically measured EIC compliance for overclaims and underclaims. The most current data available, for tax year 1999, show EIC overclaim rates estimated to be between 27 and 32 percent of dollars claimed, or between $8.5 billion and $9.9 billion. IRS has limited data on underclaims, which for tax year 1999 were estimated to be between $710 million and $765 million. IRS has tried to reduce noncompliance through various means, including education and outreach to taxpayers and tax return preparers. In addition, Congress has enacted legislation aimed at resolving some concerns with EIC rules. Because a new analysis of EIC compliance using 2001 tax return information is not expected to be complete until late in 2004, IRS does not know whether compliance has significantly changed since 1999, but officials do not think it has improved substantially. Because of the persistently high rates of noncompliance, we have identified the EIC program as a high-risk area for IRS since 1995. Currently, taxpayers claim the EIC by filing an individual income tax return (e.g., a Form 1040 or 1040A) and including a Schedule EIC—a procedure similar to that for claiming other tax credits. Unlike with other benefit programs such as Supplemental Security Income, however, EIC taxpayers are not required to be found qualified before claiming the credit or to file any other documents with their return to establish eligibility. Instead, IRS uses four primary means to evaluate EIC eligibility and check for noncompliance after the return is filed, checking some aspects of taxpayers' eligibility before the credit is granted and others afterward: (1) the math error program, (2) correspondence and face-to-face examinations (also called audits), (3) the document matching program, and (4) criminal investigations. Some of these means, such as the math error program, check all EIC returns, but only for limited aspects of eligibility. Other means, such as examinations, check only a small subset of EIC returns, but the review is more expansive. In general, IRS subjects all returns to its math error program and takes corrective action on errors found.
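The phase-in, plateau, and phase-out structure described above amounts to a piecewise-linear function of income. A minimal sketch follows; the $4,140 maximum is the tax year 2002 figure cited above, but the rate and breakpoint values are illustrative placeholders rather than actual EIC parameters, and the real credit also depends on adjusted gross income, filing status, and other rules:

def eic_amount(earned_income, phase_in_rate, max_credit,
               phaseout_start, phaseout_rate):
    """Piecewise-linear credit: grows at phase_in_rate until it reaches
    max_credit, stays flat through the plateau, then shrinks at
    phaseout_rate once income passes phaseout_start."""
    credit = min(earned_income * phase_in_rate, max_credit)
    if earned_income > phaseout_start:
        credit -= (earned_income - phaseout_start) * phaseout_rate
    return max(credit, 0.0)

# Illustrative parameters only; $4,140 is the 2002 maximum for claimants
# with qualifying children, the other values are placeholders.
for income in (5_000, 12_000, 20_000, 35_000):
    print(income, round(eic_amount(income, 0.40, 4_140.0, 13_500.0, 0.21), 2))

Because the credit rises with income in the phase-in range and falls with income in the phase-out range, either overreporting or underreporting income can inflate the credit claimed, which is why the document matching program described below looks for discrepancies in both directions.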
Depending on the resources IRS has available, IRS works only a small portion of the cases identified as potentially meriting follow-up under its examination, document matching, and criminal investigations efforts. While processing all tax returns, IRS uses its automated math error program to identify and correct the simpler errors found in claiming the EIC. For example, the math error program can identify invalid Social Security numbers and taxpayers who fail to follow recertification requirements. As a result, some inappropriate EIC claims are stopped before refunds are issued. During fiscal year 2001, IRS stopped more than 371,000 incorrect EIC claims using its math error authority. After identifying errors, IRS corrects them so the tax return can be processed and sends a computerized notice to the taxpayer identifying the error and stating that IRS disallowed or reduced the EIC claim. The notice tells taxpayers that if they can correct the error, the EIC claim will be allowed and any refund related to the EIC claim will be issued. Two types of examinations—correspondence and face-to-face—are used when EIC noncompliance is suspected, in most cases before refunds are issued. IRS uses various systematic means to "score" the likelihood of noncompliance on any return and uses experienced staff to manually identify the specific items on returns for examination. Most EIC examinations occur shortly after a return is filed, largely because of the difficulty in recovering refunds. IRS stops refunds on these returns until examinations are completed. This contrasts with IRS's normal examination practice of performing examinations many months after tax returns have been processed and any refunds paid. The EIC examinations usually rely on correspondence with taxpayers rather than face-to-face contacts. IRS completed about 368,000 EIC-related correspondence exams during fiscal year 2002. IRS tends to use face-to-face meetings with taxpayers to examine tax returns with EIC claims on a very limited basis and primarily when examinations are initiated for other reasons. As part of either type of examination, however, IRS would describe the potential noncompliance in a computerized notice to taxpayers claiming the EIC. IRS requests documentation, such as a school record or birth certificate, to establish that EIC requirements are met. Depending on whether IRS officials accept or reject the support, they may make changes to the return and to any refund related to the EIC claim. If taxpayers disagree with IRS's decisions, they have the right to appeal administratively and/or through the courts. IRS also uses its document matching programs to identify potentially misreported income on tax returns claiming the EIC. By comparing the tax return to wage and income statements provided by third parties such as employers and financial institutions, the document matching program identifies whether a taxpayer appears to have misreported income. Given the phase-in and phase-out ranges of the EIC, some taxpayers may claim too much EIC by overreporting or underreporting their income. This program notifies such taxpayers months after returns are filed and refunds are issued. Similar to audits, a notice is issued telling a taxpayer that an error appears to have been made, that he or she may disagree and provide any support for income reported, and that he or she may appeal IRS's decision about additional taxes owed. Unlike audits, the program is highly automated and is designed to require less contact with taxpayers by IRS staff.
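At its core, the document matching logic described above is a comparison of income reported on a return with totals from third-party information returns. The sketch below is only a schematic illustration of that idea, not IRS's actual matching system; the data layout and the $500 tolerance are assumptions:

from collections import defaultdict

# Minimal sketch of document matching: total the third-party wage and
# income statements for each taxpayer ID and flag returns whose reported
# income differs by more than a tolerance. Purely illustrative; the record
# format and the $500 tolerance are invented, not IRS's actual rules.

def flag_mismatches(returns, info_statements, tolerance=500.0):
    third_party_totals = defaultdict(float)
    for taxpayer_id, amount in info_statements:  # e.g., W-2 and 1099 data
        third_party_totals[taxpayer_id] += amount

    flagged = []
    for taxpayer_id, reported_income in returns:
        gap = reported_income - third_party_totals[taxpayer_id]
        # Both over- and underreporting matter for the EIC, since the
        # credit phases in and out with income.
        if abs(gap) > tolerance:
            flagged.append((taxpayer_id, gap))
    return flagged

returns = [("111-00-1111", 14_000.0), ("222-00-2222", 9_000.0)]
statements = [("111-00-1111", 14_100.0), ("222-00-2222", 16_500.0)]
print(flag_mismatches(returns, statements))  # flags only the second taxpayer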
IRS also uses criminal investigations to stop the payment of false refunds, identify refund scams and schemes, and prosecute perpetrators, including those with fraudulent EIC claims. For the EIC, IRS uses a specific computer program that looks for questionable refund claims and for return preparers known to have prepared questionable returns. IRS also has teams that scan returns and receive referrals from other parts of IRS and informants. IRS stops many returns as they are being processed so that criminal investigators can review the claims before refunds are paid; investigators also review some claims after returns have been processed. When IRS's study of EIC compliance rates for 1999 was released, the Assistant Secretary of the Treasury and the IRS Commissioner convened a task force in February 2002 to find ways of reducing EIC overclaims while minimizing the burden on taxpayers and maintaining the EIC's relatively high participation rate. The task force considered whether changes in statutes recently enacted by Congress or proposed by Treasury may have lessened the need for new EIC compliance initiatives and concluded that, while statutory changes addressed some sources of noncompliance, they likely would not reduce other leading sources of noncompliance. The task force also considered a range of new methods, including partnering with other federal or state agencies or programs and developing a new database to verify EIC eligibility before issuing tax refunds, but decided that these options were not viable. Ultimately, the task force recommended the qualifying child certification program. The task force reviewed IRS's EIC compliance study results and other data, as well as other studies, to identify the sources of noncompliance and develop new methods of addressing them. The joint Treasury and IRS task force addressed a long-standing problem of high EIC overclaim rates. Although the release of IRS's 1999 compliance study precipitated the formation of the EIC task force in February 2002, the study results were generally consistent with high overclaim rates reported in prior IRS studies. While some stakeholders view the 1999 study as having some methodological weaknesses, it showed that an estimated 46 to 50 percent of the tax returns filed by the approximately 20 million taxpayers who claimed the EIC in 1999 had errors that led to claiming too much of the credit (IRS often refers to this as the error rate). IRS also estimated that the total dollars overclaimed on those returns represented between 27 and 32 percent of total EIC dollars claimed in 1999, or between $8.5 billion and $9.9 billion. IRS also has some data on underclaims—instances where taxpayers claimed less than they were entitled to receive. For tax year 1999, underclaims were estimated to be between $710 million and $765 million. IRS has conducted EIC compliance studies for several years, and the overclaim rate, which is the percentage of total dollars paid out in error, was estimated to be about 24 percent in tax year 1994. According to IRS officials, because different methodologies were used in the subsequent studies, changes in estimated overclaims found in other studies do not support conclusions about trends in the overclaim rate over time. However, IRS officials also acknowledged that the overclaim rate has not improved significantly. Overclaim rates for tax years 1997 and 1999 are shown in table 2.
The information in table 2 does not reflect the current compliance situation; for example, it does not reflect the presumably positive impact of new legislation aimed at improving compliance that has taken effect since 1999. Although IRS's studies have shown high EIC overclaim rates for many years, other studies had shown that the EIC's participation rate was fairly high. For example, in 2001 we reported that an estimated three of every four eligible participants received the EIC in tax year 1999. For taxpayers with one or two qualifying children, we estimated that participation rates exceeded 90 percent. Individuals with no children, who receive a much smaller credit than taxpayers with qualifying children, had a much lower participation rate that we estimated to be about 45 percent. Although at the time we reported that available data did not enable us to determine the reasons for these differences, IRS officials attributed them, in part, to the lower EIC amounts allowed for individuals without children and to the fact that the program did not cover such individuals when it first began. The EIC task force reviewed whether recent statutory changes have the potential to reduce the major sources of EIC noncompliance, either by changing the rules or by providing IRS with new enforcement options. Because the study of tax year 1999 compliance was the most recent available, the task force lacked data on the effect of the recent changes and relied on other analyses of whether the changes would affect compliance. Of the recent changes, the task force estimated that one change, to the Adjusted Gross Income (AGI) tiebreaker rule, would likely reduce noncompliance. The task force judged that the other legislative changes, including those proposed by Treasury, while potentially helping reduce noncompliance from other sources, would not be enough to reduce noncompliance without further IRS efforts. Three key pieces of legislation, recently enacted or newly in effect, were at least partially aimed at improving EIC compliance, as shown in table 3. They may eventually help reduce noncompliance after taxpayers and tax preparers become familiar with the new laws. The statutory changes were to serve several purposes, including improving compliance and simplifying tax laws associated with the EIC. A Treasury study showed that the change in the AGI tiebreaker rule, effective for tax years after December 31, 2001, would likely have eliminated about $1.4 billion of the nearly $2 billion in tax year 1999 EIC overclaims that were due to tiebreaker errors. Accordingly, the task force decided that this source of EIC overclaims did not need to be further addressed by a new compliance initiative. Although officials recognized the benefits of these recent legislative changes in helping improve EIC compliance, they concluded that additional initiatives were still needed. For example, officials recognized the value of IRS being able to use math error authority to deny EIC claims on and after January 1, 2004, when the Department of Health and Human Services' Federal Case Registry (FCR) indicates that the taxpayer is the noncustodial parent of the qualifying child. However, officials told us that this authority was limited and not applicable to a significant number of taxpayers whose compliance may be problematic. IRS has a study in process to determine the effectiveness of using FCR data to deny EIC claims using its math error authority.
The study was scheduled for completion by July 30, 2003, but as of August 20, 2003, was not yet completed. The EIC task force considered three key options to verify taxpayers' qualifying children: (1) partnering with other federal or state agencies or government programs to verify EIC taxpayers' eligibility, (2) creating a federal database that would automatically match and detect questionable or erroneous EIC claims, and (3) certifying taxpayers' eligibility for certain EIC criteria. Ultimately, in August 2002, the Secretary of the Treasury approved the qualifying child certification program, which at the time was to include providing proof of eligibility in advance of the filing season (July–December) and was referred to as "precertification." The first two options were expected to impose few or no documentation requirements on taxpayers. For both options, the task force was trying to determine whether sufficient information to verify a taxpayer's qualifying children was already available from others or could be collected by others with little additional effort. However, the task force found that there was little overlap between the EIC population and the verification criteria used to administer other federal or state programs. In addition, although some databases existed, the task force found that they could not be used to effectively verify EIC eligibility, largely for the same reason. Consequently, the task force judged that these options were not likely to be useful in addressing EIC compliance problems. Similarly, the task force found that if a federal database were created to facilitate EIC verification, IRS would have to gather the bulk of the information itself, thus imposing a burden on taxpayers; doing so would also be costly and time-consuming for IRS. The third option, which the task force selected, required taxpayers to demonstrate EIC eligibility for certain criteria, namely the residency and relationship tests for qualifying children, prior to receiving the credit. The relative cost of the options the task force considered did not drive the decision to select the qualifying child program because the other two alternatives were not considered viable. The task force did compare IRS's EIC administrative costs to those of other federal benefit programs and found them to be much smaller. IRS has had a special appropriation for EIC compliance initiatives since 1998 and has received about $875 million in total through fiscal year 2003. It requested a total of about $250 million in fiscal year 2004, which included $100 million for the EIC compliance initiatives, including the qualifying child program, and about $150 million for the special appropriation. IRS estimated that this $250 million total was about 0.8 percent of the total annual EIC benefits distributed, and therefore much smaller than the 9 to 13 percent administrative costs the task force had found for other benefit programs. See appendix II for more information we obtained on administrative costs for other benefit programs. The EIC task force reviewed IRS studies, other IRS data, and studies by other parties to better understand the sources of EIC noncompliance and devise new initiatives to address those known sources.
In reviewing IRS studies and data, the task force found that the three leading sources of EIC errors resulting in overclaims in 1999 were (1) incorrectly claiming nonqualifying children, accounting for about $3 billion, (2) using the wrong filing status, accounting for about $2 billion, and (3) misreporting income, also accounting for about $2 billion. Three administrative proposals resulted, involving (1) qualifying child certification, (2) improper filing status, and (3) income misreporting. In 1999, another leading source of EIC overclaims involved taxpayers with lower modified adjusted gross income claiming a child when another person with a higher income should have done so. The task force did not propose an initiative dealing with these errors, primarily because the "AGI tiebreaker" legislation was specifically enacted to decrease this source of noncompliance, as previously discussed. To deal with the errors attributable to claiming children who are not EIC qualifying children, the task force proposed a qualifying child certification program. Based on analyses of past compliance data, IRS found that taxpayers who overclaimed the EIC most frequently claimed children who did not meet the residency or relationship criteria. As a result, the task force proposed the qualifying child program, which was to include an annual residency certification and a one-time relationship certification. Under this program, during the period from July through December, taxpayers would have been asked to document that the children they intended to claim under the EIC met the EIC relationship and residency criteria. The task force proposed targeting the program to those taxpayers with qualifying children for whom IRS could not establish residency or relationship through other available means, and it proposed that this concept be tested on a sample of EIC taxpayers for tax year 2003. The task force envisioned that ultimately all EIC claimants whose eligibility could not be verified through available means would be asked to provide additional eligibility documentation prior to the filing season. Taxpayers who successfully certified qualifying children's eligibility in advance of the filing season would have their claims processed and paid expeditiously during the filing season, absent any other problems with their tax return or EIC claim. Having taxpayers certify between July and December was also intended to allow IRS to process the taxpayers' documents outside of the filing season, when IRS processing systems are in highest demand. Taxpayers who did not respond or were unable to document their eligibility during the certification period, but then claimed the EIC when they submitted their tax returns, would have the EIC portion of their tax refund frozen. They would then be required to provide, during or after the filing season, the same documentation they were asked to provide during the certification period. If and when they documented their eligibility, the EIC portion of their refunds would be released. According to IRS officials, as the task force neared its end and before the Secretary of the Treasury approved the program, IRS developed a means of using existing data to determine whether each taxpayer likely would meet the relationship or residency test for children claimed for the EIC for tax year 2002. For relationship, IRS developed a plan to match taxpayers to several databases that show the parents of children.
For instance, one database IRS planned to use was the Social Security Administration's database (which IRS refers to as KIDLINK) that links parents' and children's Social Security numbers for children born after 1998 in U.S. hospitals. For tax year 2003, IRS had planned to match 1.6 million taxpayers, or 10 percent of the approximately 16 million EIC taxpayers with a qualifying child, to the databases. Under this scenario, any taxpayer who was not shown to be the parent of a qualifying child claimed for tax year 2002 would then be part of the population from which IRS would randomly select taxpayers to test for relationship. IRS considered the work of the task force in developing a comparable means of using available data to identify those who met the residency criterion. The task force had analyzed data from the 1999 compliance study and information in other reports. It found that residency errors related to qualifying children were often correlated with the taxpayer's relationship to the child and with the taxpayer's filing status and gender. The analysis showed that, overall, parents who filed as married filing jointly were the most compliant in claiming a qualifying child who met the residency test, compared with taxpayers filing as single or head of household. Married filing jointly parents had the fewest qualifying child residency errors—1.5 percent—compared to any other combination of taxpayers by relationship to the child, gender, or tax filing status. Among taxpayers who filed as single or head of household, mothers were the most compliant (see figure 1). The task force also found other reports that reinforced the results of its analysis. Specifically, an independent study of low-income households in three urban areas estimated that children resided with biological mothers 90 percent of the time. Another study estimated that 89 percent of children in low-income households lived with both parents or with their mother. IRS used this information to propose a process for identifying taxpayers to include in the population that would be subject to the residency certification requirement. IRS proposed that the 1.6 million taxpayers, or 10 percent of the 16 million taxpayers with a qualifying child, be matched to the FCR database. IRS officials considered the FCR to be the most useful database for identifying those meeting the residency requirements. This database compiles court and other records that indicate who is the custodian for a child (which could be a parent or nonparent). IRS assumes that children live with the custodian of record. According to IRS, the FCR database contains custodial information for about 40 percent of the EIC population. If a taxpayer matched as the custodian of the child claimed for the EIC for 2002, the taxpayer would not be among those needing to certify. When the FCR database showed that someone other than the EIC taxpayer who had claimed a child for the EIC in 2002 was that child's custodian, the taxpayer would be among the group from which the residency certification sample would be drawn. When the FCR contained no information about the child a taxpayer had claimed for the EIC in 2002, IRS would attempt to establish the taxpayer's relationship to the qualifying child by comparing information in several databases. Taxpayers IRS could identify from databases as the child's mother would be excluded from the sample if they filed as married filing jointly, single, or head of household.
Mothers would be excluded on the basis of the task force analyses showing mothers to be among the most compliant on the residency criterion. Also excluded from the sample would be fathers who filed as married filing jointly. Otherwise, all males shown to be a child's father who filed as single or head of household would be included in the group from which the certification sample would be drawn, because of the data showing a high level of noncompliance on the residency criterion for these taxpayers. Finally, all nonparents who are not shown in the FCR to be the custodian would go into the group from which taxpayers would be selected for residency certification, also due to information showing nonparents to be among the less compliant taxpayers on the residency criterion. The selection processes for relationship and residency would therefore have yielded a group of taxpayers that would include some needing to certify for relationship only, some for residency only, and some for both. Since adopting the EIC task force recommendations in August 2002, IRS has made key changes to the qualifying child certification program in response to input received and additional analyses. These changes include (1) postponing relationship certification for an undetermined period of time, (2) delaying program implementation, and (3) reducing the test sample from 45,000 to 25,000. However, these changes create additional challenges for IRS and taxpayers. Despite these challenges, the process for selecting taxpayers, what taxpayers will receive from IRS, and what taxpayers will be required to provide remain basically the same as originally planned. According to officials, the same factors were considered when setting the new sample size, which is still designed to allow IRS to achieve the same goals as the original sample size, albeit to a lesser extent. In addition, IRS has emphasized that program expansions, if any, will depend on the results of this year's test. Concerns we identified in our report on recertification were taken into account by IRS in designing the new qualifying child certification program. IRS took the broad charge from the EIC task force and designed the qualifying child certification program. Its focus was to decrease the EIC overclaim rate while striving to maintain the high rate of participation and minimize taxpayer burden. Initially, IRS decided that the certification program would involve testing 45,000 taxpayers for both relationship and residency beginning in July 2003, expanding relationship certification to 2 million taxpayers in 2005, and eventually covering both relationship and residency in substantial numbers in future years. However, as IRS obtained input on the program, it modified these plans. Since initially formulating plans for qualifying child certification, IRS has made multiple changes to the program. First, IRS postponed relationship certification for an undetermined period for a number of reasons. IRS had developed a draft form for certifying relationships and obtained input on the form from external and internal stakeholders. Some stakeholders raised concerns about whether taxpayers would be able to provide some of the types of documentation IRS planned to request, such as marriage certificates, within the time envisioned. IRS officials said that testing the relationship certification this year was postponed, in part, because these concerns were unresolved.
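Taken together, the screening rules in the preceding paragraphs form a small decision procedure over the FCR and parent-database matches. The sketch below encodes those rules as described; the record format and field names are our own illustrative choices, not IRS's, and the actual matching criteria were more detailed:

def needs_residency_certification(taxpayer):
    """Sketch of the residency-sample screening rules described above.
    Returns True if the taxpayer falls into the pool from which the
    certification sample would be drawn. The dict keys (fcr_custodian,
    is_parent, parent_gender, filing_status) are invented names for
    illustration, not IRS field names."""
    # FCR shows this taxpayer as the child's custodian: excluded.
    if taxpayer["fcr_custodian"] == "self":
        return False
    # FCR shows someone else as the custodian: in the pool.
    if taxpayer["fcr_custodian"] == "other":
        return True
    # FCR has no information; fall back to the parent databases.
    if taxpayer["is_parent"]:
        if taxpayer["parent_gender"] == "mother" and taxpayer["filing_status"] in (
                "married_joint", "single", "head_of_household"):
            return False  # mothers were among the most compliant
        if taxpayer["parent_gender"] == "father":
            # Married-filing-jointly fathers excluded; fathers filing
            # single or head of household included.
            return taxpayer["filing_status"] != "married_joint"
    # Nonparents not shown in the FCR as custodians go into the pool.
    return True

print(needs_residency_certification(
    {"fcr_custodian": "none", "is_parent": True,
     "parent_gender": "father", "filing_status": "single"}))  # True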
The officials also noted that Treasury studies have shown relationship requirements to be a lesser compliance issue than residency, and that taxpayers found to be noncompliant with relationship requirements were often also noncompliant due to residency errors. Since both residency and relationship requirements have to be met, if taxpayers fail certification on residency there would be no need to test on relationship. Consequently, officials gave higher priority to testing residency certification. Second, IRS has changed the start date of the test twice. Originally, IRS planned to start the test in July 2003 but postponed implementation until mid-August. As we were finalizing this report, IRS announced in August that it now plans to begin the test in December 2003, in conjunction with the 2004 tax filing season. (Appendix IV shows key milestones from 2002 through 2005.) According to IRS officials and documents, implementation was postponed from July to August to allow time to conduct focus group testing, request and obtain public comments during a 30-day period, and make changes to the program as a result of those efforts. Thereafter, IRS postponed implementation a second time, from August to December, to ensure that (1) taxpayers have better access to tax practitioners, since many operate only during the filing season, and (2) there is more time for outreach and education. However, as a result of the delays, taxpayers will be providing proof of residency documentation during the filing season rather than "precertifying" before the filing season, as originally envisioned. This change is important and creates additional challenges for both IRS and taxpayers. Taxpayers will no longer have the opportunity to provide proof of qualifying child residency, correspond with IRS in advance of the filing season, and resolve any potential issues before filing their tax returns. Because all correspondence will take place during the filing season, selected taxpayers could experience a delay in receiving the EIC portion of any refund if that portion is frozen because of problems until certification is successfully completed. In addition, IRS will no longer be able to spread out its workload, and processing may be slower since certification will occur during the filing season, IRS's busiest time of year. Finally, IRS will not have the opportunity to assess taxpayers' ease or difficulty in obtaining required documentation in advance of the filing season, or whether taxpayers would do so; this is important because taxpayers may be given the opportunity to certify in advance of the filing season in future years. Third, IRS reduced the number of taxpayers included in the test from 45,000 to 25,000, in part in response to comments received during the 30-day public comment period. According to IRS officials, despite reducing the number of taxpayers included in the test, the sample size should still allow IRS to make statistically valid measurements of results in addition to helping IRS meet its desired goals of protecting revenue and testing the process for conducting the certification program. In addition, the smaller sample should help mitigate the challenge of processing the certification forms during the filing season. Despite these changes, how IRS selected taxpayers for the test, what taxpayers will receive from IRS, and what taxpayers will be asked to provide as proof of residency for qualifying children will remain fundamentally the same. IRS's process for selecting the taxpayers for the test is shown in figure 2.
Using this process, IRS selected 25,000 taxpayers in August. The 25,000 represents about 0.16 percent of the approximately 16 million EIC claimants with a qualifying child in tax year 2002 and about 0.13 percent of the approximately 20 million EIC recipients overall. According to agency officials, IRS will now send the 25,000 taxpayers forms and instructions about the program in December instead of this summer. IRS plans to send Notice 84-A, a letter informing them about the new program; Form 8836, "Qualifying Children Residency Statement"; Publication 3211M, "Earned Income Tax Credit Question and Answers"; and Publication 4134, "Free/Nominal Cost Assistance Available for Low Income Taxpayers." However, officials are changing these documents based on the public comments received. Appendix V has the most current copies of these documents. Once taxpayers receive this information from IRS, they would obtain documentation to prove the qualifying child's residency and send it back to IRS. IRS examiners would review the documentation and send a letter back to the taxpayer either accepting or rejecting the claim, as shown in figure 3. IRS currently envisions that the 25,000 taxpayers selected for certification will be required to provide proof that the qualifying child meets residency requirements before getting the EIC portion of their refund. IRS officials say that taxpayers who are able to establish eligibility when filing their tax return should receive their refunds more expeditiously than those who do not. Taxpayers who are selected for certification but cannot provide the necessary documentation will be treated essentially the same as taxpayers undergoing a correspondence audit. The EIC portion of their refund—if they are to get one—will be frozen until proof of eligibility is established. According to IRS's draft evaluation plan for the certification test and our discussions with officials, three factors were considered in setting the original sample size of 45,000: (1) showing that certification would "protect revenue," (2) determining whether the test will succeed, and (3) testing IRS's processes and systems. According to IRS officials, the smaller sample size of 25,000 is designed to allow IRS to achieve the same goals as the original sample size, albeit to a lesser extent. One factor IRS considered for the certification test was stopping as many EIC overclaims attributable to ineligible qualifying children as possible during the 2003 tax year. To determine how many taxpayers to include in the certification test to achieve this goal, officials said they determined the maximum number of staff that could be assigned to and adequately supported by the planned central unit in Kansas City that would be responsible for the certification program. Based on the maximum number of staff that could be assigned and assumptions about how many cases staff could handle, IRS calculated that 45,000 taxpayers could be included in the test. IRS estimated that $114.5 million in protected revenues could be realized from including 45,000 taxpayers in the test. Based on the revised sample size of 25,000, IRS now estimates that $63.6 million in protected revenues could be realized. A second factor considered was to have a large enough sample to support analyses of whether the test succeeds.
For instance, IRS is interested in how many taxpayers provide the information needed for IRS to determine qualifying child eligibility, whether taxpayers in the sample population who are actually qualified to claim the EIC fail to do so on their 2003 tax returns and why, and whether taxpayers found the certification process burdensome. IRS's draft plan for evaluating the certification test notes that the original 45,000 sample size was much larger than needed to obtain statistically valid measures of test results. The draft plan indicated that a sample of about 3,600 taxpayers would have been sufficient for IRS to estimate qualifying child eligibility at the 95 percent confidence level, plus or minus 5 percent. According to the draft plan, the 45,000 sample size would allow very precise estimates for the population as a whole and should provide statistically valid information about subsets of claimants. Despite the reduction to 25,000, IRS officials still believe that this sample size will allow precise estimates for the universe as a whole and for smaller subsets as well. Finally, a third factor in selecting both the 45,000 and 25,000 sample sizes was to have a large enough sample to test the processes and systems that would be required if IRS were to expand certification in the coming year. IRS had been preparing to work approximately 25,000 certification cases during the filing season under its original plan for 45,000 taxpayers. It based the 25,000 on worst-case assumptions about how many of the 45,000 would not opt to submit proof of eligibility in advance of the filing season and, instead, would have submitted their documentation during the filing season. In addition, the draft plan was prepared before IRS's current thinking that the certification program likely will not be expanded as rapidly next year, if at all. However, according to IRS officials, based on the number of cases IRS estimates can be worked and what it plans to achieve under this goal, the 25,000 sample size is appropriate to help test the systems and processes. Although IRS has consistently referred to the certification effort as a test, officials have recently stressed this point. For example, officials have recently referred to the efforts for this year as a "pilot or proof of concept." Furthermore, as a result of the most recent changes, the program will no longer take place in advance of the filing season but, instead, during the filing season. IRS officials told us that it is unlikely that the certification program will be expanded to cover 2 million claimants in the summer of 2004, as originally anticipated. Instead, IRS officials plan to assess the program's overall effectiveness and make any necessary modifications before expanding it to additional EIC claimants in the future. Thus, particularly in light of IRS's most recent announcement and according to IRS officials, the program may be expanded more slowly, if at all, depending upon the evaluation results. Officials also said that test results will contribute to a future decision about whether certification, if continued, will precede the filing season or be part of the filing season as it will be this year.
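For rough context on the sample sizes just discussed, the margin of error of a simple random sample can be computed directly. The sketch below assumes the worst case of p = 0.5, a 95 percent confidence level, and no design effect; the draft plan's own precision figures presumably rest on different assumptions (for example, subgroup estimates or design effects), so these numbers are illustrative only, not a reconstruction of IRS's calculations:

import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case margin of error for a proportion estimated from a
    simple random sample of size n at roughly 95 percent confidence
    (z = 1.96). Ignores finite-population correction and design effects."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (3_600, 25_000, 45_000):
    print(f"n = {n:>6}: +/- {margin_of_error(n):.3%}")
# n =   3600: +/- 1.633%
# n =  25000: +/- 0.620%
# n =  45000: +/- 0.462%

The calculation illustrates why the draft plan characterized 45,000 as much larger than needed for statistically valid overall estimates: under these simplified assumptions, precision improves only with the square root of the sample size, so most of the larger samples' value lies in supporting subgroup estimates and in exercising the processing systems.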
On the basis of stakeholder input, focus groups, and other feedback, IRS has made several changes to the planned certification test in addition to those discussed previously. IRS held informal meetings with external and internal stakeholders, focus group meetings with taxpayers and paid preparers, and one-on-one interviews with third parties to share the certification letters, forms, and instructions and obtain views on aspects of the new process. In response, IRS took several actions, including revising the forms. As of August 2003, IRS was evaluating comments received during the 30-day public comment period, which IRS officials said may result in additional changes. IRS held several informal meetings with external and internal parties with an interest in the qualifying child certification program. In March 2003, the Stakeholder Partnership, Education, and Communication Organization and the National Taxpayer Advocate held four informal meetings with various external stakeholders, such as representatives of the National League of Cities, the Boston EIC Coalition, and the American Institute of Certified Public Accountants. Similarly, IRS officials coordinated the certification initiative with internal stakeholders, such as representatives from the Compliance unit, the Wage and Investment operating division, the Small Business/Self Employed (SB/SE) operating division, and the Forms and Publications unit. The purpose of these meetings was to discuss the EIC certification proposal and share drafts of the two new certification tax forms—Form 8836, "Qualifying Children Residency Statement," and Form 8856, "Qualifying Child Relationship Statement." Officials told us they received comments from these groups of stakeholders and revised and improved the forms based on the feedback received. For example, for the residency form, IRS added a list of community-based organizations and a list of acceptable third parties from which IRS would accept affidavits. After incorporating the recommendations from these informal meetings, officials said they felt comfortable testing the Form 8836, its instructions, and the accompanying letter in other ways, including focus groups, one-on-one interviews, and a 30-day public comment period. In June 2003, a contractor conducted nine focus groups: five with taxpayers who claimed the EIC in tax year 2002 and four with tax preparers who had prepared returns for taxpayers claiming the EIC. In addition to the focus groups, the contractor also conducted nine one-on-one interviews with a cross section of the third parties listed on Part IV of the Form 8836 (the participants were landlords, employers, and child care providers). The goal of the testing was to determine whether individuals understood the documents, whether they thought they could obtain the requested supporting documents, and whether the suggested third parties would be willing to sign the affidavit. The focus groups and interviews were held in Philadelphia, Chicago, Dallas, and Los Angeles. These cities were selected because of their high EIC populations. The participants were selected using screening guidelines developed by IRS in conjunction with the contractor. Taxpayers were selected on the basis that they had claimed the EIC for tax year 2002 with a qualifying child. Similarly, preparers were selected on the basis that they had worked as tax preparers on federal tax returns for 2002 and had prepared returns for clients claiming the EIC with a qualifying child. Those selected for one-on-one interviews represented a cross section of the types of individuals IRS deemed credible to provide affidavit information about the EIC claimant.
In total, 816 people were contacted and 109 agreed to participate in the focus group testing. Of the 109, 88 persons arrived for the testing and 81 actually participated in the focus groups. For the one-on-one interviews, 12 individuals were qualified to participate and 9 actually participated in the interviews. Key IRS officials were on site during the focus groups and one-on-one interviews to observe the sessions and hear the participants' comments. A variety of comments were received and some changes were made. For example, IRS highlighted where taxpayers and third parties were to sign the forms. Although the contractor's report of these meetings was not available before the public comment period, IRS officials who attended the meetings concluded that they had not received any feedback that would preclude moving forward with getting comments from the public. During the comment period, anyone could write to IRS or go to its Web site to provide comments or opinions about the qualifying child certification program, including the form IRS expected to use and the data it planned to request to prove eligibility. According to IRS, during the 30-day public comment period, IRS received about 200 communications containing comments. Any other comments about the certification program are due by December 31, 2003. In addition, individuals can comment on the certification process during the filing season until April 15, 2004. As of August 2003, IRS officials were reviewing the comments received and anticipated making additional changes to the forms and publications shown in appendix V. IRS officials told us they considered the recommendations in our recertification report when planning their certification program. We agree that our applicable recommendations have been considered. Whether the strategies IRS adopted to address the concerns that led to our recertification report recommendations are successful will not be known until IRS evaluates the certification test. Our recertification report described three aspects of the recertification process that caused problems for taxpayers. Specifically, one form used for recertification was of questionable value to IRS and another form was potentially confusing to taxpayers; taxpayers were asked to submit information that was difficult for them to obtain or inconsistent with what many IRS examiners considered acceptable; and IRS examiners' inconsistent assessment of documentation submitted by taxpayers could result in different recertification decisions for taxpayers in similar circumstances. IRS has taken steps to deal with all of these concerns in designing the certification process. Regarding the problems with recertification forms, the form that was of questionable value to IRS, which was essentially a means for taxpayers to tell IRS that they wished to be considered for recertification, is not applicable to the certification program. The other recertification form told taxpayers what they had to submit to establish their eligibility for the EIC. We found that this form could confuse taxpayers into believing they had to show that a qualifying child was also their dependent, a criterion not applicable to EIC eligibility. We also found that the form provided insufficient guidance to taxpayers on what information they needed to provide to prove that qualifying children met the EIC eligibility requirements.
We made several recommendations, including that IRS should clarify that taxpayers do not need to demonstrate that qualifying children are also dependents, help taxpayers better understand what documentation they need to provide to establish their relationship with any qualifying children, eliminate a requirement that statements from child care providers be notarized, and encourage taxpayers to submit more than one type of documentation. Regarding our concern that taxpayers seeking recertification were asked to submit documentation that was difficult to obtain and that tax examiners did not uniformly accept, we found, for example, that EIC taxpayers' living arrangements could make providing various documents difficult. We also found that taxpayers did not always understand that school records had to cover a calendar year and therefore needed to span the spring and fall of separate school years. We also found situations in which IRS examiners would not accept a document even though the recertification form listed the document as being acceptable. This concern overlapped with our finding that IRS examiners were inconsistent in their assessments of whether documentation provided by taxpayers was sufficient to establish their qualifications for the EIC. Regarding inconsistent assessment of documentation, IRS again took actions in developing the certification program intended to address our concerns. By introducing a new option—obtaining an affidavit affirming that a qualifying child resided with the taxpayer for more than half the year—IRS intended to give taxpayers another means of showing that the residency requirement is met, which would prevent the taxpayer from having to obtain the other types of documents that the draft certification form lists. Although IRS did not, as we recommended, create a new form to be used by taxpayers when seeking school records, it did follow our alternative recommendation that IRS clearly remind taxpayers that they need records for parts of 2 school years. The information is included in the certification form's instructions, which contain an example in which a taxpayer must provide records from 2 school years. Finally, by centralizing the EIC certification processing in one location—Kansas City—and providing training to those who will be involved, IRS is seeking to ensure a higher level of consistency in how tax examiners judge whether taxpayers adequately establish that qualifying children meet the residency criterion for the EIC.
Although IRS has made and is continuing to make progress in defining its plan to evaluate the certification test, the plan is incomplete. For example, the draft plan does not indicate how and when some information that will be needed to evaluate whether certification achieves its objectives will be obtained and analyzed. However, officials recognize that the draft plan needs to be further developed and that it is important to do so quickly. In initially designing and subsequently modifying the EIC certification program, officials sought to improve compliance while taking into account the burden that taxpayers may experience. Officials designed the program to include, and thus burden, only the taxpayers most likely to make the errors that contribute most to the EIC's overclaim rate. By focusing on these noncompliant taxpayers, IRS expects to improve EIC compliance. In addition, officials took a number of steps that should reduce the burden on those taxpayers who are asked to certify, such as obtaining input from external and internal stakeholders that resulted in changes and delaying the program while considering comments received during the comment period. To help improve compliance, the task force focused on known sources of noncompliance, including claiming nonqualifying children, using an incorrect filing status, and misreporting income. To deal with errors attributable to claiming nonqualifying children, the task force proposed a program for certifying the eligibility of qualifying children and envisioned targeting taxpayers most likely to make those errors. In contrast, other benefit programs that we reviewed generally require all applicants to provide documentation before receiving assistance. For example, to receive Supplemental Security Income, an individual must visit a Social Security office, meet with a representative, and provide documentation including birth certificates and payroll information. The Social Security Administration then matches this information to determine eligibility in advance of benefits being received. IRS's certification effort, even if fully implemented, would require only a subset of all EIC taxpayers to provide documentation to support their eligibility and only when IRS is unable to verify eligibility from other sources of information. After the proposal was formally adopted, IRS took a number of steps in developing plans for implementation that are intended, at least in part, to minimize the burden on taxpayers actually asked to certify, including the following. IRS has undertaken more activities than usual to ensure that the residency form and other explanatory documents related to the certification program have been reviewed by those who would use them. IRS sought feedback from focus groups and stakeholders on various aspects of the certification test and the draft letter, form, and instructions proposed for the residency test, such as whether taxpayers will be able to obtain and provide documents within the time available, and made some changes to the proposed form based on that feedback. As previously described, IRS held focus groups with taxpayers, paid tax preparers, and other parties to obtain feedback on certification. Officials also interviewed a small number of third parties who would be called upon to provide requested documents. IRS also held a 30-day open period to receive comments from any interested party and expects to revise certification materials based on the comments received.
Finally, IRS officials say they will again revisit, among other things, the appropriateness of the forms and explanations going to taxpayers after evaluating the results of this year's test of certification. IRS considered the issues we raised in our report about the recertification program when planning for certification. For example, as discussed previously in this report, IRS developed a standard form that includes an affidavit, which taxpayers can provide to third parties, such as an employer, as an alternative to obtaining other documents to prove residency. We also noted in our report that examiners inconsistently accepted or declined supporting documentation for recertification purposes. To address this concern, IRS officials conducted special training and placed all certification examiners in one location, Kansas City, where EIC claims will be processed. IRS has provided taxpayers with a variety of documentation choices for proving the eligibility of their qualifying children. To certify for residency, taxpayers will need to provide Form 8836, "Qualifying Children Residency Statement," with one or more of the following supporting documents: school records, medical records, day care provider records, leases, or social service agency records that show the parent's name and the child's name and address, and the dates that the child lived with the parent; or a letter on official letterhead for a qualifying child from the child's school, health care provider, landlord, or member of the clergy that shows the parent's name and the child's name and address, and dates that the child lived with the parent; or a third-party affidavit from a clergy member, community-based organization official, health care provider, landlord or property manager, school official, or day-care provider. IRS dropped, for an undetermined period of time, its plan to ask taxpayers to certify their relationship to qualifying children. IRS officials do not know whether they will test certification of relationships in the future. Various external stakeholders had expressed concerns about whether taxpayers would be able to provide on time the types of documents, such as marriage and birth certificates, that IRS had planned to request to document relationships to qualifying children. Also, IRS officials said that the relationship portion of the program was dropped for other reasons, including that (1) studies have shown relationship requirements to be less of a compliance issue than residency and (2) taxpayers found to be noncompliant with relationship requirements often were also noncompliant due to residency errors. As a result, certification will include only residency this year. As part of its effort to balance burden with ensuring compliance, IRS made the changes listed above. As we drafted this report, IRS had not yet determined what additional changes it would make to the forms on the basis of comments received during the 30-day public comment period, but officials said more changes will likely result. As previously discussed, the initial design of the residency form was responsive to concerns we raised in our earlier report on IRS's recertification program. Additional changes, especially dropping relationship certification, were responsive to the concerns that stakeholders raised before the public comment period. Accordingly, the current draft residency certification form addresses many burden concerns.
Although IRS has made and is continuing to make progress in defining its plan to evaluate the certification test, the plan is not yet complete. From its inception, the certification program was intended to (1) reduce the EIC's overclaim rate, (2) minimize burden on taxpayers, and (3) maintain the EIC's relatively high participation rate. Although there are many ways to organize an evaluation, determining whether the major objectives of a program are accomplished should help policymakers determine whether and how to proceed with the program. The draft plan is not explicitly organized to show whether certification's objectives are achieved, but it does present some information on how IRS would evaluate these objectives. The plan proposes potential options for how and when some critical data will be obtained and analyzed, but it does not provide further details on when decisions will be made about the specific data to be collected, how, and by whom. Officials recognize that the draft plan needs to be further developed and that it is important to do so quickly. They have, for instance, developed preliminary drafts identifying additional data needed and have begun considering how to use contractors to gather the data. Because evaluating these objectives will depend in part on actions of EIC certification participants that will now occur as part of next year's filing season, IRS appears to have some time before it must make final decisions on how it will determine whether the objectives were met. Although an evaluation plan need not identify before a program begins every issue to be evaluated and precisely how it will be evaluated, the more completely such a plan is developed before a program is implemented, the more likely the evaluation will be sufficient to support future decisions. For example, identifying key questions that need to be answered before a project's implementation increases the chances that necessary data will be collected to answer those questions. IRS's Internal Revenue Manual recognizes the desirability of having evaluation plans in place before a project is implemented. For instance, it requires such plans before reorganizations. IRS has been preparing an evaluation plan for the certification test and has a draft plan, dated April 22, 2003. That draft describes how IRS expects to evaluate the program and the process IRS used to select taxpayers for the test. The draft plan identifies one "threshold" question for evaluating the certification program: whether the claimant selected for the test provided the required information to allow IRS to determine the eligibility of qualifying children, regardless of whether the claimant was ultimately determined to be eligible. The plan lists data that are to be gathered throughout the certification program to answer this question. The threshold question is part of what must be included in determining whether certification for residency helps lower EIC overclaims, but additional information is needed. Although the draft plan does not tie methodologies or planned data collection specifically to whether certification lowers the EIC overclaim rate, it includes a combination of approaches that should contribute to answering this question. For example, the draft plan identified data that IRS would gather throughout the test on how many taxpayers in the test sample certify or fail to do so in advance of the filing season, as well as how many prove that children meet the EIC residency test during the filing season.
The extent to which certification may reduce overclaims due to qualifying children not meeting the residency requirement, however, will depend significantly on why some taxpayers will not attempt to certify and why some will not claim the EIC, or as much EIC, for tax year 2003 as they did for 2002. The draft plan takes into account that some taxpayers may receive the certification materials, determine that a child does not meet the residency test, and therefore not attempt to certify and not claim the EIC with their 2003 tax return or claim the EIC on another basis. IRS expects to use available data to help assess whether the taxpayer had a filing requirement and whether the taxpayer may have been eligible to claim the EIC (e.g., did the taxpayer's income fall within the appropriate range?). In addition, the draft plan proposes that IRS use a contractor or another third party to gather information from taxpayers about why they did not claim the EIC. A subsequent document cites ideas for the types of questions that could be asked. Regarding the burden certification imposes on taxpayers who participate, the plan is not organized to show how this will be evaluated, but it recognizes that participants' burden should be evaluated. For example, the data IRS plans to collect throughout the process will have some utility in answering this question. IRS will keep track of the number of communications back and forth between IRS and the taxpayer before a tax examiner makes a final certification decision. IRS's plan also recognizes that some information on burden will need to be collected directly from taxpayers. The plan includes a general description of a potential opinion survey that would gather burden-related information from certification participants. Little or no detail is provided on how taxpayers would be selected for such a survey, what types of questions would be asked, and when the survey would be done; however, some of this information is shown in a subsequent draft document. Because taxpayers will not have completed their certification experience until sometime next filing season, IRS has some time to decide whether to do such a survey and how to define its parameters. Regarding the objective of maintaining the EIC's relatively high participation rate, the draft plan proposes to obtain information from those taxpayers who are asked to certify, do not, and then fail to claim the EIC. The plan proposes to use a contractor or other third party to gather information from these taxpayers about why they did not claim the EIC. The plan does not fully describe how and when final decisions would be made about selecting a contractor or another third party to do this, when the contractor would contact taxpayers, or what data would be sought from the taxpayers. Because the population IRS will need to contact for these surveys will not be known until the spring of 2004 or later, IRS has a number of months to further develop and implement an approach. Recognizing that its plans need to be further developed, IRS officials have continued to explore how the evaluation will be done. For example, officials have drafted ideas for the types of survey questions a contractor or other party would ask of EIC taxpayers to help IRS assess why taxpayers take, or do not take, various actions (such as why they may stop claiming the EIC after being asked to certify eligibility) and to assess taxpayers' experiences under the certification program, including the burdens they experience.
In addition, officials have begun identifying potential contractors who would perform the surveys and considering contracting options. According to IRS officials and documents, some discussions have been held with potential contractors to gain a better understanding of ways to test the survey instruments, techniques available to ensure the best possible response rate, and the number of taxpayers needing to be contacted to have useful results. Finally, because IRS would like to undertake some version of the qualifying child program next year, possibly including certification during the latter part of 2004, the timely production of evaluative data for this year's test will be critical for supporting decisions about what form future efforts will take. IRS is aware of the tight schedule. Officials note that although complete information will not be available before some decisions about whether and how to continue implementation in 2004 must be made, they expect to have preliminary data in time to inform those decisions. For example, IRS will not be able to completely answer whether, and if so why, taxpayers who are legitimately qualified to receive the EIC do not claim it on their 2003 tax returns until the end of the filing season for those returns, or later if taxpayers request filing extensions. IRS does expect that its contractor will have contacted many, if not most, of the taxpayers who file returns before the end of the filing season and do not claim the EIC. Thus, IRS expects to know during the fall of 2004 why many taxpayers in the certification test stop claiming the EIC. We did not evaluate some implementation issues because they were outside the scope of our review or still under development or because the Treasury Inspector General for Tax Administration had audits planned in these areas. Nonetheless, implementation issues could affect whether IRS is able to fully implement the certification test and ultimately improve compliance. We did not assess (1) whether IRS assigned an appropriate number of staff to assist taxpayers with questions and process the forms and documents relating to certification, (2) the adequacy of training materials for staff or the procedures put in place to help examiners consistently accept or decline taxpayers' supporting documentation, (3) the design or reliability of the databases that will be used to capture and evaluate program information, and (4) the supporting tools that examiners will use to do their jobs. IRS has developed broad plans for processing the certification workload. Officials identified about 30 different offices that will be affected by the new certification program. As a key part of its processing strategy, IRS plans to dedicate employees at its Kansas City campus to process cases, answer a special toll-free number, and make updates to a certification database based on responses from the test of the 25,000 taxpayers. The Kansas City site will have about 180 staff, the bulk of whom will come on board between September and December 2003. Approximately 40 staff took initial training between April and June 2003.
Given the persistently high EIC overclaim rates, the fact that the certification program is a test, and the key steps IRS has taken to address burden issues and focus the test on individuals least likely to meet the qualifying child residency requirements, we believe IRS has struck a reasonable balance between preventing unreasonable burden on EIC taxpayers and obtaining information on whether certification can be a useful approach to improving EIC compliance. In addition, with the recent program changes announced in August, it appears that IRS is taking even more steps to be mindful of these concerns. Although certification during the 2004 filing season gives IRS somewhat more time to modify the forms and take other actions to potentially further reduce the burden on taxpayers subject to the test, it also creates new challenges for IRS. The test will no longer be a direct test of the original concept of certifying taxpayer eligibility in advance of the filing season. Instead, testing will occur during the filing season, IRS's busiest time of year, and will give IRS only indirect evidence on how well certification might work before the filing season as originally envisioned. Further, because IRS currently plans to require taxpayers to provide proof of eligibility when they file their individual income tax returns or have their refunds frozen until they do, a greater portion of the taxpayers chosen for the test may have their refunds delayed than if certification had been done before the filing season. Finally, like virtually all aspects of the qualifying child certification program, IRS's future plans have yet to be determined and are largely dependent on the results and subsequent evaluation of this test. For various reasons, we did not review in detail some implementation issues, such as staffing and procedures for handling taxpayer responses, which could affect whether IRS is able to successfully implement the certification test. Thus, our opinion on whether IRS is ready to proceed is based only on whether it has adequately developed the test to prevent unreasonable burden and to improve compliance. Although the balance IRS has struck supports proceeding with the test, IRS's plan for evaluating the certification test is incomplete. IRS recognizes the need to evaluate the test and is developing its plan to do so. For some key test objectives, IRS has preliminarily identified some data that it believes must be collected to determine whether certification's objectives are achieved and has broadly identified when and how that information will be collected. Because the data are related to taxpayers' actions that will occur later this year or next spring, IRS appears to have some time to finalize its evaluation plan. Given that the qualifying child certification program is a key part of IRS's plans for reducing EIC overclaims and that certification is intended to help reduce overclaims while minimizing the burden on taxpayers and maintaining the EIC's participation rate, the Commissioner of Internal Revenue should, to the extent possible, accelerate development of the evaluation plan for the test. The plan should demonstrate how each of certification's objectives will be evaluated, including milestones for such critical steps as defining the specific data that will be collected, who will collect the data, and how the data will be analyzed in time to support decisions about the future of the program.
While not explicitly agreeing with our recommendation, in his September 22, 2003, letter, the Commissioner of Internal Revenue said that IRS would be including the components we suggested in its evaluation plan and that IRS is working to incorporate these components well before the certification test begins. The Commissioner said that our discussion of the evaluation plan is essentially accurate but provided an enclosure to his letter containing supplemental information on the plan. We were aware of the information described in the enclosure to the Commissioner's letter and considered it when drafting our report. The Commissioner also raised concerns about the comparability of EIC error rates to the error rates in taxpayers' reporting of certain types of income. We concurred that, by and large, the compliance data on reporting of these types of income are not comparable to the EIC error rate. As a result, we no longer show those comparisons in our final report. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Finance and the House Committee on Ways and Means. We are also sending copies to the Secretary of the Treasury; the Commissioner of Internal Revenue; the Director, Office of Management and Budget; and other interested parties. We will make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. This report was prepared under the direction of Joanna Stamatiades, Assistant Director. Other major contributors are acknowledged in appendix VII. If you have any questions about this report, contact Ms. Stamatiades at (404) 679-1900 or me at (202) 512-9110. IRS has compliance data on some taxpayer groups, such as individuals and small businesses, and on some tax items, such as income and credits. By and large, the compliance data IRS currently has are not comparable to the EIC data. IRS is implementing its National Research Program (NRP), which will provide new compliance data in 2004. In the meantime, IRS is using its Strategic Planning and Performance Management process to prioritize compliance issues. The compliance data that IRS has available for some taxpayer groups and tax items are largely based on the Taxpayer Compliance Measurement Program (TCMP), which was last conducted in 1988. However, these data cannot be compared to the EIC overclaim or error rates, in part because the data are 15 or more years old and much of the tax system and the economy have changed during that time, so reliable inferences cannot be drawn. In addition, the methods used to calculate compliance rates for TCMP are different from those used to calculate EIC rates. In late 2002, IRS began implementing its new NRP, a detailed study of individual taxpayers' compliance. As part of NRP, IRS has identified a random sample of approximately 47,000 returns from tax year 2001 and is in the process of verifying the information on the returns through reviews of IRS and third-party data. Where necessary to confirm the accuracy of taxpayer-reported information, IRS is conducting either correspondence or face-to-face examinations. IRS intends to conduct NRP reviews of other types of taxpayers, such as small corporations, and to use the NRP periodically to measure the compliance of individual taxpayers. The NRP sample of 47,000 returns includes about 7,300 EIC returns.
These EIC returns are subject to the same processes as the other returns in the sample, and those processes will include a review of the taxpayers' eligibility for the EIC. To determine whether the NRP review of these returns will yield results methodologically similar to those of the 1999 EIC compliance study, IRS is putting a sample of returns from the 1999 study through NRP processes (not including examinations) and comparing the results. According to IRS officials, this should allow them to see the impact of the methodological differences between the compliance study and the NRP review. IRS expects the results of the comparison study by September 2003. IRS plans to have preliminary NRP results in May 2004 and final results in November 2004. Until better compliance measurement data are available, IRS's organizational divisions use the Strategic Planning, Budgeting, and Performance Management process to prioritize the compliance problems IRS faces. Through this process, IRS says that it (1) identifies and explores critical trends, issues, and problems, (2) develops operational priorities and improvement projects to address existing or emerging problems, (3) explores drivers of program resources in order to develop resource allocation targets for carrying out the proposed strategies, and (4) enables division commissioners and the senior leadership teams to prioritize the strategies and projects and determine the resource requirements to apply to each strategy, operational priority, and improvement project. Based on managers' judgments made during this process, the Small Business/Self Employed (SB/SE) operating division, for example, identified its top six compliance priorities for fiscal years 2003 and 2004: high income nonfilers (income greater than $100,000), abusive offshore financial transactions, promoter investigations (those selling tax schemes to others), abusive tax avoidance transactions, high income taxpayers (income greater than $1 million), and returns with a high probability of unreported income. SB/SE, which conducts few examinations of EIC claims, did not consider EIC in this prioritization exercise since EIC has its own dedicated appropriation. Because IRS used different means to identify and prioritize these potentially noncompliant taxpayer groups, their identification as SB/SE priorities does not mean that their noncompliance rates are comparable to noncompliance rates established for the EIC or to rates that will be determined through the current or future NRP reviews or other EIC compliance studies. In addition to the data we compiled on IRS's EIC and qualifying child certification program, we also compiled overclaim rate and administrative cost data, as well as information on eligibility verification processes, for nine other federal or state benefit programs. We selected the nine benefit programs because each requires some type of certification for benefits, similar to the EIC, and because the EIC task force reviewed the same programs. We did not do a comprehensive analysis to determine which programs, if any, are most comparable to the EIC, nor did we determine whether the information reported is comparable across programs.
The overclaim rates, administrative costs, and eligibility verification processes for the EIC and the nine other benefit programs (Unemployment Insurance, Supplemental Security Income, Social Security Disability Insurance, the National School Lunch program, the Food Stamp program, Housing and Urban Development rental assistance, Medicaid, Medicare, and Temporary Assistance for Needy Families) are shown in table 4. Overclaim rates for programs other than the EIC for which data were available ranged from 0.2 to 10.7 percent. These overclaim rates reflect the percentage of total dollars paid out in error, not, for example, the percentage of claimants who made errors. To calculate the overclaim rates, most of the nine agencies selected a sample of program participants and conducted a detailed analysis of the cases. This can involve collection of additional supporting documentation, personal contacts with employers and other third parties, or home visits to program recipients. A description of how the overclaim rates were calculated is in table 5. Administrative costs range from $123 million to $11.9 billion for the nine programs. Administrative costs reported by federal agencies are likely not comparable across programs and may not include all of the costs involved in administering the programs. For example, various agencies and entities at the federal, state, and local levels have administrative responsibilities under the National School Lunch program. However, while the federal budget provides funds separate from program dollars to pay for administrative processes at the federal and state levels, officials at the local level pay for administrative costs from program dollars that include federal and state funding and student meal payments. The process used to determine and validate eligibility varies significantly. Some programs, such as the school lunch program, rely primarily on self-reported information, and verification is limited. Other programs, such as the Food Stamp program, require program staff to conduct extensive verification. To the extent known, how the overclaim rates are calculated for the benefit programs we reviewed, including the EIC, is shown in table 5. We were asked to respond to 12 questions about IRS's certification program, as shown in table 6. In consultation with our requesters' offices, we grouped these questions into three objectives, as follows: (1) describe the design and basis for the EIC qualifying child certification program as proposed by the EIC task force, (2) describe the current status of the program, including significant changes since program approval, and (3) assess whether the program is adequately developed to (a) prevent unreasonable burdens on EIC taxpayers and (b) improve compliance so that the test should proceed. In addition, we were asked to provide readily available information on (1) significant noncompliance rates other than for the EIC and (2) the overclaim rates and administrative costs of comparable benefit programs administered by states or the federal government and any verification process used by these programs.
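To make the distinction between these two measures concrete, the following sketch contrasts a dollar-weighted overclaim rate, the measure the programs report, with a claimant-weighted error rate, using entirely hypothetical claim amounts; it illustrates the arithmetic only, not any agency's actual methodology.

```python
# Hypothetical sample of claimants: (dollars_claimed, dollars_overclaimed).
claims = [
    (2000, 0),     # fully correct claim
    (1500, 1500),  # fully erroneous claim
    (3000, 300),   # partially erroneous claim
    (2500, 0),     # fully correct claim
]

total_claimed = sum(claimed for claimed, _ in claims)
total_overclaimed = sum(over for _, over in claims)

# Dollar-weighted rate: share of total dollars paid out in error.
dollar_rate = total_overclaimed / total_claimed

# Claimant-weighted rate: share of claimants who made any error.
claimant_rate = sum(1 for _, over in claims if over > 0) / len(claims)

print(f"dollar-weighted overclaim rate: {dollar_rate:.1%}")   # 20.0%
print(f"claimant-weighted error rate:   {claimant_rate:.1%}")  # 50.0%
```

In this hypothetical sample, one-fifth of the dollars but half of the claimants are in error, which shows why the two measures should not be compared directly.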
To respond to all of the questions, we reviewed and analyzed relevant IRS and other documentation, such as compliance reports, EIC task force reports, draft letters and forms, testing and focus group records, implementation plans, evaluation plans, and our prior products, and interviewed Department of the Treasury and IRS officials involved in the EIC certification program, including the Assistant to the Commissioner; the National Taxpayer Advocate; Research, Analysis, and Statistics officials; and members of the qualifying child certification implementation team. We did not verify the accuracy of the data shown in the various reports that we reviewed. Rather, we reviewed the steps IRS had taken to implement the certification program and determined, to the extent possible, how IRS ensured that the program had been adequately developed to prevent unreasonable burden and improve compliance. We did not evaluate whether IRS's preparations for implementing the certification test, such as staffing and training, were sufficiently developed to support proceeding with the test, because they were outside the scope of our review or still under development or because the Treasury Inspector General for Tax Administration had audits planned. The first objective includes, in order, our responses to questions 10, 4, and 6. To determine the current EIC error rates and whether any studies had been done on the impact of recent statutory changes on error rates, we reviewed IRS's most recent compliance study, Treasury Inspector General for Tax Administration reports, and our previous reports and interviewed IRS officials. In addition, we reviewed the legislative history of recent statutory changes (effective since 1999) that pertained to the EIC. We analyzed these data and IRS and Treasury reports to determine whether an analysis of the impact of the legislative changes on EIC error or overclaim rates had been conducted. To determine the range of alternatives considered by the task force, we reviewed documents and interviewed members of the EIC task force. To determine the correlation between overall EIC error rates, filing status, and gender, we interviewed officials from Research, Analysis, and Statistics and analyzed their data, and we reviewed the EIC task force reports and Treasury's past compliance studies. The second objective includes, in order, our responses to questions 1, 5, 2, and 7. To determine the status of the EIC certification program, including the number and types of taxpayers to be contacted, we interviewed IRS and Treasury officials and reviewed documents showing timelines and key milestones. We reviewed plans for the certification program, such as IRS's Concept of Operations and the 2004 Increment Evaluation Plan, in conjunction with IRS's current process for evaluating EIC eligibility. To calculate the percentage of EIC claimants subject to certification in 2004, we divided the planned sample size by the number of EIC claimants with qualifying children. To obtain information on IRS's testing of letters, forms, and documents for understandability, we observed the focus group testing that IRS conducted in Dallas, Tex., with EIC taxpayers, tax preparers, and other parties to understand how IRS assured itself that such persons understood the forms and thought they could obtain the required documentation.
To determine whether concerns we identified with the recertification program could also arise in the new initiative, we analyzed our prior reports on IRS's recertification program and IRS's progress in implementing our recommendations and then compared our analysis to the certification plans. The third objective includes, in order, our responses to questions 8 and 3. To determine whether the program had been adequately developed to improve compliance with minimal burden on taxpayers, we asked IRS officials to describe, and provide documentation supporting, the steps they took to ensure that the program was adequately developed. This included interviews and a high-level review of key steps and decisions found in various documents, such as the EIC task force reports, the Concept of Operations, staffing plans, training materials, and the evaluation plan. To determine the potential extent of the burden on taxpayers, we reviewed reports from outside groups that analyze programs and policies for low-income groups. We obtained the opinions of IRS officials and of outside stakeholders with whom IRS had met, such as representatives from the Annie E. Casey Foundation, low-income taxpayer clinics, and large tax preparation organizations, about any problems taxpayers might have in complying with the documentation requirements to establish EIC eligibility. We also interviewed IRS officials and reviewed EIC task force documents to learn about the range of alternatives taxpayers have available for obtaining similarly reliable documentation if they were unable to comply with the certification documentation requirements. Our responses to questions 11 and 12 are in appendixes I and II, respectively. To determine the error rates of non-EIC taxpayer groups having significant compliance issues, we reviewed compliance research reports, interviewed officials about IRS's National Research Program, and reviewed information contained in the Strategy and Program Plan. We discussed our analysis with key IRS officials, including representatives of the Assistant to the Commissioner. To determine the overclaim rates, administrative costs, and verification processes of comparable benefit programs administered by states or the federal government, we researched our prior reports and contacted our staff knowledgeable about the selected programs. We selected nine programs to review: Unemployment Insurance, Supplemental Security Income, Social Security Disability Insurance, school lunch, food stamps, Housing and Urban Development rental assistance, Medicaid, Medicare, and Temporary Assistance for Needy Families. We chose these programs largely because they were the same programs the EIC task force reviewed and because each of them had some sort of precertification program. We did not do a comprehensive analysis to determine which programs, if any, were most comparable to the EIC, nor did we determine whether the information reported for each program was consistent and could be compared across programs. We did not do additional analyses to determine how administrative costs compared to program outlays. Our response to question 9 is in the background section of this report. To determine the current process for determining EIC eligibility, we reviewed relevant IRS documents and our prior reports. We verified the accuracy of this information in interviews with IRS officials.
We conducted our work in Atlanta, Ga., Dallas, Tex., and Washington, D.C., from May 2003 through September 2003 in accordance with generally accepted government auditing standards. Key milestones for the certification program for fiscal years 2002 through 2005 are shown in figure 4. According to agency officials, IRS will send each of the 25,000 taxpayers subject to precertification four documents: (1) Notice 84-A, a letter informing taxpayers about the new program; (2) Form 8836, "Qualifying Children Residency Statement;" (3) Publication 3211M, "Earned Income Tax Credit Question and Answers;" and (4) Publication 4134, "Free/Nominal Cost Assistance Available for Low Income Taxpayers." Copies of these documents, current as of September 2003, follow. In addition to those named above, Tiffany Brown, Evan Gilman, Veronica Mayhand, Kathryn Larin, David Lewis, Donna Miller, Libby Mixon, Cheryl Peterson, and Tom Short made key contributions to this report.
The Earned Income Credit (EIC), a tax credit available to the working poor, has experienced high rates of noncompliance. Unlike recipients in many benefit programs, EIC recipients generally receive payments without advance, formal determinations of eligibility; the Internal Revenue Service (IRS) checks some taxpayers' eligibility later. Based on the most recent data available, IRS estimated tax year 1999 EIC overclaim rates to be between 27 and 32 percent of dollars claimed, or between $8.5 billion and $9.9 billion. To address overclaims, IRS plans to test a new certification program. Because IRS's plans have garnered much attention, Congress asked us to (1) describe the design and basis for the EIC qualifying child certification program, (2) describe the current status of the program, including significant changes, and (3) assess whether the program is adequately developed to prevent unreasonable burden on EIC taxpayers and improve compliance so that the test should proceed. The Assistant Treasury Secretary and IRS Commissioner convened a task force to identify ways of reducing EIC overclaims while minimizing taxpayer burden and maintaining the EIC's relatively high participation rate. In August 2002, the Secretary approved a recommendation to certify taxpayers' eligibility to claim EIC qualifying children. The proposal is based on analyses of the leading sources of EIC errors, thus focusing attention and burden on the subset of taxpayers most likely to make those errors. Since August 2002, IRS has made key changes to the certification program, including concentrating on residency certification and postponing relationship certification, delaying program implementation until later this year, and reducing the test sample from 45,000 to 25,000. Despite the changes, the process for selecting taxpayers, what taxpayers will receive from IRS, what taxpayers are required to provide, and the program's goals remain fundamentally the same as originally planned. In addition, IRS has emphasized that program expansions, if any, will depend on the results of this year's test. The process would involve three key stages. These changes, including the most recent, help achieve a better balance between preventing unreasonable taxpayer burden and addressing the EIC's high overclaim rate and support IRS's plans to test the certification program. However, IRS's plan for evaluating the test is incomplete, presenting only some information on how IRS would evaluate whether certification would reduce the EIC overclaim rate, minimize burden, and maintain a relatively high participation rate. The plan proposes potential options for identifying how and when certain critical data will be obtained, but it does not provide further details on when decisions will be made or on the specific data that will be collected. Officials have developed preliminary drafts identifying data to be obtained and have begun considering how to use contractors to gather the data. Because the data relate to taxpayers' actions that will occur next spring, IRS appears to have some time to finalize its evaluation plan.
Plans for the new convention center were initiated in 1993 by the District's Hotel and Restaurant Associations, the Convention and Visitors Association, and the District of Columbia government. The Washington Convention Center Authority Act of 1994 (1994 Act) authorizes WCCA to construct, maintain, and operate the new convention center, as well as maintain and operate the existing convention center. The current design calls for a total of 2.1 million gross square feet, including approximately 730,000 square feet of prime exhibit space, compared to the existing convention center's total of 800,000 gross square feet, including 381,000 gross square feet of prime exhibit space. When completed, the proposed new convention center is projected to rank sixth in the United States based on gross square feet of prime exhibit space, and its size should remain highly marketable into the 21st century. According to WCCA officials, the proposed new convention center is intended to allow the District to compete for larger conventions and trade shows. A 1993 feasibility study by Deloitte & Touche, commissioned by the local hospitality industry, stated that even though the District is viewed as a desirable location, the existing convention center is small compared to the convention centers of other cities, such as Atlanta, New York, Chicago, and Philadelphia. The current master plan calls for constructing a new convention center at Mount Vernon Square, the legislatively preferred site, located at Ninth Street and Mount Vernon Place, Northwest. In the 1993 feasibility study, eight potential sites were identified and evaluated against certain criteria such as physical and location characteristics, historic preservation, parking, and cost, including land acquisition and construction. As a result of this analysis, the Mount Vernon Square site was determined to be the preferred site due to its proximity to the District's downtown businesses and because the District owns the majority of the land, thus minimizing the cost of land acquisition. On September 25, 1997, WCCA obtained site approval and preliminary building design approval from the National Capital Planning Commission (NCPC). NCPC approved Mount Vernon Square as the site for the new convention center, which is about two blocks north of the current center. However, NCPC did not grant final approval of the building design but instead made several recommendations to improve the aesthetics of the building. WCCA anticipates that final design approval will be obtained from NCPC by early September 1998. To determine the new convention center project's estimated costs, proposed financing arrangements, and site selection process, we held discussions with and obtained information from various D.C. Council members and officials of the District government, WCCA and its consultants and advisors, NCPC, the U.S. General Services Administration, the Washington Metropolitan Area Transit Authority (WMATA), the Committee of 100, the Hotel Association of Washington, D.C., the Washington D.C. Convention and Visitors Association, the Restaurant Association of Metropolitan Washington, Moody's Investors Service, Standard & Poor's, and Coopers & Lybrand LLP. We compared the cost estimates for the project as of June 19, 1998, with the estimates in our September 1997 report to the Subcommittee.
We reviewed budget documents and held discussions with WCCA officials to obtain reasons for variations from the previous estimates. We also identified project cost components not included in the guaranteed maximum price (GMP) and determined who would be responsible for those costs. As you requested, we asked the General Services Administration to review the proposed GMP amendment to the construction management services agreement for the convention center project. We reviewed financial records and current balances to determine the amount of dedicated taxes reported as collected and transferred to WCCA. We reviewed the legislation for the proposed new tax structure for the convention center and obtained forecasts from the District government of future collections under the proposed new tax structure. We interviewed officials of WCCA, the District government, and the lockbox trustee vendor regarding operation of the lockbox since its inception. We obtained information on WCCA's financing plan for the new convention center project, and we reviewed the assumptions to determine whether they are reasonable. To evaluate the reasonableness of the dedicated tax revenue forecast, we reviewed the District's and Coopers & Lybrand's methodologies and assumptions for the dedicated taxes. In addition, we reviewed the auditor's workpapers of the reported taxes collected and deposited for the convention center project to determine whether the District government properly calculated and transferred dedicated taxes to WCCA. To determine how the site selection process was conducted, we reviewed the environmental impact study that was prepared for NCPC approval by a consultant hired by WCCA. We reviewed historical information on alternative site studies performed by the District and independent consultants, as well as WCCA's comparative analysis of the costs to construct the new convention center at the Mount Vernon Square site and at the alternative Northeast No. 1 site. We conducted our review from May through mid-July 1998 in accordance with generally accepted government auditing standards and considered the results of previous work. We requested comments on a draft of this report from WCCA and the District of Columbia government. Since we last reported to this Subcommittee, the estimated costs for building the new convention center have increased. Table 1 compares current cost estimates with the estimates included in our September 1997 report. Project costs increased $58 million, from $650 million to $708 million, and nonconstruction reserves have increased the financing-related costs by about $51 million, from $87 million to $138 million, for a current total funding requirement of $846 million. As of May 31, 1998, WCCA had spent about $27 million, primarily for contractual services, such as the program manager and design fees ($22 million); acquisition of additional land at the Mount Vernon site ($2 million); and administrative expenses ($2 million). While WCCA has maintained a $650 million budget, a number of changes have been made among the budget components, with some components increasing and some decreasing. A few project components have been taken out of the budget. The following changes have been made within the $650 million budget: Estimated building and site costs increased by $83.1 million based on a proposed GMP amendment. Predevelopment costs increased by $39.5 million, largely as a result of shifting "Other Construction Costs," estimated at $35.8 million, to the predevelopment cost category.
Fixtures/furnishings/equipment decreased by $17.7 million in anticipation of negotiating arrangements with vendors to provide such equipment. Soil remediation and hazardous materials removal costs decreased by $6 million as a result of refined estimates. Section 106 mitigation cost increases of $5 million reflect some additional requirements not included in previous estimates. The Metro station upgrade, previously estimated to cost $22.3 million, has been taken out of the budget in anticipation of federal funding. The project contingency decreased by $45.9 million. Considering the $10 million contingency in the GMP, the decrease is $35.9 million. The following estimated project costs, when added to WCCA's $650 million budget, result in total estimated project costs of $708 million: the portion of utilities relocation costs not included in the building and site costs, for which WCCA anticipates $10 million of federal funding; the Metro station upgrade, for which WCCA anticipates $25 million of federal funding; anticipated vendor-provided equipment of about $17.7 million; and project administrative costs of $5 million, which have not been shown in the budget. As part of the prospective financing arrangements, some reserves have been increased and others established to strengthen the financial arrangement, for an overall increase of $51 million. Making up the largest portion of the 1998 estimated project costs are the costs associated with the GMP (building and site). The $500.6 million GMP is 71 percent of the $708 million estimated project cost. Under the terms of the construction management services agreement between WCCA and the construction manager, the construction manager submitted a GMP proposal to WCCA. The proposal provides the basis for WCCA and the construction manager to negotiate the final price. Once the price and its basis (the terms, conditions, assumptions, and related drawings and plans) are approved by WCCA, these will be set forth in the GMP amendment to the agreement. To become final, the amendment must be approved by the D.C. Financial Responsibility and Management Assistance Authority (Authority). Under the GMP proposal, the contractor is to perform all necessary work to construct the project so that it is complete and a fully functioning, first-class convention center. The construction management services agreement and the GMP amendment will allocate the costs for the project between WCCA and the construction manager. Any increases in the cost of items allocated to the construction manager will be the responsibility of the construction manager. Any increases in the costs of items allocated to WCCA (or an increase in cost items allocated to the construction manager resulting from a change order issued by WCCA) are the responsibility of WCCA. The contract and proposed amendment also provide an incentive for the construction manager to complete the project for less than the GMP by giving the construction manager 25 percent of cost savings up to $9.5 million. The percentage is adjusted up or down depending on the construction manager's success in meeting the established goals for using local, small, or disadvantaged business enterprises. There is a penalty of $50,000 a day for failure to meet the completion date. The GMP proposal (building and site costs in table 1) is $83.1 million greater, or about 20 percent more, than the 1997 estimate of $417.5 million.
A WCCA official attributed the higher cost to a 175,000-square-foot increase to accommodate support and public space, retail, and parking areas; design changes; inflation; and the $10 million construction contingency. Table 2 shows the components of the GMP. Site work, concrete, and steel account for $233 million, or 47 percent, of the GMP. Mechanical and fire protection, electrical work and security, and design allowances account for $117 million, or 23 percent. The GMP proposal specifies that WCCA is responsible for the following costs: WCCA is responsible for costs associated with soil remediation and hazardous materials removal. WCCA officials told us that WCCA could assume the liability for soil remediation and hazardous materials removal at less cost than including it in the GMP because the construction manager would require a significant contingency amount for this line item. Since the time of our last review, WCCA has had testing performed at the Mount Vernon Square site. Based on the results to date, WCCA officials expressed confidence that soil remediation and hazardous materials removal costs will not significantly exceed their estimates. WCCA has decreased the 1997 estimate of such costs from $11 million to $5 million. WCCA officials informed us that the $11 million figure was an attempt to make an adequate estimate before the site tests were completed. If costs exceed the budgeted $5 million, WCCA plans to offset this increase with funds from the project contingency. WCCA is responsible for the additional costs incurred if the construction manager encounters subsurface or soil conditions that materially differ from those indicated in information provided by WCCA. The GMP proposal is based on certain lump sum allowances and quantity and unit price assumptions. WCCA is responsible for the additional costs that may be incurred if actual costs exceed the allowances and assumptions specified in the proposal. For example, the proposal contains 24 design allowances, including light fixtures, street lighting, signage, and various finishes, totaling $35.1 million. The GMP amount will be adjusted upward if actual costs exceed the allowances. WCCA is responsible for additional costs resulting from any change orders to the contract. In addition to the potential for incurring building and site costs in excess of the GMP amount, WCCA is responsible for the remaining project costs that are not covered by the GMP, which are noted in table 1. The $207 million estimated costs for these components decreased $25 million from the 1997 estimates, offsetting some of the increase in the 1998 estimate of building and site costs. WCCA has omitted from its project budget estimated costs of $25 million for the Mount Vernon Square-UDC Metro station upgrade and $10 million for the utilities relocation work in anticipation of these costs being paid from federal grants. In the case of the Metro upgrade, the President's fiscal year 1999 budget to the Congress includes $25 million to be paid directly to WMATA, which would be responsible for the work. The utility relocation work would be financed with Community Development Block Grant (CDBG) funds made available to the District by the U.S. Department of Housing and Urban Development (HUD). Until these initiatives are approved, there is a financial risk to the project budget.
WCCA has reduced the project costs for the fixtures, furnishings, and equipment component by about $17.7 million in anticipation of negotiating arrangements with vendors to provide certain equipment and services, such as a heating and cooling plant, communications, and food services equipment. This arrangement technically takes the costs out of the budget without reducing actual project costs. According to WCCA officials, this arrangement has been done at other convention centers. WCCA is at risk for these costs until contracts have been executed with vendors. The project budget includes two contingency amounts: (1) a $10 million construction contingency contained in the GMP earmarked for cost increases that are the responsibility of the construction manager and (2) a $30 million project contingency earmarked for cost increases both inside and outside the GMP for which WCCA is responsible. The $40 million contingency is a decrease of $35.9 million from the $75.9 million in the 1997 budget, although it now covers a larger project cost. According to WCCA's Managing Director of Development, given that WCCA has successfully negotiated a GMP with the construction manager and has completed many preconstruction activities, an 8 percent ($40 million) contingency is considered reasonable. WCCA's current financing plan for the project calls for total funds of about $846 million. About $616 million, or 73 percent, is expected to be derived from revenue bonds supported by dedicated taxes. In addition, WCCA anticipates using $110 million from dedicated tax revenue collections through July 1, 1998; about $62.7 million from interest earned on the bond proceeds; $35 million from the federal government to fund the Metro upgrade and utility relocation; $18 million from vendors to fund furniture, fixtures, and equipment; and $5 million from the operating subsidy to cover administrative costs. Assuming the estimated project costs are substantially accurate, the financing plan projections, including the projected growth in dedicated tax revenues, seem reasonable; however, until the federal funding is approved in an adopted 1999 budget and until WCCA signs contracts with vendors, there is a risk to the financing plan of about $53 million. WCCA's CFO stated that any additional funding needs would require a reevaluation of the budget and financing plan assumptions. If WCCA were to seek additional funding from the District, it would require approval by the District's City Council, the Mayor, and the Authority. Table 3 shows the May 1997 and the current (May 1998) financing plans. Since May 1997, WCCA has proposed several changes to its financing plan. As the table indicates, the current financing plan assumes a lower interest rate, an increase in the annual dedicated tax revenues to support the bond financing, and an increase in the term of the bonds from 30 to 34 years. These changes would allow WCCA to borrow more money to finance the project. In addition, since the amount of cash available from dedicated taxes and bond proceeds has increased, the amount estimated for construction fund earnings has also increased from the original plan. Finally, the current financing plan includes funding for financing costs and reserve requirements. Since WCCA is exposed to rising interest rates until the bond financing is finalized, WCCA considered a scenario with a higher bond interest rate of 5.85 percent, or 25 basis points higher than its preferred financing plan interest rate assumption of 5.6 percent.
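To give a rough sense of the arithmetic behind this sensitivity analysis, the sketch below computes annual debt service on $616 million of bonds over a 34-year term at both interest rate assumptions, using a simple level-debt-service (annuity) formula. This is an illustrative approximation only; WCCA's actual bond structure, which may involve capitalized interest, serial maturities, and reserve funds, is not detailed in this report.

```python
# Simplified level-debt-service approximation (illustrative only, not
# WCCA's actual bond structure): the annual payment that fully amortizes
# a fixed-rate borrowing, analogous to a mortgage payment formula.

def annual_debt_service(principal: float, rate: float, years: int) -> float:
    """Annual payment that fully amortizes principal at rate over years."""
    return principal * rate / (1 - (1 + rate) ** -years)

principal = 616e6  # bond proceeds assumed in the financing plan
years = 34         # bond term in the current plan

base = annual_debt_service(principal, 0.0560, years)    # preferred assumption
stress = annual_debt_service(principal, 0.0585, years)  # 25 basis points higher

print(f"debt service at 5.60 percent: ${base / 1e6:.1f} million per year")
print(f"debt service at 5.85 percent: ${stress / 1e6:.1f} million per year")
print(f"difference: ${(stress - base) / 1e6:.1f} million per year")
```

Under these simplified assumptions, the 25-basis-point increase adds a little over $1 million a year to debt service.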
Since WCCA is exposed to rising interest rates until the bond financing is finalized, WCCA considered a scenario with a higher bond interest rate of 5.85 percent, or 25 basis points higher than its preferred financing plan assumption of 5.6 percent. Based on this scenario, the increased interest rate does not materially change the financing plan.

Since the majority (73 percent) of the funds to finance the project are expected to come from bonds supported by dedicated taxes, our review of the financing plan considers some of the key factors rating agencies use to rate dedicated tax financing for convention center projects: (1) breadth of the tax base, (2) historical performance of the revenue stream, (3) the underlying strength of the economy, and (4) the absence of legislative risk.

Breadth of the tax base. According to the rating agencies, taxes levied on a broader range of goods, services, and population are stronger and less volatile than those derived from narrower bases. The current tax structure, established in fiscal year 1995 to provide financing for predevelopment activities for the proposed new center, comprises the hotel occupancy tax; the hotel sales tax; and the corporate franchise, unincorporated business franchise, restaurant meals, alcoholic beverages, and automobile rental taxes. On June 16, 1998, the D.C. City Council approved a change to the dedicated tax structure, effective October 1, 1998, to guarantee the repayment of revenue bonds issued to finance the construction of the new center. After the Council changes become effective, WCCA will essentially rely on taxes levied on hospitality industries to finance the construction costs of the project. The District's hotel sales tax rate was increased from 13 percent to 14.5 percent, of which WCCA will receive 4.45 percentage points; WCCA's existing share is 2.5 percentage points. WCCA will continue to receive 1 percentage point of the 10 percent tax on restaurant sales, alcoholic beverages, and automobile rentals. WCCA will no longer receive a portion of the corporate franchise and unincorporated business taxes, and the hotel occupancy tax will be repealed. The Council change requires the Mayor to impose a surtax on the hotel sales tax if additional funds are required to cover debt service and operations costs. Under the Washington Convention Center Authority Act of 1994, the existing taxes dedicated to WCCA include the following:

- 2.5 percent of the 13 percent sales and use tax on hotel room charges;
- 1.0 percent of the 10 percent sales and use tax on restaurant meals, alcoholic beverages consumed on premises, and automobile rental charges;
- a $1.50 hotel occupancy tax per hotel room per overnight stay (WCCA receives 40 percent of this amount; the D.C. Committee to Promote Washington and the Washington Convention and Visitors Association receive 60 percent);
- a 2.5 percent surtax on the 9.5 percent corporate franchise tax; and
- a 2.5 percent surtax on the 9.5 percent unincorporated business franchise tax.

According to the District, the automobile rental tax contributes only a small portion of collections because most car rental companies are located in Virginia and Maryland, near the airports. Under the current tax structure, about 79 percent of the revenue WCCA has received comes from the hotel sales, restaurant, and automobile rental taxes. As a result of the change to the dedicated tax structure, 71 percent of WCCA's total dedicated tax revenues are expected to come from the hotel sales tax and the remaining 29 percent from the restaurant sales and automobile rental taxes. Convention centers in Dallas, Baltimore, and New Orleans have been funded primarily by a hotel sales tax.
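To illustrate how the new structure splits a customer's tax payment, consider hypothetical $100 bills (the bill amounts are illustrative; the rates and WCCA's shares are those quoted above).

```python
# Sketch: WCCA's share of taxes on hypothetical $100 bills under the
# new dedicated tax structure. Bill amounts are illustrative only.
hotel_bill, restaurant_bill = 100.00, 100.00

hotel_tax = hotel_bill * 0.145            # 14.5 percent hotel sales tax
wcca_hotel = hotel_bill * 0.0445          # WCCA's 4.45 percentage points
print(f"hotel: ${hotel_tax:.2f} tax, ${wcca_hotel:.2f} to WCCA")
# hotel: $14.50 tax, $4.45 to WCCA

restaurant_tax = restaurant_bill * 0.10   # 10 percent restaurant/alcohol/auto rental tax
wcca_restaurant = restaurant_bill * 0.01  # WCCA's 1 percentage point
print(f"restaurant: ${restaurant_tax:.2f} tax, ${wcca_restaurant:.2f} to WCCA")
# restaurant: $10.00 tax, $1.00 to WCCA
```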
Historical performance of the revenue stream. The rating agencies' analysis also includes a review of multiyear historical data for collections of the dedicated revenues. According to the rating agencies, 5 years of historical data are usually a good indicator of how the tax is likely to perform in the future. Table 4 shows the collection history of the taxes currently dedicated to the project. Table 4 shows that during fiscal years 1993-1997, total tax collections grew by an average annual rate of about 8.7 percent. The hotel sales, restaurant, and automobile rental taxes' combined annual rate of growth averaged 6.6 percent over the same period. These taxes experienced rate increases in fiscal year 1995. According to the District, the increase in collections also reflects improvement in the District's economy, including the increase in the number of tourists to the District. WCCA's tax receipt data for fiscal year 1998 show receipts through May 1998 running about $4.6 million higher than for the same period last year.
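The "average annual rate" above is a compound growth rate. A minimal sketch of the computation follows; only the roughly $44 million in fiscal year 1997 collections appears elsewhere in this report, so the fiscal year 1993 base below is back-calculated for illustration rather than taken from table 4.

```python
# Sketch: compound average annual growth rate (CAGR) over FY1993-1997.
# FY1997 collections of ~$44M are cited in this report's summary; the
# FY1993 base is back-calculated for illustration, not read from table 4.
def cagr(first: float, last: float, years: int) -> float:
    """Compound average annual growth rate over `years` annual steps."""
    return (last / first) ** (1 / years) - 1

fy1997 = 44.0                    # millions of dollars
fy1993 = fy1997 / (1.087 ** 4)   # ~$31.5M implied by 8.7 percent annual growth
print(f"implied FY1993 base: ${fy1993:.1f} million")
print(f"CAGR, FY1993-1997: {cagr(fy1993, fy1997, 4):.1%}")   # 8.7%
```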
Underlying economic strength. Since economic conditions are an important factor for the future stream of dedicated taxes, rating agencies evaluate the underlying strength of those parts of the economy most relevant to the dedicated taxes. For example, if the dedicated tax is mainly generated by local residents, such as the restaurant tax, the evaluation would focus on the strength of the local economy. If the tax is mainly generated by visitors, such as the hotel sales tax, the evaluation would include the economies of the areas from which visitors come. In cities like Washington, D.C., which attracts visitors from all over the country and the international community, the future stream of hotel sales tax revenue is likely to be influenced by the strength of the national economy. WCCA's financing plan assumes 1 percent growth in dedicated tax revenues under the new dedicated tax structure beginning in fiscal year 1999. According to WCCA's chief financial officer, the 1 percent growth assumption is conservative when compared to historical trends in collections. To evaluate the reasonableness of WCCA's tax projection, we reviewed the historical trends, as discussed above; the underlying strength of the economy most relevant to the dedicated taxes; and the District of Columbia government's and Coopers & Lybrand's methodologies and assumptions used in projecting the future stream of revenues.

The U.S. economy is projected to grow at a moderate rate over the next decade. For example, the Congressional Budget Office estimates that average annual growth in real gross domestic product (GDP) will be 2.2 percent during 1998-2008, and over the same period, the consumer price index (CPI) is projected to increase at an average rate of 2.7 percent. The WEFA Group, a macroeconomic forecasting firm, expects real GDP to grow at 2.3 percent per year during 1998-2007 and the CPI to increase at 2.6 percent during this period. The projections for the Washington, D.C., population and economy reflect improvements over the next 3 to 5 years. Standard & Poor's DRI projects that the population during 1998-2007 will remain virtually unchanged, in contrast to a decline averaging over 2 percent per year during the previous 3 years. Standard & Poor's DRI also projects that, during 1998-2007, the District's personal income will increase at an inflation-adjusted average annual rate of 0.9 percent; during 1995-1997, the average annual growth rate of personal income was close to zero.

Figure 1 shows the combined annual percentage change for the hotel, restaurant, alcoholic beverages, and automobile rental taxes based on the District's and Coopers & Lybrand's forecasts under the new tax structure from fiscal year 1999 through fiscal year 2007. The District forecasts average annual growth in revenues for the hotel and restaurant taxes combined over fiscal years 1999 to 2007 to be about 2.1 percent, while Coopers' forecast reflects combined average annual growth of about 3.4 percent. These estimates are substantially less than the combined average annual growth rate of 6.6 percent in these same taxes between fiscal years 1993 and 1997. To evaluate the reasonableness of Coopers' projected revenues from the hotel tax, we reviewed its key assumptions and its U.S. lodging industry forecast model. The model estimates future hotel room demand, hotel room starts, and hotel room rate inflation; the key predictors for these variables are real GDP, the inflation-adjusted average daily rate, and the lagged occupancy level, respectively. Based on the WEFA forecasts discussed above, Coopers' report assumed that the U.S. economy will continue to grow at about 2.3 percent. It also assumed that the District's lodging market will not experience significant demand or supply shocks during this period. Coopers' lodging industry forecast model for the District is driven by the U.S. forecast model. Coopers estimates that hotel room demand in the District between 1999 and 2001 will increase at or slightly above the U.S. industry trend growth. Beyond the year 2001, hotel room demand growth is expected to decrease to levels at or slightly below those for the U.S. lodging industry as a whole. The growth in total hotel room revenue to the District during 1998-2007 is expected to average 4.7 percent annually. Based on an analysis of historical data, Coopers determined that the number of District households and the District's gross income can predict restaurant taxable sales. Coopers estimates that during 1999-2007, restaurant sales will increase at an annual rate of about 1.6 percent, lower than the projected rate of inflation. The estimate is based on the assumption that the District's gross income will grow slightly in nominal terms. Coopers assumed that the number of households will continue to decline, but at a lower rate than in previous years. Coopers' assumptions are generally consistent with the outlook provided by the independent sources discussed above. Accordingly, the projected rate of growth in dedicated taxes appears to be reasonable. The District's Office of Tax and Revenue (OTR) projections are derived from a time-series forecasting model. Using a standard time-series model, OTR estimates future hotel and restaurant tax revenues using monthly historical tax collection data. These estimates were then fine-tuned using projected inflation and personal income, under the assumption that these variables will influence the future trend of tax revenues. Based on the DRI forecasts, OTR assumed annual inflation of 2.4 percent and annual growth in personal income of 2.5 percent during 1997-2002. OTR assumed no major changes in the District's economic circumstances or in the future success of collection practices. Overall, OTR's methodology and assumptions appear to be conservative, as reflected in its projected growth rates for the hotel and restaurant taxes discussed above.
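To show how much the competing growth assumptions matter over the financing horizon, the sketch below compounds an assumed revenue base at each forecast rate. The $80 million fiscal year 1998 base is hypothetical (the report does not state collections under the new tax structure), so only the relative gap is meaningful; with WCCA's actual base, the report puts the gain from the District's forecast at about $63 million, as the next paragraph notes.

```python
# Sketch: cumulative FY1999-2007 dedicated tax revenues under the growth
# assumptions discussed above. The $80M FY1998 base is hypothetical.
base = 80.0  # millions of dollars, assumed FY1998 collections

def cumulative(base: float, rate: float, years: int = 9) -> float:
    """Sum of revenues over `years` years of compound growth."""
    return sum(base * (1 + rate) ** k for k in range(1, years + 1))

for label, rate in [("WCCA plan", 0.01),
                    ("District forecast", 0.021),
                    ("Coopers & Lybrand forecast", 0.034)]:
    print(f"{label} ({rate:.1%}/yr): ${cumulative(base, rate):,.0f} million")

extra = cumulative(base, 0.021) - cumulative(base, 0.01)
print(f"District forecast vs. WCCA plan: +${extra:.0f} million over FY1999-2007")
```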
Based on our analysis of trends in collections of the hotel sales, restaurant, and automobile rental taxes; the national and local economic outlook; and the District's and Coopers & Lybrand's assumptions, WCCA's growth assumption of 1 percent to support the bond financing seems conservative. Assuming that the District's forecast of annual average growth of about 2 percent holds true, WCCA stands to gain an additional $63 million for fiscal years 1999 through 2007, which could be used to retire the bonds earlier than their stated maturity dates.

Absence of legislative risk. According to the rating agencies, for dedicated tax-secured debt to be rated investment grade, the revenues must not be subject to annual appropriation, and the authority to levy the tax must not be subject to revocation by the legislature within the life of the debt. Section 212 of the Washington Convention Center Authority Act of 1994, as amended, provides that the District pledges to the Authority that the District will not limit or alter rights vested in the Authority to fulfill agreements made with holders of the bonds, or in any way impair the rights and remedies of the holders of the bonds until the bonds, together with interest and all costs and expenses in connection with any action or proceedings by or on behalf of the holders of the bonds, are fully met and discharged. Also, Section 490(f) of the Home Rule Act, as amended, makes dedicated tax revenues pledged to secure revenue bonds generally available without requiring further appropriations. While the financing plan does not reflect legislative risks associated with the dedicated taxes, WCCA assumes one-time federal funds of about $35 million for Metro expansion and utility relocation at the Mount Vernon Square site. The President's fiscal year 1999 budget to the Congress includes $25 million for WMATA to expand the Mount Vernon Square-UDC Metro station. The $10 million anticipated for utility relocation work is expected to be financed with a Community Development Block Grant made available to the District by the U.S. Department of Housing and Urban Development. However, until the federal budget is adopted, it is uncertain whether these federal funds will be available. Therefore, there is a legislative risk to the financing plan of about $35 million. If these grants are not received, WCCA's $30 million project contingency outside of the GMP would not be sufficient to cover these estimated costs. The other components of the financing plan include cash on hand from dedicated tax collections, construction fund earnings, and reserve requirements.

WCCA Cash-on-Hand. WCCA's financing plan includes $110 million from dedicated tax collections. The financing plan was predicated on entering the bond market on July 1, 1998; WCCA therefore expected that it would have already used $37 million of dedicated tax collections on preconstruction activities and would have about $73 million of the collections on hand to satisfy reserve requirements at that time. WCCA now anticipates entering the bond market in September 1998. Due primarily to the delay in entering the market, and because tax revenue collections through May 1998 had been higher than anticipated, WCCA's available cash could be about $15 million higher than the $110 million assumed in the financing plan.
Based on dedicated tax collections and expenditures as of May 31, 1998, if WCCA's projected expenditures are substantially accurate, by September 1 it would have spent an additional $15 million—for a total of about $52 million—on preconstruction activities and would have $73.6 million cash on hand to satisfy the reserve requirements.

Construction Fund Earnings. Based on a preliminary construction draw schedule, the financing plan assumes that about $62.7 million can be generated in interest earnings on bond proceeds of about $550 million. The bond proceeds would be deposited in a construction fund; during the construction period, funds not yet drawn from the account would earn interest at a rate of 5 percent. The interest rate assumption appears reasonable when compared to the rate of earnings on WCCA's dedicated tax revenue investments over the past 3 years. When analyzing this type of funding mechanism, the rating agencies' primary concerns are whether the construction estimate is reasonable and what source of funds would be used to cover a shortfall should earnings fall below the assumption. According to WCCA's underwriter, at the time of bond issuance WCCA plans to obtain a fully flexible investment agreement, which will include guaranteed investment earnings and conditions stipulating that there would be no penalty should additional funds be required to satisfy project needs.

Reserve Requirements. The financing plan assumes that about 15 percent, or $126 million, of the $846 million in identified funds will be used to establish the following reserves: debt service, operations and marketing, renewal and replacement, and rate/revenue stabilization. Reserves are established to strengthen the bond transaction. We spoke with rating agency officials regarding the adequacy of WCCA's reserves. There appear to be no established guidelines regarding the level of funding for each reserve; the actual character and amount of each reserve are shaped by several factors: the quality of the dedicated tax revenues, anticipated ongoing needs of the facility, economic projections, availability of funds, and bond insurer and rating agency concerns. WCCA's underwriter states that the estimated reserve requirements are more than adequate and are necessary to achieve the lowest possible cost of borrowing.

Discussion about building a larger convention center was already under way when the existing convention center opened in 1983. The site at Mount Vernon Square was identified as a preferred site as early as 1986. Since that time, a number of studies have examined alternative sites in various parts of the city, with the Mount Vernon Square site repeatedly being selected as the most viable, given its location in the downtown core amidst the kind of amenities that make a thriving convention center, with its associated economic benefits, possible. Research results included in our February 1998 report demonstrated that convention centers generally cannot earn enough direct revenues to cover all of their recurring operating costs or their construction costs. However, cities sponsor convention centers with the expectation that spending by out-of-town convention delegates will generate significant direct and indirect economic benefits to the area, increasing local tax revenues enough to offset the centers' operating losses and construction costs.
In addition, based on Coopers & Lybrand's Analysis for the Proposed Washington Convention Center, the proposed center is estimated to generate approximately $1.1 billion of total output in 2002 within the metropolitan D.C. area, increasing to about $1.4 billion in 2006. Further, research results indicate that the most important factor in a convention center's success in attracting large numbers of out-of-town delegates who spend significant amounts of money is location in a desirable city near hotels, restaurants, and shopping. WCCA determined that Mount Vernon Square was the most advantageous site because it offered a variety of favorable attributes: proximity to existing hotels, restaurants, and museums; the presence of an existing Metrorail station; and entertainment venues and other tourist points of interest. Section 215 of the Washington Convention Center Authority Act of 1994 (1994 Act) states that Mount Vernon Square is where the new convention center "should be located." The same section says that the Mayor "may evaluate" other sites and "should specifically evaluate 2 sites": Northeast No. 1 and the site located at the Anacostia Metro. When the 1994 Act was passed, the latter two sites had already been evaluated (along with nine others) in a financial feasibility study conducted between 1991 and 1993 by Deloitte and Touche. In 1996, Northeast No. 1 and the Anacostia Metro site were evaluated again by WCCA, along with 14 other sites, when it prepared a study regarding the project. WCCA determined that neither the Northeast site nor the Anacostia Metro site compared favorably with Mount Vernon Square, largely because of their isolated locations and distance from hotels and other amenities. When a final Environmental Impact Statement (EIS) was prepared for the National Capital Planning Commission (NCPC) in April 1997, it rated the Mount Vernon Square site as more acceptable. In June 1997, the federal Commission of Fine Arts gave its preliminary approval to a design for the convention center at Mount Vernon Square; such approval is a requirement for public buildings sited in the District. In September 1997, the following additional approvals were granted:

- The federally appointed Advisory Council on Historic Preservation approved the preservation/neighborhood revitalization plan for the Mount Vernon Square site.
- A Memorandum of Agreement outlining specific historic preservation mitigation measures at Mount Vernon Square was signed by the D.C. Mayor, the D.C. City Council, the D.C. State Historic Preservation Officer, NCPC, and WCCA.
- The Mount Vernon Square site, building footprints, and preliminary design were approved by NCPC.

Before NCPC issued its approvals for building the convention center at Mount Vernon Square, it heard public testimony on the issue. At that time, some community members requested that the District reconsider Northeast No. 1 as a possible site for the new convention center. WCCA then performed an additional study of the Northeast site, including a cost comparison. WCCA concluded that building the convention center at the Northeast site would cost substantially more than building it at Mount Vernon Square and would substantially delay the project.
In commenting on this report, the Chief Financial Officer, the General Counsel, and the Managing Director of Development of WCCA, as well as the District of Columbia government's Chief Economist, generally agreed with our presentation of their progress and the data presented concerning construction and financing for the proposed convention center. We are sending copies of this report to the Ranking Minority Member of your Subcommittee and to the Chairmen and Ranking Minority Members of the Senate and House Committees on Appropriations and their subcommittees on the District of Columbia and the Subcommittee on Oversight of Government Management, Restructuring and the District of Columbia, Senate Committee on Governmental Affairs. Copies will be made available to others upon request. Major contributors to this report are listed in appendix I. If you or your staff need further information, please contact me at (202) 512-4476.
Pursuant to a congressional request, GAO reviewed the Washington Convention Center Authority's (WCCA) efforts to arrange for financing and constructing a new convention center in the District of Columbia, focusing on: (1) the estimated cost of this project, including the guaranteed maximum price (GMP) for constructing the new convention center, and the risk exposure for both the contractor and the District; and (2) the financing plan, including proposed changes to the revenue base, the history of dedicated tax collections, projections for future revenues, and sufficiency to cover the GMP and other project costs. GAO noted that: (1) WCCA is proceeding with efforts to build a new convention center at Mount Vernon Square at a cost WCCA officials estimate to be $650 million; (2) this estimate has not changed since GAO reported on this project in September 1997; (3) however, GAO's latest review of the project identified an additional $58 million in project costs which--because WCCA expects them to be funded through federal grants or moved into future operating costs--are not included in WCCA's total project costs; (4) these costs raise the project's cost estimate to $708 million, excluding reserve requirements and financing costs of $138 million; (5) the majority of the estimated project costs are covered in a $500.6-million GMP for construction; (6) the GMP lays out 22 different cost components and sets limits on financial risks to the construction manager, Clark/Smoot; (7) certain areas of risk are not included in the $500.6-million price; (8) an estimated $207 million in other project-related activities will be or have been contracted for separately; (9) WCCA's current financing plan to cover predevelopment, construction, reserves and operation of the convention center calls for about $846 million; (10) seventy-three percent of the funds needed to finance the project are expected to be derived from revenue bonds supported by dedicated taxes; (11) changes from the previous financing plan include increasing the term of the bonds as well as the dedicated taxes to allow WCCA to borrow more money for the project; (12) WCCA received $44 million in dedicated taxes in 1997, and WCCA has projected collections to increase at 1 percent per year over the next several years; (13) these and other factors will be looked at by WCCA's consultants, rating agencies, and bond insurers who will evaluate the financing package and determine its ability to cover the GMP and other project costs; (14) risks associated with the financing package could affect the rating of the bonds and accordingly, the interest rate; (15) although WCCA plans to address an $18-million reduction in its construction budget by negotiating arrangements with vendors to provide equipment and services, to date there are no executed contracts to cover these arrangements; and (16) the site selection process for the convention center has a long history, and numerous studies have consistently identified Mount Vernon Square as a preferred site.
The energy used to generate our nation's electricity comes from many different sources. Currently, most electricity in the United States is generated with fossil fuel and nuclear technologies—coal (52 percent), nuclear (20 percent), natural gas (16 percent), and oil (3 percent). Fossil fuels are considered nonrenewable because they are finite and will eventually dwindle or become too expensive or environmentally damaging to retrieve. Wind, however, is one of several sources of energy known as renewable energy. Other renewable energy sources include sunlight (photovoltaics), heat from the sun (solar thermal), naturally occurring underground steam and heat (geothermal), plant and animal waste (biomass), and water (hydropower). To reduce our dependence on nonrenewable energy sources, the United States has promoted the development of renewable resources, such as wind. A key federal program supporting the development of such sources is the federal production tax credit established by the Energy Policy Act of 1992. This law provides a tax credit for electricity generated by renewable energy sources, such as wind. The Economic Recovery Tax Act of 1981 provides an additional incentive for wind power growth: in some cases, this law allows a 5-year depreciation schedule for renewable energy systems. In conjunction with the tax credit, this accelerated depreciation allows an even greater tax break for renewable energy projects, such as wind projects, that have high initial capital costs. Some states also provide incentives for wind power development. One of the strongest drivers is a renewable portfolio standard. Generally, a renewable portfolio standard requires utilities operating in a state to acquire a minimum amount of their electricity supply from renewable energy sources. As of June 2005, 18 states had some form of renewable power requirements capable of being met by wind power. Other common types of incentives for renewable energy development provided by several state and local governments are income tax incentives and property and sales tax exemptions. Many states provide more than one type of incentive. In addition, 25 states have statewide wind working groups that are funded (at least partially) through grants from the Department of Energy (DOE). The purpose of these working groups is to promote more widespread development of wind power. These federal and state programs have helped spur significant wind power development in the last 5 years. At the end of 2004, the total installed capacity from wind power in the United States was 6,740 megawatts (MW), or enough capacity to meet the electricity demand of between 1.5 and 2.0 million average American households (see fig. 1). Between January 2000 and December 2004, installed wind power capacity more than doubled, adding over 4,200 MW. Although wind power generates less than 1 percent of the nation's electricity, with an average annual growth rate of over 24 percent, it is the fastest growing source of electricity generation on a percentage basis. Because wind energy output is a function of wind speed, the best locations for turbines are areas with frequent strong winds to turn the blades of the power-generating turbines. See figure 2 for areas of the United States with high wind potential.
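A sketch of the arithmetic behind these capacity figures follows; the households-per-megawatt conversion is simply the ratio implied by the text, and the turbine count anticipates the 100,000 MW goal discussed in the next paragraph.

```python
# Sketch: arithmetic behind the capacity figures quoted above and below.
installed_mw = 6_740   # U.S. installed wind capacity, end of 2004
added_mw = 4_200       # capacity added January 2000 - December 2004

start_mw = installed_mw - added_mw
print(f"January 2000 base: {start_mw} MW; growth factor: "
      f"{installed_mw / start_mw:.1f}x")   # ~2.7x -> "more than doubled"

# Households served per installed MW, as implied by the 1.5-2.0 million range:
for households in (1.5e6, 2.0e6):
    print(f"{households / installed_mw:.0f} households per MW")
# roughly 220-300 households per MW

# Turbines needed for the 100,000 MW goal discussed below, at 1.5 MW each:
additional_turbines = (100_000 - installed_mw) / 1.5
print(f"additional turbines needed: about {additional_turbines:,.0f}")  # ~62,000
```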
According to DOE, 36 of the 48 continental states have wind resources that would support utility-scale wind power projects (i.e., projects that generate at least 1 MW of electric power annually from 1 or more turbines for sale to a local utility). A DOE goal is for wind power to supply 5 percent of the electricity generated in the United States by 2020; the American Wind Energy Association has a similar goal. To reach this goal, the association estimates that about 100,000 MW of installed capacity will be needed—approximately 15 times the current installed capacity. On the basis of the average size of wind turbines commonly being installed today (1.5 MW), more than 62,000 additional turbines would need to be added to the 16,000 turbines already constructed in the United States to meet such a goal. Most of the wind power development in the United States has occurred in 10 western and midwestern states—California, Colorado, Iowa, Minnesota, New Mexico, Oklahoma, Oregon, Texas, Washington, and Wyoming. In fact, these 10 states have over 90 percent of the total installed wind power capacity nationwide. Only recently have developers begun to build wind energy facilities in the eastern United States. As shown in figure 2, wind power potential in this geographic area is best along mountain ridges, primarily the Appalachian Mountains, and along the coast of the northeastern United States.

Wind power is considered a "green" technology because, unlike fossil fuel power plants, it does not produce harmful emissions, such as carbon dioxide, nitrogen oxides, sulfur dioxide, mercury, and particulate matter, which can pose human health and environmental risks such as acid rain. However, it is now recognized that wind power facilities can adversely affect the environment in other ways, specifically by harming wildlife such as birds and bats. Wind power facilities located in migratory pathways or important habitats may harm the wildlife living in or passing through the area by killing or injuring animals or by disrupting feeding or breeding behaviors. But wind power is not alone in its impacts on wildlife: millions, perhaps billions, of animals are killed every year in the United States through a myriad of human activities. While sources of bat mortality are not as well known, the U.S. Fish and Wildlife Service (FWS) estimates that some of the leading sources of bird mortality, per year, are collisions with building windows (97 million to 976 million bird deaths); collisions with communication towers (4 million to 50 million bird deaths); poisoning from pesticides (at least 72 million bird deaths); and attacks by domestic and feral cats (hundreds of millions of bird deaths). Human activities also result in the destruction or modification of wildlife habitat; habitat loss and fragmentation are leading threats to the continued survival of many species. Recent studies and interviews with experts reveal that the impacts of wind power facilities on birds and other wildlife vary by region and by species. Specifically, studies showing raptor mortality in California and bat mortality in Appalachia have elicited concerns from scientists, environmental groups, and regulators because of the large number of kills in these areas and the potential cumulative impact on some species. Thus far, documented bird and bat mortality from wind power in other parts of the country has not occurred in numbers high enough to raise concerns.
However, gaps in the literature make it difficult to develop definitive conclusions about the impacts of wind power on birds and other wildlife. Notably, only a few studies have been conducted on strategies to address the potential risks wind power facilities pose to wildlife. Our review of the literature and discussions with experts revealed that, thus far, concerns over direct impacts to wildlife from wind power facilities have been concentrated in two geographic areas—northern California and Appalachia. (For a discussion of how we selected these studies, see app. I.) While bird and bat kills have been documented in many locations, biologists are primarily concerned about mortality in these two regions because of the numbers of wildlife killed and the species affected. Wind power facilities in northern California, specifically in the Altamont Pass Wind Resource Area about 50 miles east of San Francisco, have been responsible for the deaths of numerous raptors, or birds of prey, such as hawks and golden eagles, and these deaths have elicited concern from wildlife protection groups, biologists, and regulators. Studies conducted in the last two decades have documented large numbers of raptor deaths in this area; one study in our review found estimates of over 1,000 raptor deaths per year. Such large numbers of raptor kills due to wind power are not seen elsewhere in the United States. A 2001 summary that examined raptor mortality rates from studies in 10 states estimated that over 90 percent of the raptors killed annually in the United States by wind power turbines were killed in California.

Several unique features of the wind resource area at Altamont Pass contribute to the high number of raptor deaths. First, California was the first area to develop wind power in significant numbers and thus has some of the oldest turbines still in operation in the United States. Older turbines produce less power per turbine, so many turbines were needed to produce a given level of energy; newer facilities producing the same amount of energy have far fewer turbines. For example, Altamont Pass has over 5,000 wind turbines—many of which are older models—whereas newer facilities generally have significantly fewer turbines (see figs. 3 and 4). Some experts told us that the sheer number of turbines in Altamont Pass has been a major reason for the high number of fatalities in the area. Second, some scientists believe that the design of older-generation turbines, like those found in Altamont Pass, makes them more lethal to raptors. Specifically, early turbines were mounted on towers 60 feet to 80 feet in height, while today's turbines are mounted on towers 200 feet to 260 feet in height. Experts told us that the older turbines at Altamont Pass have blades that reach lower to the ground and thus can be more hazardous to raptors as they swoop down to catch prey. Experts also reasoned that the relative absence of raptor kills at newer facilities with generally taller turbines supports the notion that these turbines are less lethal to raptors. Third, the location of the wind turbine facilities at Altamont Pass may have contributed to the high number of raptor deaths. Studies show that a high number of raptors pass through the area and that there is an abundance of raptor prey at the base of the turbines. In addition, the location of wind turbines on ridge tops and in canyons may increase the likelihood that raptors will collide with turbines.
Some experts note that one reason other parts of the country may not be experiencing high levels of raptor mortality is that wind developers have used information from Altamont Pass to site new turbines in hopes of avoiding similar situations. Recent studies conducted in the eastern United States in the Appalachian Mountains have found large numbers of bats killed by wind power turbines. A 2004 study conducted in West Virginia estimated that slightly over 2,000 bats were killed during a 7-month study at a location with 44 turbines. More recently, a 2005 report that examined wind resource areas in both West Virginia and Pennsylvania estimated that about 2,000 bats were killed during a much shorter 6-week study period at 64 turbines. Lastly, a study of a small 3-turbine wind facility in Tennessee estimated bat mortality at about 21 bats per turbine, per year, raising concerns about the potential impact on bats if more turbines are built in this area. Various species of bats have been killed at these wind power facilities, and experts are concerned about impacts to bat populations if large numbers of deaths continue. For example, one expert noted that "it is alarming to see the number of bats currently being killed coupled with the proposed number of wind power developments" in these areas. He explained that bats live longer and have lower reproductive rates than birds, and, therefore, bat populations may be more vulnerable to impacts. In addition, there are proposals for hundreds of new wind turbines along the Appalachian Mountains. A recent report from Bat Conservation International estimated that if all ridge-top turbines are approved and mortality continues at its current rate, these turbines might kill tens of thousands of bats in a single season. Although none of the bats killed by wind power to date have been listed as endangered species, FWS—recognizing the seriousness of the problem—has initiated a study with the U.S. Geological Survey to study bat migration and to develop decision tools to provide assistance in identifying locations for wind turbines and communication towers.

Results from studies on bird and bat mortality from wind power conducted in areas other than northern California and Appalachia have not caused the same degree of concern as in these two locations. Our review of studies conducted in areas other than the Appalachian Mountains showed bat fatality rates ranging from 0 to 4.3 bats per turbine, per year—compared with rates as high as 38 bats per turbine over a 6-week study period in the Appalachian Mountains (see app. II). Raptor fatalities outside Altamont Pass ranged from 0 to 0.07 raptors per turbine, per year, whereas rates in Altamont Pass ranged from 0.05 to 0.24. Our review of studies found that overall bird fatalities from wind power ranged from 0 to 7.28 birds per turbine, per year. In addition, a 2004 National Wind Coordinating Committee fact sheet shows that an average of 2.3 birds per turbine, per year, are killed at facilities outside of California. However, it is important to consider the number of turbines and the vulnerability of the species affected when interpreting these rates. For example, the high rate of 7.28 overall bird fatalities per turbine was found at a facility with only 3 wind turbines.
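Because these studies report kills over different turbine counts and time windows, comparing them requires normalizing to a common per-turbine rate. The sketch below does that for the two Appalachian bat studies cited above; the normalization is ours, and note that extrapolating a 6-week migration-season rate to a full year would overstate mortality, since kills are concentrated in that season.

```python
# Sketch: normalize the reported Appalachian bat kill counts to
# per-turbine rates. Counts, turbine numbers, and study lengths are
# as quoted above; the ~7-month study is treated as ~30 weeks.
studies = [
    # (label, bats killed, turbines, study length in weeks)
    ("West Virginia, 2004", 2000, 44, 30),
    ("West Virginia/Pennsylvania, 2005", 2000, 64, 6),
]
for label, kills, turbines, weeks in studies:
    print(f"{label}: {kills / turbines:.0f} bats per turbine over {weeks} weeks")
# West Virginia, 2004: 45 bats per turbine over 30 weeks
# West Virginia/Pennsylvania, 2005: 31 bats per turbine over 6 weeks --
# the same order as the "as high as 38" per-turbine figure quoted above,
# accumulated in a far shorter window.
```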
Because that high rate occurred at a facility with only 3 turbines, the overall impact on bird populations may be minimal if no additional turbines are built in the area; a lower fatality rate, by contrast, could cause greater impacts at a site with many turbines. In addition, comparing study findings can be difficult because researchers may use differing metrics, and many areas of the country remain unstudied with regard to avian and bat impacts from wind power. While interpreting these statistics can be complicated, the experts we spoke with agreed that, outside of California and Appalachia, the research to date has not shown bird or bat kills in alarming numbers at the current level of wind power development. While the studies we reviewed showed relatively low levels of mortality in many locations, there are also indirect impacts to wildlife from wind power facilities. For example, construction of wind power facilities may fragment habitat and disrupt feeding or breeding behaviors. According to FWS, the loss of habitat quantity and quality is the primary cause of declines in most assessed bird populations and many other wildlife species. However, this review focuses on the direct impacts of avian and bat mortality.

While experts told us that the impact of wind power facilities on wildlife is better studied than that of other comparable infrastructure, such as communication towers, important gaps in the research remain. First, relatively few postconstruction monitoring studies have been conducted and made publicly available, and it appears that many wind power facilities and geographic areas in the United States have not been studied at all. For example, a bird advocacy group expressed concern at a recent National Wind Coordinating Committee meeting that most of the wind projects that have been monitored for bird impacts are in the West. The American Wind Energy Association reports that there are hundreds of wind power facilities currently operating elsewhere in the country; however, we were able to locate only 19 postconstruction studies, conducted in 11 states, that assessed direct impacts to birds or bats. Texas, for example, is second only to California in installed wind power capacity, but we were unable to find a single publicly available study investigating bird or bat mortality in that state. The lack of comprehensive data on bird and bat fatalities from wind turbines makes it difficult to make national assessments of the impact of wind turbines on wildlife. A 2001 analysis of studies estimated that wind turbines in the United States cause roughly 33,000 avian deaths per year. However, the authors noted that making projections of the potential magnitude of wind power-related avian fatalities is problematic, in part because of the lack of long-term data. The authors further noted that the data collected at older sites may not be representative of newer facilities with more modern turbine technology. In addition, FWS considers this estimate to be a "minimum" to "conservative" estimate because of problems of data collection and uneven regional representation. Beyond limiting assessments of national impacts, the lack of mortality data also hampers siting decisions for new facilities, because the conclusions of postconstruction studies are often used when making preconstruction predictions about the degree of harm to wildlife likely to result from proposed facilities.
If no local postconstruction studies are available, predictions of future mortality at a proposed site must be based on information from studies conducted in areas that may have different wildlife species, topography, weather conditions, climate, soil types, and vegetative cover. A second important research gap is in understanding what factors increase the chances that turbines will be hazardous to wildlife. For example, it can be difficult to discern, among other things, how the number, location, and type of turbine; the number and type of species in an area; species behavior; topography; and weather affect mortality, and why. Drawing conclusions about the degree of risk posed by certain factors—such as terrain, weather, or type of turbine—is difficult because sites differ in their combination of factors. For example, according to experts, data are inadequate about which turbine types are most hazardous and to which species. This is partly because most wind power facilities use only one turbine type. Therefore, even if one facility proved more hazardous than another, it would be difficult to attribute the difference to turbine type alone because other variables, such as topography or migratory patterns, are also likely to vary among the sites. Additionally, comparisons between studies are difficult because researchers may use different study methodologies. Therefore, even if two sites had similar bird populations, topography, and weather characteristics but different turbines, it would be difficult to isolate the effect of the turbine if the scientists collecting the information used differing methodologies. Altamont Pass, however, has the potential to allow researchers to determine which turbines are more hazardous because it contains many different types of turbines in one place; even this analysis, though, has been complicated by confounding variables. For example, according to experts, at one time it was commonly thought that turbines with lattice towers killed more birds than turbines with tubular towers in Altamont Pass; however, some studies have reached the opposite conclusion. One study noted that although the authors found higher mortality associated with lattice towers, this relationship might be explained by other factors, such as the lattice towers' being in operation more frequently than other towers, including tubular towers, rather than by the difference in tower design. Complicating matters further, some factors may be more hazardous for some species than for others. One study found that red-tailed hawk fatalities occurred more frequently than expected at turbines located on ridgelines than at turbines on hillsides, while the reverse was true for golden eagles, demonstrating the difficulty of understanding interactions between turbines and bird mortality from mortality estimates alone. A third research gap is the lack of complete and definitive information on the interaction of bats with wind turbines. As previously noted, bats have collided with wind turbines in significant numbers in some parts of the United States, but scientists do not have a complete understanding of why these collisions occur. Bats are known to have the ability to echolocate to avoid collisions with objects, and they have been able to avoid colliding with comparable structures, such as meteorological towers. Therefore, their collision with wind turbines remains a mystery.
The few studies that have been conducted show that most of the kills have taken place during the migratory season (July through September), suggesting that migrating bats are involved in most of the fatalities. In addition, one study showed that lower wind speeds were associated with higher fatality rates. However, experts acknowledge that much remains unknown about why bats are attracted to and killed by turbines and about what conditions increase the chances that bats will be killed. One expert noted that very little is still known about bat migration in general and about how bat interactions with turbines are affected by weather patterns. This expert further noted that there has not yet been a full season of monitoring bat mortality from which patterns can be identified. Although scientists still do not know why bats are being killed in large numbers by wind power turbines in some areas, several hypotheses have been offered. One hypothesis is that the lighting on turbines attracts insects, which in turn attract bats, but studies have not demonstrated differences in fatalities between lit and unlit turbines. Other hypotheses include the notions that bats may be investigating wind turbines as potential roosting sites, that open spaces around turbines create favorable foraging habitats, and that migrating bats do not echolocate and thus are less able to avoid collisions. One thing bat experts agree on is the need for more research. In addition to these research gaps regarding bird and bat interactions with turbines, very little is known about bird and bat populations in general, such as their size and migratory pathways. An FWS official told us that data are available regarding the migration routes and habitat needs of only about one-third of the more than 800 bird species that live in or pass through the United States each year. In addition, bat researchers stressed to us that very little is known about the pathways and behavior of migratory bats. This lack of information, among other factors, makes it difficult to assess the cumulative impacts of wind power on species populations. One expert noted that many bird populations are in decline in general and that additional losses due to wind power may exacerbate this trend. However, it is very difficult to attribute a decline in bird populations to wind power specifically or to get good data on overall populations that span international borders. Our literature search identified only one study in the United States that examined the impact of fatalities from wind power on a particular species population—golden eagles—and those results have been described by other scientists as relatively inconclusive, or mixed. Without this kind of information, it can be difficult to determine the appropriate public policy responses to wildlife impacts from wind power. Although there are currently several gaps in the study of wind power's direct impacts on birds and bats, FWS and the U.S. Geological Survey have recently initiated a study of bird and bat migration behaviors to address some of these data gaps. This study will use radar technology to characterize daily and seasonal movements and habitat and landform associations of migrating birds and bats, and it will seek to develop decision support tools to provide assistance in identifying locations for wind turbines and communication towers.
In addition, Congress has appropriated funds for a National Academy of Sciences study on the environmental impacts of wind power development in the Mid-Atlantic Highlands that will include developing criteria for the siting of wind turbines in this area. Finally, the Bats and Wind Energy Cooperative, a partnership of Bat Conservation International, the American Wind Energy Association, FWS, and the National Renewable Energy Laboratory, continues to sponsor research on bats and wind turbines, focusing on acoustic deterrence methods and pre- and postconstruction risk assessment at a planned wind farm in the Appalachian region.

Overall, there is much to be learned about mitigation strategies for reducing the impacts of wind power facilities on birds and bats, and some strategies that once looked promising are now proving ineffective. Specifically, we found that relatively few studies have examined strategies for reducing the potential impacts of wind power on birds and bats. Some of these studies were based on information collected from birds in a laboratory setting, and, therefore, their conclusions still need to be verified by studies at actual wind power facilities. One study examined the idea of addressing motion smear—the inability of birds to see moving blades—by painting turbine blades to make them more visible. This study indicated that color contrast was a critical variable in helping birds see objects like moving turbine blades and recommended painting stripes on blades as a way to test whether this could be an effective deterrent. Some developers adopted this strategy; however, a recent study found that turbines with painted blades were ineffective in reducing bird kills. Another laboratory-based study tested bird reactions to noise and sound pressure and suggested that whistles could make blades more audible to birds while making no measurable contribution to overall noise levels. However, the authors of this study made no predictions about changes in bird flight in response to hearing the noise and noted that field tests would be required to test this hypothesis. Although there have been relatively few laboratory-based experiments on mitigation strategies, some strategies have already been attempted in Altamont Pass. A recent 4-year study conducted by the California Energy Commission in Altamont Pass tested some of the mitigation efforts attempted by industry and suggested possible future mitigation strategies. This study found that some of the strategies adopted by industry, such as perch guards on turbines and rodent control programs that reduce prey availability, were ineffective in reducing kills. Another study compared turbines painted with an ultraviolet reflectant to turbines painted with a nonultraviolet reflectant to see whether one would act as a visual deterrent, but found no evidence of a difference in mortality between the two treatments. While existing scientific research provides less than adequate information on the effectiveness of mitigation strategies, the experts with whom we spoke were hopeful about several strategies on the basis of their field experience. Some of these experts noted that because birds have been found to collide with electrical wires, wind facilities should bury their transmission lines underground and avoid using guy wires on their meteorological towers; such fixes have generally been adopted.
Although some studies have shown that there are no differences in mortality rates between lit and unlit turbines, some experts argue that, regardless, it is best to use low lighting to avoid attracting birds that migrate at night. In addition, researchers recommended that sodium vapor lights never be used at or near wind power facilities because they have commonly been shown to attract birds to other structures. They noted that the largest number of birds killed at one time near wind turbines was found adjacent to sodium lights after a night of dense fog; no fatalities have been discovered near these turbines since the lights were turned off. Some researchers have observed that many bird and bat kills occur during the time of year that has the lowest wind production. For example, most bats are killed during the fall migration season on low-wind nights. Consequently, researchers have suggested turning off some turbines during these times in order to reduce kills. Perhaps most importantly, many experts have noted that using preconstruction studies on wildlife and their habitats can help identify locations for wind turbines that are less likely to have adverse impacts.
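The low-wind curtailment idea amounts to a simple operating rule. The sketch below illustrates one such rule; the 6 meters-per-second cutoff and the July-September nighttime window are hypothetical, since the studies cited here report only an association between low wind speeds and bat kills and do not prescribe thresholds.

```python
# Sketch: a seasonal low-wind curtailment rule of the kind researchers
# suggest above. The wind-speed cutoff and season/night definitions are
# illustrative assumptions, not values taken from the studies.
from datetime import datetime

MIGRATION_MONTHS = {7, 8, 9}   # July through September
CUTOFF_MS = 6.0                # hypothetical wind-speed threshold (m/s)

def should_curtail(now: datetime, wind_speed_ms: float) -> bool:
    """Stop the turbine on low-wind nights during bat migration season."""
    in_season = now.month in MIGRATION_MONTHS
    at_night = now.hour >= 20 or now.hour < 6
    return in_season and at_night and wind_speed_ms < CUTOFF_MS

print(should_curtail(datetime(2005, 8, 15, 23), 4.2))  # True: curtail
print(should_curtail(datetime(2005, 8, 15, 23), 9.0))  # False: windy night
print(should_curtail(datetime(2005, 2, 15, 23), 4.2))  # False: off-season
```

Because kills cluster at times when wind production is lowest anyway, a rule of this kind would presumably forgo relatively little generation, though the studies cited here do not quantify that trade-off.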
Since most wind power development has occurred on nonfederal land, regulating wind power facilities is largely a state and local government responsibility. In the six states we reviewed, wind power development is subject to local-level processes, state-level processes, or a combination of the two. For example, in three of the six states, local governments regulate the development of wind power and generally require wind developers to adhere to local zoning ordinances and to obtain special use permits before construction. The federal role in regulating wind power development is limited to projects occurring on federal lands or those that have some form of federal involvement, such as projects that receive federal funding; to date, there have been relatively few wind power projects on federal land. In these cases, wind power projects must comply with federal laws as well as any relevant state and local laws. State and/or local governments regulate the development and operation of wind power facilities on nonfederal lands. The primary permitting jurisdiction for wind power facilities in many states is a local planning commission, zoning board, city council, or county board of supervisors or commissioners. Typically, these local jurisdictional entities regulate wind projects under zoning ordinances and building codes. In some states, one or more state agencies play a role in regulating wind power development, such as natural resource and environmental protection agencies, state historic preservation offices, industrial development and regulation agencies, public utility commissions, or siting boards. In addition, some states have environmental laws that impose requirements on many types of construction and development, including wind power, that state and local agencies must follow. The regulatory scheme for wind power in the six states we reviewed included all of these scenarios (see table 1).

In the six states we reviewed, we found that approval for the construction and operation of a wind power facility is typically provided in permits that are often referred to as site, special use, or conditional use permits or certificates. Such permits often include various requirements, such as "setback" provisions—which stipulate how far wind power turbines must be from other structures, such as roads and residences—and decommissioning requirements that are intended to ensure that once a wind power facility ceases operation, its structures are removed and the landscape is restored according to a specific standard. State and local regulations may also require postconstruction monitoring studies to assess a facility's impact on the environment; in one state we reviewed, facilities are required to submit periodic reports on issues related to their operation and impact on the surrounding area. In most of the six states we reviewed, state and local regulations related to wind power are evolving as the industry develops, because government agencies realized that their existing authorities did not address wind power. For example, when wind power began to emerge in Minnesota, an advisory task force held public meetings to determine how to proceed in permitting development; in part because of concerns raised by counties during these meetings, responsibility for permitting larger facilities was given to the state. In addition, West Virginia finalized new regulations for electric-generating facilities in May 2005 that include provisions specific to wind power facilities; prior to this, the state made decisions on a case-by-case basis. Similarly, the Pennsylvania Game Commission is developing a policy for wind power development on its lands in response to private interest in promoting renewable energy sources on state property, and officials with the state's Department of Environmental Protection told us that they are examining a number of options, including developing statewide rules and model ordinances that could be adopted by local authorities. Some of the state and local regulatory agencies we reviewed had little experience or expertise in addressing environmental and wildlife impacts from wind power. For example, officials in West Virginia told us that they did not have the expertise to evaluate wildlife impacts and review studies prior to construction, although such studies are required; instead, they said they rely on the public comment period while permits are pending for concerns to be identified by others, such as FWS and the state Division of Natural Resources. In addition, Alameda County officials in California told us that they did not have the expertise to assess the impacts of wind facility construction but rely on technical consultants during the permitting stage, and that they are planning to form a technical advisory committee for assistance with postapproval monitoring. In some of the states we reviewed, state agencies were conducting outreach efforts with local governments, since wind power development is still a relatively new industry for regulators. These efforts typically focus on educating local regulators about the issues that are often encountered during wind power development and about how permitting can be handled; they may also include providing sample zoning ordinances and permits.

California had the most installed wind power in the country, with 2,096 MW of generating capacity as of April 2005 and an additional planned capacity of 365 MW. California was the first state in which large wind farms were developed, beginning in the early 1980s.
It is also one of the few states with significant wind power development on federal land, with over 250 MW on land owned by the Bureau of Land Management (BLM). Aside from the facilities on BLM land, the state relies on local governments to regulate wind power. In addition to the local permitting process, the California Environmental Quality Act requires all state and local government agencies to assess the environmental impacts of proposed actions they undertake or permit. This law requires agencies to identify the significant environmental effects of a proposed action and to avoid or mitigate those effects, where feasible.

We met with officials from Alameda County and Contra Costa County, which are home to the Altamont Pass Wind Resource Area—at one time the largest wind energy facility in the world. In both counties, local land use ordinances allow wind power development on agricultural lands. These counties originally issued conditional or land use permits to various wind power developers in the 1980s that contained approval conditions, including requirements for setbacks from property lines and noise limits. As previously discussed, the Altamont Pass Wind Resource Area was subsequently found to be responsible for the deaths of numerous raptor species. The counties are currently renewing or amending some of the permits for facilities in this area and will add permit conditions in an attempt to reduce avian mortality. Alameda County officials were working with various federal and state agencies, environmental groups, and wind energy companies to agree on specific permit conditions. At the time of this report, Alameda County had recently approved a plan that is aimed at reducing bird deaths at Altamont Pass by removing some existing turbines, turning off selected turbines at certain times, implementing other habitat modification and compensation measures, and gradually replacing existing turbines with newer turbines. In addition, Contra Costa County had completed the permitting for a wind power facility that included a number of conditions to reduce avian mortality.

Minnesota had 615 MW of installed wind generating capacity as of April 2005 and an additional planned capacity of 213 MW. Wind power development in Minnesota is subject to either local or state permitting procedures, depending on the size of the project. Local governments generally issue conditional use permits or building permits to wind power developers for facilities under 5 MW. We spoke with officials in Pipestone County, which was the first in the state to adopt a wind power ordinance. This ordinance focuses mainly on setbacks and decommissioning requirements. In southwestern Minnesota—which includes Pipestone County and most of the wind power development in the state—a 14-county renewable energy board is working to adopt a “model” wind power permitting ordinance that would provide uniformity for regulating development in the region. Officials cited two factors in pursuing such guidance: the recognition that development is likely to occur under the 5 MW threshold for state permitting and the expectation that wind power developers would benefit from uniform regulations.
Between 1995 and the first half of 2005, the Minnesota Environmental Quality Board—composed of 1 representative from the governor’s office, 5 citizens, and the heads of 10 state agencies—was responsible for regulating large wind energy systems (those of 5 MW or more), studying environmental issues, and ensuring state agency compliance with state environmental policy. Effective July 1, 2005, authority for permitting these large wind energy systems was transferred to the Minnesota Public Utilities Commission. The commission requires, among other things, an analysis of the proposed facility’s potential environmental and wildlife impacts, proposed mitigative measures, and any adverse environmental effects that cannot be avoided. Rather than requiring individual wind developers to conduct their own assessments of wildlife impacts, Minnesota took a different approach. Since much of the wind power development is concentrated in the southwestern part of the state, the state determined that it would be more efficient to conduct one large-scale study, rather than requiring each developer to conduct individual studies. Thus, the state required wind developers to participate in a 4-year avian impact study at a cost of about $800,000 as well as a subsequent 2-year bat study. The costs for these studies were charged back to individual wind developers on the basis of the number of megawatts they built or permitted within a specified time frame. The studies concluded that the impacts to birds and bats from wind power are minimal. Therefore, on the basis of these results, state and local agencies in Minnesota are not requiring postconstruction studies for wind power development in this portion of the state.

New York had three operating wind power facilities, with 49 MW of installed wind generating capacity as of April 2005. An additional 350 MW of wind power capacity is planned for the state. According to state officials, local governments permit the development of wind power in the state using their zoning authorities. In addition to this local permitting, the state has an environmental quality review act that requires all state and local government agencies to assess the environmental impacts of proposed actions, including issuing permits to wind power facilities. This law requires that an environmental impact statement be prepared if a proposed action is determined to have a potentially significant adverse environmental impact. Because wind power is still new to the state and there are a significant number of proposed facilities, a state agency focused on promoting energy development is beginning a program for educating local communities about regulating wind power. This program includes examples of zoning ordinances that have been used in other counties.

We met with officials from the Town of Fenner—in north-central New York—which has the largest wind power facility in the state. On the basis of complaints about noise from the first facility permitted by the town, the local planning board now requires that turbines be located a certain distance from residences. To comply with the state’s environmental law, the town conducted an environmental assessment to determine the potential impacts of the proposed facility and determined that the project would not have any significant adverse environmental impacts or pose a significant risk to birds.
However, elsewhere in New York, approval of one wind power project is under review because of concerns expressed by environmental groups and the state environmental and conservation agency about potential impacts to migratory birds.

Oregon had five large wind projects, with a total of 263 MW of installed wind power generating capacity as of April 2005 (see fig. 5). Several new wind projects and expansions are under way or being planned that would take total capacity in Oregon to more than 700 MW. As in Minnesota, wind power development in Oregon is subject to either local or state permitting procedures, depending on the size of the project. Local governments issue conditional use permits for facilities capable of generating up to 105 MW peak capacity. For example, in Sherman County, the planning commission approved a 24 MW wind power project near Klondike in north-central Oregon. Under its zoning authority, the county attached various conditions to the project’s permit, including a postconstruction avian study and decommissioning and removal requirements. If projects exceed 105 MW peak capacity, they are permitted by the Oregon Energy Facility Siting Council, which makes decisions about issuing site certificates for energy facilities. The siting council is a seven-member citizen commission that is appointed by the governor. Wind power projects that are subject to the council’s jurisdiction must comply with the council’s standards and applicable statutes. Some of the standards are specific to wind power, such as design and construction requirements to reduce visual and environmental impacts. The council also ensures that wind power facilities are constructed and operated in a manner consistent with state rules, such as state fish and wildlife habitat mitigation goals and standards, and local agency ordinances. In addition, regulations protect against impacts on the surrounding community by requiring that minimal lighting be used to reduce visual impacts, and protect some bird species by requiring that developers avoid creating artificial habitat for raptors or raptor prey. Also in Oregon, energy development—including wind power—must not adversely impact scenic and aesthetic values and is prohibited in certain areas, such as state parks.

Pennsylvania had 129 MW of installed wind generating capacity as of April 2005 and applications for an additional 145 MW to be developed (see fig. 6). In Pennsylvania, wind power is regulated by local governments; no state agency has the authority to specifically regulate wind power development. For example, in Somerset County, which is home to the first wind power facility in the state, the county’s planning commission regulates wind power development through an ordinance that allows for subdividing existing land. This ordinance contains requirements for setbacks and decommissioning. Some county and state officials have suggested that the state should provide a consistent framework for wind power development. The state, through its Pennsylvania Wind Working Group, is currently discussing whether there should be uniform state-level siting guidelines or regulations for wind power development. Pennsylvania was the only state of the six we reviewed that did not have state-level requirements for environmental assessments.
However, one state official told us that many developers have done some environmental studies—generally including wildlife, noise, and protection of scenic vistas (i.e., viewshed)—in an attempt to head off criticism or opposition to a proposed project.

West Virginia had one operating wind power facility, with 66 MW of installed wind power generating capacity and a planned additional capacity of 300 MW for the state (see fig. 7). The state’s Public Service Commission has been the only agency involved in regulating wind power to date, although state officials noted that local governments could get involved through their zoning authorities. Prior to 2005, West Virginia permitted construction and operation of wind power facilities under laws and regulations designed to regulate utilities providing electrical service directly to its citizens. Wind power facilities are wholesale generators and do not provide service to consumers, and according to commission officials, several provisions of these regulations were not relevant to wind power facilities. As a result, in 2003, the state amended the legislation to specifically address the permitting of wholesale electric generators, such as wind power facilities. The state approved construction of its two wind power facilities under the regulations in place before the legislation was amended; one of these facilities has yet to be constructed. During the public comment periods for these facilities, concerns were raised regarding potential impacts to wildlife. As a result, certain conditions were required of the developers, such as prohibiting turbines in certain locations and requiring postconstruction wildlife studies. In May 2005, the state finalized new regulations for wholesale electric-generating facilities that include provisions specific to wind power facilities. For permitting wind power facilities, West Virginia regulations now require spring and fall avian migration studies, avian and bat risk assessments, and avian and bat lighting studies.

The federal government’s role in regulating wind power development is limited to projects occurring on federal lands or projects that have some form of federal involvement. While the Federal Energy Regulatory Commission regulates the interstate transmission of electricity, natural gas, and oil, it does not approve the physical construction of electric generation, transmission, or distribution facilities; such approval is left to state and local governments. Certain standards issued by the Federal Aviation Administration apply to wind power facilities and other tall structures on all lands. These standards are intended to protect aircraft and specify the type of lighting that should be used for structures of a certain height. Since the majority of wind development to date has been on nonfederal land or has not required federal funding or permits, the federal government has had a limited role in regulating wind power facilities. In those cases where federal agencies do regulate wind power, projects must comply both with state and local requirements and with any applicable federal law. At a minimum, these laws will include the National Environmental Policy Act and the Endangered Species Act. These laws often require preconstruction studies or analyses of proposed projects, and possibly project modifications to avoid adverse environmental effects.
For example, if the development of a proposed wind power project on federal land could impact wildlife habitat and/or species protected under the Endangered Species Act, permitting of the project would involve coordination and consultation with FWS and/or the National Marine Fisheries Service to determine the potential harm to species and the steps that may be necessary to avoid or offset the harm.

To date, BLM has been the only federal agency with wind energy production on its lands, with about 500 MW of installed wind power capacity. This wind energy development is located in Southern California in the San Gorgonio Pass and Tehachapi Pass areas, and in the Foote Creek Rim and Simpson Ridge areas of Wyoming. According to BLM officials, as of June 2005, they had authorized 88 applications for wind energy development on their land and had 68 pending applications—most of which are in California and Nevada. Energy development on BLM-administered lands is regulated through the bureau’s process for granting private parties access to federal lands, which is referred to as granting a “right-of-way authorization.” BLM’s Interim Wind Energy Development Policy establishes the requirements for granting these authorizations to wind energy facilities. This policy requires that all proposed facilities conduct the necessary assessments and analyses required by the National Environmental Policy Act, the Endangered Species Act, and other appropriate laws. In one case, the locations of some wind power turbines were changed because preconstruction studies identified potential impacts to avian species. Because of an increased focus on developing energy sources on public lands, BLM has proposed revising its interim policy by developing a wind energy development program that would establish comprehensive policies and best management practices for addressing wind energy development. As a part of this effort, BLM issued a programmatic environmental impact statement in June 2005 that assesses the social, environmental, and economic impacts of wind power development on BLM land. This document also identifies best management practices for ensuring that the impacts of wind energy development on BLM lands are kept to a minimum. While subsequent proposed wind power facilities will still need to conduct some environmental assessments, they can rely on BLM’s programmatic assessment for much of the needed analysis. BLM hopes that the availability of this assessment will enable wind power development to proceed more quickly on its lands, assuming that such development complies with applicable requirements.

As with other activities, federal and state laws afford wildlife some protection from the impacts of wind power. Three federal laws—the Migratory Bird Treaty Act, the Bald and Golden Eagle Protection Act, and the Endangered Species Act—generally forbid harm to various species of wildlife. While each of the laws allows some exceptions to this, only the Endangered Species Act includes provisions that would permit a wind power facility to kill a protected species under certain circumstances. While wildlife mortality events have occurred at wind power facilities, the federal government has not prosecuted any cases against wind power companies under these wildlife laws, preferring instead to encourage companies to take mitigation steps to avoid future harm. Regarding state wildlife protections, all of the six states we reviewed had statutes that can be used to protect some wildlife from wind power impacts.
However, as with FWS, none of the states has taken prosecutorial action against wind power facilities where mortalities have occurred.

The primary federal regulatory framework for protecting wildlife from the impacts of wind power includes three laws—the Migratory Bird Treaty Act, the Bald and Golden Eagle Protection Act, and the Endangered Species Act. (See table 2.) FWS is primarily responsible for ensuring the implementation and enforcement of these laws. In general, these laws prohibit various actions that are deemed harmful to certain species. For example, each law prohibits killing or “taking” a protected species, unless done under circumstances that are expressly allowed by statute and authorized via issuance of a federal permit. The Endangered Species Act may also prohibit actions that harm a protected species’ habitat. In addition, each federal agency that takes actions that have or are likely to have negative impacts on migratory bird populations is directed by Executive Order 13186, “Responsibilities of Federal Agencies to Protect Migratory Birds,” to work with FWS to develop memorandums of understanding to conserve those species. While the executive order was signed on January 10, 2001, no memorandums have yet been signed. Wildlife species that fall outside the scope of these three laws, such as many species of bats, are generally not protected under federal law. However, FWS is not only responsible for ensuring the survival of species protected by specific laws, but also for conserving and protecting all wildlife.

All three of the federal wildlife protection laws prohibit most instances of “take,” although each law provides for some exceptions, such as for scientific purposes. The Endangered Species Act is the least restrictive of these laws in that it authorizes FWS to permit some activities that take a protected species as long as the take meets several requirements, including a requirement that the take be incidental to an otherwise legal activity. Wind power developers may seek an incidental take permit under this act for facilities sited on private land or for projects where no federal funding is used and no federal permit is required. The Migratory Bird Treaty Act and the Bald and Golden Eagle Protection Act also allow permits for take, but incidental take of migratory birds is not allowed. Under all three statutes, unauthorized takings may be penalized, even if the offender had no intent to harm a protected species.

Although not required by these federal laws, in some cases, state or local entities that regulate wind power, or wind power developers themselves, will consult with FWS for information on protected species or advice on how to ensure that wind power facilities will not harm wildlife. For example, in the Altamont Pass Wind Resource Area, Alameda County officials and the companies operating wind facilities there have asked FWS for technical assistance related to renewing permits for existing wind power facilities. FWS officials told us that their technical assistance in Altamont Pass is aimed at avoiding or minimizing potential impacts to threatened or endangered species under the Endangered Species Act. In addition, FWS officials from the New York field office told us that they are asked to provide input on wind power proposals during the state’s environmental review process. These officials noted that they will likely not be able to review all of the wind power development proposals in the state due to staffing constraints.
Similarly, FWS officials in five of the six states we reviewed told us that they have not conducted outreach to state or local regulators to inform them of the potential for wildlife impacts from wind power, primarily because of workload constraints. If state and local regulators do not consult with FWS during the regulatory process, it can be difficult for FWS to encourage actions that might reduce wildlife deaths before wind turbines are sited.

Although FWS investigates all “take” of federal trust species, the government has elected not to prosecute wind energy companies for violations of wildlife laws at this time. In most of the states we reviewed, there were relatively few law enforcement officials, and they told us that they often had higher-priority violations of federal wildlife laws than mortality events due to wind power, particularly given the relatively low levels of mortality that have occurred in most wind power locations. In West Virginia, the agent-in-charge told us that most of his time is spent on the commercialization of wildlife, such as the illegal import and export and interstate commerce of protected species; illegal hunting is also a major problem, particularly for bears and eagles. FWS law enforcement officials in all of the six states we reviewed told us that in cases of violations, they prefer to work cooperatively with the owners of wind power facilities to try to get them to take voluntary actions to address impacts on wildlife, rather than pursuing prosecution; however, other cases of wildlife violations, such as illegal trade in protected species, are pursued through prosecution.

FWS has been investigating and monitoring avian mortality at Altamont Pass for nearly 20 years, including the mortality of many protected species, such as golden eagles and other raptors. During that time, FWS has opened investigations and tried to work with the owners of wind power facilities to reduce the level of mortality. In the earlier years, some avian mortality was due to electrocutions along power lines. FWS had been working with electrical utility companies to resolve this problem elsewhere, and several relatively easy “fixes” were known to reduce electrocutions. As a result of official correspondence and conversations between FWS and company officials, many companies implemented these fixes, and avian mortality due to electrocutions has been reduced. However, large numbers of birds, particularly raptors, were still being killed in collisions with wind turbines. On several occasions, FWS expressed concern about these mortalities to wind power companies and Alameda County—the county government with the most wind power development in California. In response, Alameda County and some wind power companies have conducted avian monitoring studies and tested several mitigation measures, including painting turbine blades, installing perch guards on lattice-work towers, and conducting rodent control. However, these actions appear to have had no significant impact on avian mortality. Since January 2004, the wind power companies have worked together to develop an adaptive management plan for reducing avian mortality at Altamont Pass. The plan contains various mitigation measures, such as (1) removing old turbines and replacing them with fewer, new turbines and (2) implementing a partial seasonal shutdown of turbines.
Over the past 6 years, FWS has referred about 50 instances of golden eagles killed by 30 different companies in Altamont Pass either to the Interior Solicitor’s office for civil prosecution or to the Department of Justice for criminal prosecution. Officials noted that, in general, prosecutions by both the Departments of the Interior and Justice focus on companies that kill birds with disregard for the consequences of their actions and for the law, especially when conservation measures are available but have not been implemented. Despite the recurring nature of the avian mortality in Altamont Pass and concerns from federal, state, and local officials, no prosecutions pursuant to federal wildlife laws have been brought against any wind power companies. Justice has not pursued prosecution in these cases, although it currently has an open investigation of avian mortality in Altamont Pass. As a matter of policy, Justice does not discuss the reasons behind specific case declinations, nor does it typically confirm or deny the existence of potential or actual investigations. However, Justice officials told us that, in general, when deciding to prosecute a case criminally, they consider a number of factors, including the history of civil or administrative enforcement, the evidence of criminal intent, and what steps have been taken to avoid future violations. Regarding the matters that FWS referred for civil enforcement, Interior’s regional solicitor has also not pursued prosecution in any of these cases. Interior’s Office of the Solicitor San Francisco field office declined to pursue the most recent civil referrals because Justice agreed to review turbine mortalities for possible criminal prosecution.

Some citizen groups remain concerned about the lack of enforcement of federal and state wildlife protections. For example, in November 2004, the Center for Biological Diversity filed a lawsuit against the wind power companies in Altamont Pass to seek restitution for the killing of raptors.

In addition to the avian mortalities at Altamont Pass, significant wildlife mortality also occurred at wind power locations in the Appalachian Mountains of West Virginia and Pennsylvania in 2003 and 2004. FWS has reviewed the high numbers of bat kills at these locations; however, these bat species are not protected under federal law. Several studies have been completed or are under way in these regions to better determine the potential causes of the mortality events and how future events might be mitigated. The FWS law enforcement agent-in-charge in West Virginia told us that he has contacted wind power developers of some of the proposed facilities in the state about potential violations of federal wildlife laws should an endangered bat or other protected species be killed. The agent said that he prefers to have early involvement with wind power facilities, rather than wait for violations to occur.

FWS law enforcement officials told us that the way they have handled avian mortalities at wind power facilities is similar to how they deal with wildlife mortality caused by other industries. These officials explained that FWS recognizes that man-made structures will generally result in some level of unavoidable incidental take of wildlife and, as a result, FWS reserves a level of “enforcement discretion” in determining whether to pursue a violation of federal wildlife law.
Law enforcement officials told us that before FWS pursues civil or criminal penalties, the agency prefers to work with a company to encourage it to take mitigation and conservation steps to avoid future harm. If a company shows a good-faith effort to reduce impacts, FWS will likely not refer such a case for prosecution. If, however, a company repeatedly refuses to take steps suggested by FWS, officials said they are likely to refer it for prosecution.

Work that FWS has done with the electric power industry illustrates this approach to resolving impacts to wildlife. FWS began working with the electric power industry in the early 1980s to reduce significant avian mortality due to collisions with and electrocutions at power lines, particularly mortality events involving eagles and other large birds. Through investigations of avian mortality at power lines and conversations with individual companies, solutions were identified that reduced mortality events. Because these solutions were relatively inexpensive, generally easy to install, and—on the basis of scientific testing—known to work, FWS law enforcement officials expected other electric power line companies to install them. According to law enforcement officials, the threat of a potential conviction under the Migratory Bird Treaty Act or the Bald and Golden Eagle Protection Act was generally enough to get companies to voluntarily install the fixes without FWS prosecuting them. However, by the late 1980s, some electric companies were aware of mortalities due to electrocutions but were not taking actions to resolve the causes. In 1998, the federal government charged an electric utility cooperative—the Moon Lake Electric Association in Colorado and Utah—with criminal violations of these two laws. To date, this is the only instance of a federal criminal prosecution of an electric power line company under any of the three federal wildlife protection laws. Civil cases have been filed and out-of-court agreements have been reached with other electric utilities for similar cases of wildlife mortalities.

Even though FWS generally does not have a direct role in determining whether and how wind power facilities are permitted, FWS has been involved for about 20 years with the wind power industry to help avoid and minimize impacts to wildlife from wind power development. FWS’s work has been in three main areas: participating on a national wind working group, participating in technical workshops, and issuing guidance.

An FWS senior management official has been a member of the National Wind Coordinating Committee since 1997. The committee’s wildlife workgroup serves as an advisory group for national research on wind-avian issues and a forum for defining, discussing, and addressing wind power-wildlife interaction issues. The workgroup has facilitated five national avian-wind power planning workshops to define needed research and explore current issues. The most recent workshop also included discussions of bat-wind turbine interactions. In addition, the working group released a report in December 1999, Studying Wind Energy/Bird Interaction: A Guidance Document, that includes metrics and methods for determining or monitoring potential impacts on birds at existing and proposed wind energy sites. FWS officials have participated in industry-sponsored workshops and conferences.
For example, a senior FWS official presented information on cumulative impacts on wildlife from wind power at a 2004 workshop cosponsored by the American Wind Energy Association and the American Bird Conservancy. Another FWS official presented information on the agency’s experience and expectations for regional wildlife issues at a national workshop on wind power siting sponsored by the wind association. FWS also helped to sponsor and organize, and participated in, a 2004 bats and wind power technical workshop attended by both wind industry representatives and researchers. As a result, FWS was instrumental in establishing the Bats and Wind Energy Cooperative discussed elsewhere in this report.

In July 2003, in an effort to inform wind power developers about the potential impacts to wildlife and encourage them to take mitigating actions before construction, FWS issued interim voluntary guidelines for industry to use in developing new projects. FWS developed the interim guidelines in response to the Department of the Interior’s push to expand renewable energy development on public lands. The wind power interim guidelines are intended to assist FWS staff in providing technical assistance to the wind energy industry to avoid or minimize impacts to wildlife and their habitats through (1) proper evaluation of potential wind energy development sites, (2) proper location and design of turbines, and (3) pre- and postconstruction research and monitoring to identify and assess impacts to wildlife. The voluntary guidelines were open for public comment for a 2-year period that ended on July 10, 2005. At the time of this report, FWS had received numerous comments from the wind industry on the guidelines. In general, industry representatives thought that the guidelines were overly restrictive—to a degree not supported by the relative risk that wind power development poses to wildlife compared with other sources of mortality. FWS also had received comments from other groups—such as the Ripley Hawk Watch, the Clean Energy States Alliance, the Humane Society of the United States, the Massachusetts and Pennsylvania Audubon, the American Bird Conservancy, Defenders of Wildlife, and the Chautauqua County Environmental Management Council—that were generally in support of the guidance or recommended that it be put into regulation. BLM also provided comments and expressed some concerns over the review process outlined in the guidelines. FWS will be reviewing and incorporating the public, industry, and agency comments received on the interim guidelines as appropriate to revise and improve them, and will solicit additional public input before disseminating a final version. In addition, FWS recently began developing a template for a letter to be sent to wind power project applicants to alert them to federal wildlife protection laws, FWS’s interim guidance, and FWS’s role in protecting wildlife. FWS officials told us that they hope the letter will assist developers in making informed decisions regarding site selection, project design, and compliance with applicable laws. The availability of a ready-to-use template is important because most field officials told us that working with the wind power industry is just one of many responsibilities for FWS offices that often do not have enough staff for their workloads.
Field officials also noted that if wind power developers, their consultants, or state or local regulatory agencies do not contact them, they may not know about wind power projects until there is a problem with an operating facility.

Although federal jurisdiction for migratory birds has not been delegated to the states and primary responsibility for the protection of these birds resides with Interior, all states we reviewed had additional wildlife protections. Responsibility for protecting species and implementing wildlife laws and regulations is typically found in a state’s natural resource protection agency. In some states, however, responsibility is assigned according to the type of species addressed. For example, in some states, agriculture departments address plant issues, while in other states, fish and boat commissions address fish, amphibian, and reptile issues; in these cases, wildlife agencies typically address the remaining species. In all six states, the most common laws related to wildlife protection—and likely the most frequently used wildlife laws—are those that govern hunting and fishing. These laws and regulations may include limits on the type and number of species that can be killed and the manner in which they can be taken. In addition to identifying the species that can be hunted or fished, the six states we reviewed identify specific species that are at risk of extinction or extirpation in the state as threatened or endangered. These states also identify “species of concern” or rare species. Such species are identified as a way to provide an early warning signal for species that are not yet endangered or threatened, but could become so in the future.

All of the six states we reviewed have laws that provide at least some degree of protection for species that are at risk of extinction or extirpation in their state. These laws generally protect more species than the federal Endangered Species Act does, although the protections they provide may not be as extensive. In the five states that have specific protections, protection is provided through prohibitions on taking a protected species. In some cases, these protections are only applicable under certain circumstances. For example, in Oregon, protections apply only to state actions or on state-owned or -managed lands. All of the state laws or regulations that include take prohibitions also include exceptions for when permits can be issued to allow the take to occur. Such permits are issued according to prescribed conditions or on a case-by-case basis. Two of the six states also provide protections for habitat. In West Virginia, the primary protection for wildlife, aside from hunting and fishing regulations, is a prohibition on the commercial sale of wildlife and specific protection for bald and golden eagles.

Most of the states’ wildlife protection laws for threatened and endangered species include enforcement provisions. In some cases, these laws identify violations as misdemeanor crimes. We found that, consistent with FWS law enforcement’s approach to wind power, state agencies had not taken any prosecutorial actions in response to wildlife mortalities at wind power facilities. Instead, many state officials told us that they prefer—like FWS—to work with developers to try to identify solutions to the causes of mortality.
For example, in Minnesota, after impacts to native prairie grass caused by a wind power facility were discovered, the state natural resource agency required the facility to purchase additional habitat elsewhere to compensate for the loss. In California, Alameda County has worked with wind power facilities and others, and recently approved a plan that is aimed at reducing bird deaths at Altamont Pass by having wind power companies turn off selected turbines at certain times and replace some turbines with newer turbines.

State natural heritage programs serve as key sources of information on wildlife for federal and state wildlife protection agencies. All six of the states we reviewed have natural heritage programs that manage information on natural resources, including threatened and endangered species (all 50 states have such programs). These programs are part of an international effort to gather and share information on biological resources. This effort has slightly different designations and criteria for identifying imperiled species and habitat from those of the federal Endangered Species Act. In five of the states we reviewed, the natural heritage program is run by the state’s natural resource agency; in the sixth state, Oregon, it is run by a university. Although West Virginia does not have a state endangered species law and protects only bald and golden eagles, it does identify other imperiled species through its natural heritage program. State natural resource agencies—which typically house the natural heritage programs—are sometimes consulted by a state or local wind power regulator or a wind power developer during the permitting process for help in identifying potentially sensitive species or concerns about possible impacts to wildlife in general. For example, staff from West Virginia’s natural resources agency were involved in reviewing wildlife monitoring studies conducted by the first wind power facility in the state. During the consultation process on another proposed facility in the state, agency staff requested that certain studies be conducted because of concerns about impacts on bat populations. Similarly, in Minnesota, natural resource agency staff requested changes in the location, construction, and operation of certain proposed wind power turbines through the state’s environmental review process. However, the process by which regulators or wind power developers consult with natural resource agency staff on wildlife is often informal and is not necessarily required by states’ species protections or by the laws and regulations used to permit wind power.

In the context of other sources of avian mortalities, it does not appear that wind power is responsible for a significant number of bird deaths. While less is known about bat mortality from wind power relative to other sources, significant bat mortality from wind power has occurred in Appalachia. However, much work remains before scientists have a clear understanding of the true impacts to wildlife from wind power. Scientists, in particular, are concerned about the potential cumulative impacts of wind power on species populations if the industry expands as expected. Such concerns may be well-founded because significant development is proposed in areas that contain large numbers of species or are believed to be migratory flyways.
Concerns are compounded by the fact that the regulation of wind power varies from location to location and some state and local regulatory agencies we reviewed had little experience or expertise in addressing the environmental and wildlife impacts from wind power. In addition, given the relatively narrow regulatory scope of state and local agencies, it appears that when new wind power facilities are permitted, no one is considering the impacts of wind power on a regional or “ecosystem” scale—a scale that often spans governmental jurisdictions. Given its responsibility for protecting wildlife, FWS is the appropriate agency for such a task and in fact does monitor the status of species populations, to the extent possible. However, because wildlife, federally protected birds in particular, face a multitude of threats, many of which are better understood than wind power, FWS officials told us that they generally spend a very small portion of their time assessing the impacts from wind power. Nonetheless, FWS has taken some steps to reach out to the wind power industry by, among other things, issuing voluntary guidelines to encourage conservation and mitigation actions at new wind power facilities. In addition, FWS and the U.S. Geological Survey are initiating some studies to capture data on migratory flyways to help determine where the most potential harm from wind power might occur and to gather data for use in assessing wind power’s cumulative impacts on species. Although these are valuable steps in educating industry and improving science, FWS has conducted only limited outreach to state and local regulators about minimizing impacts from wind power on wildlife and about species that may be particularly vulnerable to impacts from wind power. Such outreach is important because these are the entities closest to the day-to-day decisions regarding where wind power will be allowed on nonfederal land.

Given the potential for future cumulative impacts to wildlife species due to wind power and the limited expertise or experience that local and state regulators may have in this area, we recommend that the Secretary of the Interior direct the Director of FWS to develop consistent communication for state and local wind power regulators. This communication should alert regulators to (1) the potential wildlife impacts that can result from wind power development; (2) the various resources that are available to help them make decisions about permitting such facilities, including FWS state offices, states’ natural resource agencies, and FWS’s voluntary interim guidelines—and any subsequent revisions—on avoiding and minimizing wildlife impacts from wind turbines; and (3) any additional information that FWS deems appropriate.

We provided copies of our draft report to the Department of the Interior and received written comments. (See app. III for the full text of the comments received and our responses.) Interior officials stated that they generally agree with our findings and our recommendation in the report. We also sent portions of the report to state and local regulators and state wildlife protection agencies. Many of these entities provided technical comments, which we incorporated as appropriate. Interior also provided technical comments, which we incorporated where appropriate. Interior officials agreed for the most part with our recommendation to develop consistent communication to deliver to state and local wind power regulators.
However, they stated that because the comment period on the FWS voluntary interim guidelines has closed and final guidelines have yet to be developed, it would be inappropriate to include these in such communication. Because FWS is currently disseminating the voluntary interim guidelines on wind power to its field offices to share with regulators and developers, however, we believe that it is appropriate to include reference to this document in communications to local and state regulators. As Interior noted, these voluntary guidelines are currently undergoing review and revision. Therefore, it would be appropriate to draw attention to this fact in any such communication and to provide information about how the most current version might be accessed.

As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Secretary of the Interior, as well as to appropriate congressional committees and other interested Members of Congress. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have questions about this report, please contact me at (202) 512-3841. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV.

On the basis of a June 22, 2004, request from the Ranking Democratic Members of the House Resources Committee and the House Appropriations Subcommittee on Science, the Departments of State, Justice, and Commerce, and Related Agencies, and of subsequent discussions with their staffs, we reviewed wind energy development and impacts on wildlife. Specifically, we assessed (1) what available studies and experts have reported about the impacts of wind power facilities on wildlife in the United States and what can be done to mitigate or prevent such impacts, (2) the roles and responsibilities of government agencies in regulating wind power facilities, and (3) the roles and responsibilities of government agencies in protecting wildlife from the risks posed by wind power facilities.

To determine what available studies and experts have reported about the direct impacts of wind power facilities on wildlife, we reviewed scientific studies and reports on the subject that were conducted by government agencies, industry, and academics. Our review focused on wildlife mortality as opposed to indirect impacts, which include habitat modification and disruption of feeding or breeding behaviors due to wind power facilities. We used several criteria to select studies for review. We chose studies that included original data analyses (rather than summaries of existing literature) conducted in the United States since 1990, and we primarily focused on the impact of wind power on birds and bats and on ways to mitigate those impacts. We did not include preconstruction assessments of wildlife impacts in our review. We excluded preliminary versions of studies when a more recent version was available.
We located studies using a database search with keywords of “wind power” and “birds,” “bats,” or “wildlife” in the following databases: AGRICOLA, DOE Information Bridge, National Environmental Publications Information, Energy Citations Database, Energy Research Abstracts, Environmental Sciences and Pollution Management, and JSTOR. In addition, we located studies using bibliographies of other studies and through publicly available lists of studies from the National Wind Coordinating Committee, the California Energy Commission, the National Renewable Energy Laboratory, and Bat Conservation International. We shared our list of studies with experts and asked them to identify any studies missing from our list. When studies were not publicly available, we contacted the authors and attempted to obtain copies. Using these methods and criteria, we obtained 31 studies. We reviewed the studies’ methodology, assumptions, limitations, and conclusions to exclude studies that did not meet a minimal level of methodological rigor. We excluded 1 study, leaving 30 studies that are used in this work. In addition to these studies, we also reviewed two summaries of studies produced by the National Wind Coordinating Committee. Although we generally did not use these two summary studies directly, we did use them as a check on our conclusions and findings from the studies we reviewed. We also interviewed experts and study authors from the Department of the Interior’s U.S. Fish and Wildlife Service (FWS), state government agencies, academia, the wind industry, and conservation groups and obtained their views on the risks of wind power facilities to migratory birds and other wildlife and on ways in which to minimize these risks.

To determine the roles and responsibilities of government agencies in regulating wind power facilities, we identified and evaluated relevant federal laws and regulations for wind power development. We reviewed a nonprobability sample of six states with wind power development—California, Minnesota, New York, Oregon, Pennsylvania, and West Virginia. We selected these states to reflect a range in installed capacity, different regulatory processes, a history of wind power development, and geographic distribution, and to reflect our requesters’ interests. For these states, we identified and evaluated relevant state and local laws and regulations for wind power development. We interviewed federal officials from FWS, the Bureau of Land Management, and Interior’s Office of the Solicitor as well as officials from the Department of Justice. We interviewed officials from FWS headquarters and from field office locations in the six states that we selected. We also interviewed officials from various state agencies, such as the Oregon Energy Facility Siting Council and the West Virginia Public Service Commission, and from local and county governments that were responsible for issuing permits or certificates for the development of wind power facilities in their states. Finally, we visited wind power facilities in California, New York, Oregon, Pennsylvania, and West Virginia and interviewed wind industry company officials.

To determine the roles and responsibilities of government agencies in protecting wildlife from the risks posed by wind power facilities, we identified and evaluated relevant federal environmental and wildlife protection laws and regulations. We interviewed FWS law enforcement officials from headquarters and the six states that we reviewed.
For the six states that we selected, we identified and evaluated relevant state and local environmental and wildlife protection laws. We also interviewed officials from state environmental and wildlife agencies in California, Minnesota, New York, Oregon, Pennsylvania, and West Virginia.

We conducted our work between December 2004 and July 2005 in accordance with generally accepted government auditing standards, including an assessment of data reliability and internal controls.

Table 3 includes only studies where calculating bird or bat mortality was a primary goal. Some studies may contain more than one study location.

The following are GAO’s comments on the Department of the Interior’s letter dated September 2, 2005. The Department of the Interior raised one issue with our recommendation, which we have addressed in the Agency Comments and Our Evaluation section of the report. We address below the four other points the department raised in its letter. In addition, the department provided technical comments that we have incorporated into the report, as appropriate.

1. We agree that it is important to point out that many of these studies were not scientifically peer-reviewed and have added a footnote to this effect in the body of the report. However, we disagree that in some cases protocols used in the studies were unknown. As we explain in appendix I, we only included studies that were determined to have reasonably sound methodologies. We did not include any study for which we were unable to assess the protocols or methodology.

2. We believe the section on law enforcement reflects continued investigation of “take” of federal trust species by wind turbines and FWS’s and the Department of Justice’s enforcement and prosecutorial discretion, although we have added some clarification on these points.

3. We did not find any instances in which the state or local agencies that regulate wind power included in our review had incorporated or adopted the interim guidelines into their own jurisdictional requirements for approving wind power facilities. We did, however, find agencies in two states that had used the guidelines to inform either their development of regulations or their monitoring of the wildlife impacts at operating wind power facilities.

4. We did not assess how various local controls provide for protection of individual animals that are interjurisdictional in their life cycles. The section of the report that pertains to state wildlife laws is descriptive in nature and serves to highlight the fact that state laws sometimes provide protections, beyond those in federal laws, to species that may be affected by wind power. We added language to highlight that federal jurisdiction for migratory birds has not been delegated to the states, and that primary responsibility for the protection of these birds resides with the federal government (Interior).

In addition to the individual named above, Patricia McClure, Assistant Director; José Alfredo Gómez; Kimberly Siegal; and William Roach made key contributions to this report. Important contributions were also made by Judy Pagano, John Delicath, and Omari Norman.

Anderson, R., N. Neumann, J. Tom, W. P. Erickson, M. D. Strickland, M. Bourassa, K. J. Bay, and K. J. Sernka. Avian Monitoring and Risk Assessment at the Tehachapi Pass Wind Resource Area, Period of Performance: October 2, 1996 – May 27, 1998. NREL/SR-500-36416. September 2004.

Arnett, E. B., technical editor. Relationships Between Bats and Wind Turbines in Pennsylvania and West Virginia: An Assessment of Bat Fatality Search Protocols, Patterns of Fatality, and Behavioral Interactions with Wind Turbines. A Final Report Submitted to the Bats and Wind Energy Cooperative. Bat Conservation International. Austin, Texas, USA. 2005.

Dooling, R. Avian Hearing and the Avoidance of Wind Turbines. NREL/TP-500-30844. June 2002.

Erickson, W. P., B. Gritski, and K. Kronner. Nine Canyon Wind Power Project Avian and Bat Monitoring Report, September 2002 – August 2003. Technical report submitted to Energy Northwest and the Nine Canyon Technical Advisory Committee. October 2003.

Erickson, W. P., J. Jeffrey, K. Kronner, and K. Bay. Stateline Wind Project Wildlife Monitoring Final Report, July 2001 – December 2003. Technical report peer-reviewed by and submitted to FPL Energy, the Oregon Energy Facility Siting Council, and the Stateline Technical Advisory Committee. December 2004.

Erickson, W. P., G. D. Johnson, M. D. Strickland, and K. Kronner. Avian and Bat Mortality Associated with the Vansycle Wind Project, Umatilla County, Oregon 1999 Study Year: Final Report. Prepared for Umatilla County Department of Resource Services and Development. February 7, 2000.

Hodos, W. Minimization of Motion Smear: Reducing Avian Collisions with Wind Turbines, Period of Performance: July 12, 1999 – August 31, 2002. NREL/SR-500-33249. August 2003.

Hoover, S. The Response of Red-Tailed Hawks and Golden Eagles to Topographical Features, Weather, and Abundance of a Dominant Prey Species at the Altamont Pass Wind Resource Area, California: April 1999 – December 2000. NREL/SR-500-30868. June 2002.

Howe, R. W., W. Evans, and A. T. Wolf. Effects of Wind Turbines on Birds and Bats in Northeastern Wisconsin. Report submitted to Wisconsin Public Service Corporation and Madison Gas and Electric Company. November 21, 2002.

Howell, J. A., and J. DiDonato. Assessment of Avian Use and Mortality Related to Wind Turbine Operations, Altamont Pass, Alameda and Contra Costa Counties, California, September 1988 Through August 1989: Final Report. November 7, 1991.

Howell, J. A., and J. Noone. Examination of Avian Use and Mortality at a U.S. Windpower Wind Energy Development Site, Montezuma Hills, Solano County, California: Final Report. September 10, 1992.

Howell, J. A. Bird Mortality at Rotor Swept Area Equivalents, Altamont Pass and Montezuma Hills, California. Transactions of the Western Section of the Wildlife Society 33:24-29. 1997.

Hunt, W. G., R. E. Jackman, T. L. Hunt, D. E. Driscoll, and L. Culp. A Population Study of Golden Eagles in the Altamont Pass Wind Resource Area: Population and Trend Analysis 1997. Report to the National Renewable Energy Laboratory, Subcontract XAT-6-16459-01. Predatory Bird Research Group, University of California, Santa Cruz. 1998.

Johnson, G., W. Erickson, J. White, and R. McKinney. Avian and Bat Mortality During the First Year of Operation at the Klondike Phase I Wind Project, Sherman County, Oregon. Prepared for Northwestern Wind Power. March 2003.

Johnson, G. D., W. P. Erickson, M. D. Strickland, M. F. Shepherd, and D. A. Shepherd. Avian Monitoring Studies at the Buffalo Ridge, Minnesota Wind Resource Area: Results of a 4-Year Study. Prepared for Northern States Power Company. September 22, 2000.

Johnson, G. D., M. K. Perlik, W. P. Erickson, and M. D. Strickland. Bat Activity, Composition, and Collision Mortality at a Large Wind Plant in Minnesota. Wildlife Society Bulletin 32(4):1278-1288. 2004.
An Assessment of the Impacts of Green Mountain Power Corporation's Wind Power Facility on Breeding and Migratory Birds in Searsburg, Vermont, July 1996 – July 1998. Prepared for the Vermont Department of Public Service. March 2002.
Kerns, J., and P. Kerlinger. A Study of Bird and Bat Collision Fatalities at the Mountaineer Wind Energy Center, Tucker County, West Virginia: Annual Report for 2003. Prepared for FPL Energy and the Wind Energy Center Technical Review Committee. February 14, 2004.
Koford, R., A. Jain, G. Zenner, and A. Hancock. Avian Mortality Associated with the Top of Iowa Wind Farm: Progress Report, Calendar Year 2003. February 28, 2004.
National Wind Coordinating Committee. Wind Turbine Interactions with Birds and Bats: A Summary of Research Results and Remaining Questions. Fact Sheet: Second Edition. November 2004.
Nicholson, C. P., R. D. Tankersley, Jr., J. K. Fiedler, and N. S. Nicholas. Assessment and Prediction of Bird and Bat Mortality at Wind Energy Facilities in the Southeastern United States. Prepared for the Tennessee Valley Authority. 2005.
Orloff, S., and A. Flannery. Wind Turbine Effects on Avian Activity, Habitat Use, and Mortality in Altamont Pass and Solano County Wind Resource Areas, 1989-1991: Final Report. Prepared for the Planning Departments of Alameda, Contra Costa, and Solano Counties and the California Energy Commission, Grant #990-89-003. March 1992.
Orloff, S., and A. Flannery. A Continued Examination of Avian Mortality in the Altamont Pass Wind Resource Area. Prepared for the California Energy Commission. January 1996.
Osborn, R. G., K. F. Higgins, C. D. Dieter, and R. E. Usgaard. Bat Collisions with Wind Turbines in Southwestern Minnesota. Bat Research News. Vol. 37, No. 4. Winter 1996.
Osborn, R. G., K. F. Higgins, R. E. Usgaard, C. D. Dieter, and R. D. Neiger. Bird Mortality Associated with Wind Turbines at the Buffalo Ridge Wind Resource Area, Minnesota. The American Midland Naturalist. 143:41-52. 2000.
Schmidt, E., A. J. Piaggio, C. E. Bock, and D. M. Armstrong. National Wind Technology Center Site Environmental Assessment: Bird and Bat Use and Fatalities – Final Report, Period of Performance: April 23, 2001 – December 31, 2002. NREL/SR-500-32981. January 2003.
Smallwood, K. S., and C. G. Thelander. Developing Methods to Reduce Bird Mortality in the Altamont Pass Wind Resource Area. Final Report by BioResource Consultants to the California Energy Commission, Public Interest Energy Research – Environmental Area, Contract No. 500-01-019; L. Spiegel, Program Manager. August 2004.
Smallwood, K. S., and L. Neher. Repowering the APWRA: Forecasting and Minimizing Avian Mortality Without Significant Loss of Power Generation. The California Energy Commission, PIER Energy-Related Environmental Research. CEC-500-2005-005. December 2004.
Thelander, C. G., K. S. Smallwood, and L. Rugge. Bird Risk Behaviors and Fatalities at the Altamont Pass Wind Resource Area, Period of Performance: March 1998 – December 2000. NREL/SR-500-33829. December 2003.
Young, D. P., Jr., W. P. Erickson, R. E. Good, M. D. Strickland, and G. D. Johnson. Avian and Bat Mortality Associated with the Initial Phase of the Foote Creek Rim Windpower Project, Carbon County, Wyoming, November 1998 – June 2002. Prepared for Pacificorp, Inc., SeaWest Windpower, Inc., and the Bureau of Land Management. January 10, 2003.
Young, D. P., Jr., W. P. Erickson, M. D. Strickland, R. E. Good, and K. J. Sernka.
Comparison of Avian Responses to UV-Light-Reflective Paint on Wind Turbines: Subcontract Report, July 1999 – December 2000. NREL/SR-500-32840. January 2003.
Wind power has recently experienced dramatic growth in the United States, with further growth expected. However, several wind power-generating facilities have killed migratory birds and bats, prompting concern from wildlife biologists and others about the species affected and the cumulative effects on species populations. GAO assessed (1) what available studies and experts have reported about the impacts of wind power facilities on wildlife in the United States and what can be done to mitigate or prevent such impacts, (2) the roles and responsibilities of government agencies in regulating wind power facilities, and (3) the roles and responsibilities of government agencies in protecting wildlife. GAO reviewed a sample of six states with wind power development for this report.

The impact of wind power facilities on wildlife varies by region and by species. Specifically, studies show that wind power facilities in northern California and in Pennsylvania and West Virginia have killed large numbers of raptors and bats, respectively. Studies in other parts of the country show comparatively lower levels of mortality, although most facilities studied have killed at least some birds. However, many wind power facilities in the United States have not been studied, and scientists therefore cannot draw definitive conclusions about the threat that wind power poses to wildlife in general. Further, much is still unknown about migratory bird flyways and overall species population levels, making it difficult to determine the cumulative impact that the wind power industry has on wildlife species. Notably, only a few studies exist concerning ways to reduce wildlife fatalities at wind power facilities.

Regulating wind power facilities is largely the responsibility of state and local governments. In the six states GAO reviewed, wind power facilities are subject to local- or state-level processes, such as zoning ordinances, to permit the construction and operation of wind power facilities. As part of this process, some agencies require environmental assessments before construction. However, regulatory agency officials do not always have the experience or expertise to address environmental and wildlife impacts from wind power. The federal government plays a minimal role in approving wind power facilities, regulating only facilities that are on federal lands or that have some form of federal involvement, such as receiving federal funds. In these cases, the wind power project must comply with federal laws, such as the National Environmental Policy Act, as well as any relevant state and local laws.

Federal and state laws afford wildlife generalized protections from wind power, as they do from any other activity. The U.S. Fish and Wildlife Service (FWS) is the primary agency tasked with implementing wildlife protections in the United States. Three federal laws—the Migratory Bird Treaty Act, the Bald and Golden Eagle Protection Act, and the Endangered Species Act—generally forbid harm to various species of wildlife. Although significant wildlife mortality events have occurred at wind power facilities, the federal government has not prosecuted any cases against wind power companies under these wildlife laws, preferring instead to encourage companies to take mitigation steps to avoid future harm. All six of the states GAO reviewed have statutes that can be used to protect some wildlife from wind power impacts; however, like FWS, no state has taken prosecutorial action against wind power facilities where wildlife mortalities have occurred.
The Mutual Educational and Cultural Exchange Act of 1961 authorizes the J-1 Exchange Visitor Program. The program provides foreign nationals with opportunities to participate in exchange programs in the United States and return home to share their experiences, while also encouraging Americans to participate in educational and cultural programs in other countries. Foreign nationals who participate in the program enter the United States with a J-1 visa. The program has grown considerably over the years. Figure 1 shows the number of J-1 visas issued since 1995.

The program comprises 13 categories of exchanges, which are grouped under private sector programs or academic and government programs. The private sector programs include the Alien Physician, Au Pair, Camp Counselor, Summer Work Travel, and Trainee categories. The academic and government programs include the Government Visitor, International Visitor, Professor and Research Scholar, Short-Term Scholar, Specialist, Student (Secondary School Student, College/University Student), and Teacher categories. The Exchange Visitor Program had about 282,000 exchange visitors in the 13 program categories in fiscal year 2004.

State's Bureau of Educational and Cultural Affairs administers the Exchange Visitor Program through the Office of Exchange Coordination and Designation. This office administers the program through the designation of United States organizations to conduct exchange programs in the various exchange categories. There are 1,457 designated exchange programs. Designated sponsors are responsible for screening and selecting qualified applicants for program eligibility. The office determines which organizations will be designated to administer the international exchange programs on the basis of information provided during the application process, in accordance with regulatory requirements. The office also develops and administers policy and regulations for the exchange categories and oversees sponsoring organizations' compliance.

Sponsors may be for-profit or nonprofit organizations; businesses; state, local, or federal government agencies; and education-related institutions. Sponsors sometimes contract with overseas organizations—such as student travel agencies—as local partners to help identify and screen exchange program applicants. Some sponsors serve as intermediaries between the exchange visitor and a third party, which engages the exchange visitor in the program activity for the category in which the visitor is being sponsored. For example, for trainees, the third parties are the organizations where the exchange visitors will receive training. Third parties consist of a variety of organizations, which include—but are not limited to—hotels, law firms, restaurants, Internet companies, and other private and public sector businesses and organizations. Sponsors are responsible for overseeing the operations of their overseas partners and any third parties they work with.

Chief among the sponsors' responsibilities is managing information on each exchange participant in DHS' Student and Exchange Visitor Information System (SEVIS), which has been in operation since January 2003. SEVIS, which is maintained and administered by DHS' U.S. Immigration and Customs Enforcement, is an Internet-based system that electronically collects information on all nonimmigrants who enter the United States with student or exchange visitor visas, as well as their dependents.
Once a participant's data are entered into SEVIS by the sponsor, a Form DS-2019 is issued by the system in the applicant's name. The identifying information in SEVIS can be reviewed by a consular officer at the time of the visa interview and during the processing of the exchange visitor at the port of entry. Upon the arrival of the Exchange Visitor Program participant, the sponsor is required to document the participant's involvement in the program activity and record information on the location of the visitor's employment or training and U.S. residence address. Figure 2 describes the sponsoring organizations' roles and responsibilities.

The Summer Work Travel Program is among the largest of the 13 categories of exchanges, with 89,453 participants in 2004. This program is designed to achieve the educational objectives of international exchange by involving bona fide foreign college or university students directly in the daily lives of U.S. citizens through travel and temporary work opportunities. The Trainee Program, with 27,475 participants in 2004, provides foreign nationals the opportunity to enhance their skills in their chosen career field through participation in a structured training program. Summer Work Travel sponsors help the participants obtain jobs and provide pre-arrival information, an orientation to life in the United States, and contact information in the event of problems. Sponsors of trainees are also required to provide pre-arrival information to the trainees and are directly responsible for all aspects of the trainees' programs, including selection, orientation, training, supervision, and evaluation. About 52 Summer Work Travel and 170 Trainee sponsors operated programs in 2005.

Thousands of employers participate in the Summer Work Travel and the Trainee programs. Typical Summer Work Travel employers include amusement parks, resorts, hotels, and restaurants; the jobs generally include ride operators, waiters, lifeguards, receptionists, and guides. The types of organizations that use trainees include corporations, architectural firms, hotels, restaurants, development organizations, airlines, investment and financial services entities, and manufacturing companies. The participants are placed in locations where they receive training in engineering, drafting and design, biomedical technology, agricultural technology, hospitality administration and management, marketing, agricultural and food products processing, culinary arts and chef training, financial management, and many other careers.

Throughout the existence of the Summer Work Travel Program, there has been a geographic shift in which countries provide the most program participants. In the past, Western Europe had the largest numbers of participants. More recently, however, the largest numbers of Summer Work Travel participants have been citizens of Eastern European countries, including Poland, Bulgaria, the Czech Republic, and Romania. Table 1 lists the 10 countries with the largest numbers of Summer Work Travel and Trainee participants.

State has not exerted sufficient management oversight of the Summer Work Travel and the Trainee programs to guard against abuse of the programs. State primarily ensures compliance with program regulations through a paperwork review, and monitoring by sponsors is inconsistent.
Moreover, some sponsors believe that the program regulations need updating, and State officials have expressed concerns about the enforceability of the sanction/revocation process provided for in the Exchange Visitor Program regulations. Sponsors have also expressed concern about their communication with State. State has acknowledged problems, is establishing a compliance unit to monitor program activities, and is revising the regulations; however, progress has been slow.

State's monitoring efforts largely consist of reviewing written information provided by sponsors, with minimal effort to verify such information through program visits. State relies on sponsors to provide written information, primarily through annual reports, which describe the number of individuals a sponsor has placed and give a brief narrative on program activities, difficulties encountered, and their resolution. Exchange program regulations require that sponsors promptly notify State about any serious problem or controversy that could cause State or the sponsor's Exchange Visitor Program notoriety or disrepute. When a sponsor reports a problem, State officials follow up by telephone, e-mail, fax, or letter, according to sponsors and State officials. However, State rarely visits sponsors to observe program activities and verify the information that they provide, although such visits are a good internal control practice for ensuring that management's directives are being carried out. We found that in the past 4 years, State officials made visits to only 8 of its 206 Summer Work Travel and/or Trainee sponsors.

Additionally, information on problematic incidents provided by the sponsors, along with any notes or correspondence on the matter, is filed as part of the material that is examined when the sponsor applies for redesignation, which is required every 2 years. In their applications to State for redesignation, all sponsors are required to estimate the number of exchange visitors their organization would like to place and to provide information on the legal, financial, and managerial resources they have to manage exchange programs in compliance with federal regulations. State reviews these applications along with the annual reports and other documentation collected over the 2-year period and determines whether it should grant a 2-year designation to a sponsor and the number of participants the sponsor will be authorized to include in its exchange program activity. The vast majority of sponsors who apply for redesignation are approved.

State officials said that their staffing levels and lack of travel funds do not allow for intensive monitoring of the Exchange Visitor Program sponsors. The Office of Exchange Coordination and Designation has five Program Designation officers who serve as points of contact and are responsible for the day-to-day administration and management of the 13 exchange programs.

Exchange Visitor Program regulations require sponsors to effectively administer and monitor their exchange programs and to ensure that exchange participants engage in activities consistent with the appropriate exchange category. Nevertheless, recent discoveries by consular officers overseas suggest that some sponsors do not consistently carry out their oversight and monitoring responsibilities. For example, a trainee applicant submitted a training offer that included the name and organizational information of a legitimate financial services company but listed as a contact an individual with a noncompany e-mail address.
When the consular officer checked the contact information, he learned that the contact person was not affiliated with the financial services company. The sponsor admitted to only spot-checking the viability of third-party organizations and training plans. In another case, when a consular officer at another post checked a company's Web site to verify the job offer of a foreign student applying for the Summer Work Travel Program, he discovered that the company was a topless bar. In yet another case, consular officers noticed a number of trainee applicants who said they were going to the United States to work as kitchen help and wait staff. When State contacted the U.S. sponsor about these applicants, the sponsor stated that it relied on another organization to help it select and place the trainees and admitted that it had not followed up directly with each trainee and employer to ensure that its standards were being satisfied.

Sponsors are required to take all reasonable steps to ensure that third parties know and comply with applicable provisions of the Exchange Visitor Program regulations. However, State does not offer any guidance on how the sponsors should carry out their monitoring and oversight responsibilities. Two of the sponsors we met with said that, after they discovered that trainees they had sponsored were not receiving any training, they established new practices to visit at least 10 percent of their exchange program participants, as well as all employers and exchange participants involved in problems.

During our review, several sponsors we met with raised concerns about the clarity of the program regulations and complained that varying interpretations of the regulations make it difficult for them to implement the program. Moreover, State officials believed that the sanctions provided in the regulations are not adequate to control the activities of sponsors who incorrectly implement the program. State has been revising the regulations but has not finalized the changes. Six of the 13 sponsors that we met with described a range of problems pertaining to the regulations, particularly regarding the Trainee Program. Their comments included the following:

- The Trainee regulations lack specificity and are open to differing interpretation by State and the sponsors.
- Dealing with State has become more complicated for sponsor organizations because State has at times changed rules outside of the formal regulation-setting process.
- The Trainee regulations are so cumbersome that it is unclear what should be included in a training plan. Training plans vary widely among sponsors, and there is no guidance on what constitutes a good plan.
- The regulations should include a separate category for interns.

A past OIG report also cited problems with the regulations and recommended in particular that the Trainee Program regulations more clearly define what is not considered training. Six of the sponsors we met with and the representative of an association of sponsors expressed concern with State's interpretation of certain provisions of the regulations, while seven sponsors and the association representative also said that State does not consistently disseminate its interpretations of or guidance on the regulations to the sponsors. For example, the regulations state that the maximum period of participation in the Exchange Visitor Program for a trainee, with the exception of flight training, shall not exceed 18 months.
Some sponsors said they interpreted this provision to mean that an individual could come to the United States one or more times as a trainee as long as the combined duration of the visits did not exceed 18 months. In 2002, however, some sponsors said they were told that the Trainee Program was restricted to a one-time training session not to exceed 18 months. This clarification surprised some sponsors, according to one sponsor, who was notified of the change through a fax from State. A State official explained that all Trainee sponsors were sent a message through SEVIS in October 2003 explaining State's policy on this matter. According to the documentation, the message was to be addressed to all sponsors that conduct Trainee programs and to all responsible officers and their alternates.

Moreover, a few sponsors said that guidance or interpretations of various provisions of the regulations have been communicated inconsistently. For example, representatives of two sponsoring organizations said that State no longer enforces the regulatory limit under which repeat participants may make up no more than 10 percent of a sponsor's Summer Work Travel recruits, but that State made no formal effort to inform the sponsors of the change. One of the sponsors said that the organization learned about the change from the U.S. Embassy in Warsaw. A State official explained that policy changes are announced in the Federal Register or by written notice to all sponsors. According to the official, two cables were sent to all overseas posts clarifying the requirements of the Summer Work Travel Program and the removal of the 10 percent rule regarding repeat participants. In addition, copies of these cables were sent by e-mail to all Summer Work Travel sponsors, according to the official.

State is also reviewing the sanctions provided in the regulations. State officials, including the Principal Deputy Assistant Secretary of the Bureau of Educational and Cultural Affairs, said the current sanction/revocation process that State may use to limit the activities of sponsors that do not comply with the regulations—or to remove a sponsor from the program altogether—is difficult to enforce. The sanctions range in severity from a letter of reprimand to an action to revoke a sponsor's designation. One sanction that State uses is to deny a sponsor's request for certification forms until the compliance issue has been resolved.

For example, State officials described a recent case in which State's attempt to revoke a sponsor's designation was challenged by the sponsor in district court. In this case, State received complaints from two trainees who alleged they were not receiving the training that they had expected and were displeased with their training assignments and compensation. The sponsor in this case had placed individuals in positions in the hospitality industry. State investigated the sponsor's operation, interviewed a number of program participants, and concluded that the sponsor was not placing participants in management training positions but was operating a work program, in violation of the regulations. State's in-house revocation board agreed and supported the sanction of revocation. However, in remanding the matter to the board for further proceedings, the district court concluded that State's investigation had been too limited and had not produced evidence adequate to support the severe punishment of revocation.
After a second hearing, the board overturned the revocation, thereby enabling the sponsor to resume its program. State concluded that the sanctions provisions needed streamlining and tightening for more effective and assured application.

State Slow to Revise Regulations

State has acknowledged that the regulations need revising; however, more than 3 years after revisions were suggested by the exchange industry, they are still in draft form. According to the Acting Director of the Office of Exchange Coordination and Designation, revisions are in process. For example, State is revising the Trainee regulations to create a separate Intern category to accommodate younger, less experienced applicants, such as students or recent university graduates seeking to gain practical work experience. The exchange community generally supports the creation of a new Intern category, and in November 2001, an association of organizations that sponsor exchange students submitted to State its proposals for revising the regulations. The Acting Director of the Office of Exchange Coordination and Designation said that the sections of the regulations on interns and trainees are still being reviewed at State and attributed the delay in completing the regulations to the review process. He also stated that State's Office of Legal Advisor was given responsibility for developing new sanctions for the program 2 years ago. According to the Assistant Legal Advisor for Public Diplomacy and Public Affairs, revised sanctions regulations were drafted and shared with the Bureau of Educational and Cultural Affairs for review and comment. The Assistant Legal Advisor stated that recent guidance from the Bureau's Principal Deputy Assistant Secretary suggested that further revisions were warranted to best suit evolving program needs. Once State has completed its review, the Department of Justice will be consulted prior to publication of the revised regulations. She stated that while this process could take an additional 2 to 6 months to complete, the timing should fall well within the timelines and milestones set forth in a 24-month corrective action plan proposed by the Bureau in March 2005.

According to GAO guidance on internal controls, managers should ensure that effective external communications occur with groups that affect their programs, projects, operations, and other activities. According to seven of the sponsors we met with and the organization representing sponsors, communication with State is a problem. For example, one sponsor described State as reactive rather than proactive in its communication practices. Representatives of four of the sponsoring organizations complained that program officers were not always responsive to their inquiries or were difficult to reach. Some sponsors attributed the difficulties to an insufficient number of State staff. Some sponsors also complained about a lack of feedback from a study commissioned by State on SEVIS compliance and other issues, which was completed in December 2003. The report concluded that in some cases the sponsors' staff was not adequately trained, that the sponsors did not maintain required records, and that they did not provide full oversight of their foreign partners, employers, third-party organizations, or exchange participants. The report made a number of recommendations on how State should further clarify the program regulations and help train the sponsors on both State regulations and their role in maintaining SEVIS data.
We met with six sponsors that participated in the study, and none had received feedback from State on the results. One said the Acting Director of the Office of Exchange Coordination and Designation made some general comments about the study at a meeting of an industry association.

Communication between the Office of Exchange Coordination and Designation and the overseas posts has also been cited as a concern. For example, one sponsor said there is no mechanism to ensure consistent interpretation of the regulations by the overseas posts, while another said the posts are the last to know about program changes. In one instance, a sponsor said that even though the office had told sponsors that applicants can have only one trainee placement, one post was still telling applicants in 2004 that they could participate more than once. Further, a representative of an exchange industry organization stated that the Office of Exchange Coordination and Designation knows too little about what is really going on in the field.

This situation may have been the reason that ineligible students from at least one country, Ireland, were allowed to participate in the Summer Work Travel Program for over 30 years before an apparent misinterpretation of the regulations was discovered. In 2002, a consular officer in Dublin asked for clarification on the eligibility of students who had completed their course work but had not formally graduated, according to an embassy official. In response, State instructed the posts not to change their selection procedures for the 2003 season. However, according to the Acting Director of the Office of Exchange Coordination and Designation, State ultimately confirmed that such students were not eligible for the program unless they could demonstrate enrollment in another degree program and sent a cable to the posts with its decision in 2004. According to the local program representatives, such students had constituted up to a third of the participants from Ireland.

State has been slow to implement the OIG's 2000 recommendation that it devote the necessary resources to perform more rigorous oversight. The Principal Deputy Assistant Secretary of the Bureau of Educational and Cultural Affairs and the Acting Director of the Office of Exchange Coordination and Designation acknowledged that State has been slow to establish a compliance unit. The Acting Director said funding to establish a compliance unit had been requested for several years without success. The Bureau requested funding in fiscal years 2002 and 2003 for five additional positions for the compliance unit, but State did not approve the request for submission to the Office of Management and Budget (OMB), according to a State official. The Principal Deputy Assistant Secretary stated that the unit was not funded initially because of a lack of senior managerial support. State has since supported the request for funding because of congressional interest and the increased emphasis on strengthening program management, according to the Principal Deputy Assistant Secretary. Subsequent requests for funding for the unit for fiscal years 2004 to 2006 were approved by State but not approved by OMB.
However, the officials told us they realized that the program regulations have been ignored by some exchange program sponsors, agreed that a compliance unit was needed, and are in the process of establishing the unit by diverting existing positions. According to State officials, the unit as currently conceived would initially consist of one Foreign Service officer and two program analysts and would report to the Principal Deputy Assistant Secretary. The officials stated that the compliance unit will rely on structured reviews conducted by contractors. They said State will use the system currently in place at DHS, which uses contractors for on-site reviews of schools to verify their eligibility to register foreign students in SEVIS. The contractors will verify the information that the sponsors submit as part of the redesignation process and may check the names of the responsible and alternate responsible officers—and possibly board members—through law enforcement databases, according to the State officials. They said that as part of the new compliance effort, sponsors for all nonacademic exchanges will be required to contract for and submit annual audits of their activities. The compliance unit will develop a management template to guide the audits. Currently, only sponsors of the Au Pair Program are required to submit such audits.

It is too early to determine whether implementation of the planned compliance unit will resolve all of the issues identified by GAO and OIG. For example, using contractors to visit the sponsors will address OIG's criticism that State does not visit the sponsors. However, State has not determined what information the contractors need to review to assess how well the sponsors monitor employers and third parties. Moreover, State has not provided a written plan describing in detail how the compliance unit will operate. Funding also remains an issue. State initially plans to cover the cost of the new compliance unit by redirecting current appropriations and obtaining—as agreed upon by OMB—about $450,000 of the SEVIS fees collected from exchange program sponsors by DHS. Further, State will again request funding for the compliance unit in the 2007 budget request.

GAO internal control standards instruct agencies to identify risks that could impede the efficient and effective achievement of their objectives and to assess the impact of those risks. A number of potential risks are associated with the Summer Work Travel and the Trainee programs, including the risk of participants using the programs to remain in the United States beyond their authorized time. Exchange participants may also use the programs as a means of fraudulent immigration. There is also the potential for the Trainee Program to be misused as an employment program, although State does not have data on the extent to which such abuse occurs. In addition, exchange participants could be exploited by employers or other third parties. While State investigates complaints, it does not know the extent to which participants have negative experiences because it does not systematically document and analyze complaints made by program participants or reports of serious problems submitted by the sponsors.

Although exchange program participants are expected to return to their countries following completion of their programs, there is information indicating that some participants remain beyond the time that they were authorized to stay.
Information on overstays is available from DHS' arrival and departure database and from the results of returnee validation studies conducted by overseas posts. For example, we asked DHS to check the total number of J-1 visa recipients in all exchange categories who had completed their programs since January 2003 against its arrival and departure database to determine which visa holders had departed the country. The results showed that about 362,000 exchange participants had concluded their programs during this period. The data showed that about 36 percent of the participants had departed the United States by the end of their programs and that about 24 percent were potential overstays. However, the full extent of overstays is uncertain because DHS could not find information on the remaining 40 percent of the J-1 visa holders, possibly because they entered the United States before the current entry system was operational.

DHS' primary method for estimating overstays is to match the arrival portion of a form collected by DHS when the visitor enters the United States to the departure portion of the form, which is generally collected by the airlines when the visitor departs. One of the weaknesses in this system is that the departure portion of the form is not always turned in to airline staff. Further, data entry errors by DHS contractors make it difficult to match the forms. The U.S. government is phasing in a more comprehensive entry-exit system, US-VISIT, to correct the weaknesses in the current system.

Some overseas posts periodically conduct validation studies to determine whether visa applicants who received J-1 visas at their post have returned to that country. These studies generally consist of telephone inquiries to the visa recipients, although some posts have required J-1 visa recipients to report to the post upon their return. The results vary. For example, one post in the former Soviet Union conducted a study of its 2004 Summer Work Travel season showing that, as of January 20, 2005, 26 percent of the 2004 participants from that country remained in the United States. This post reported overstay rates of 29 percent for 2003, 25.6 percent for 2002, and 27 percent for 2001. Other posts have reported lower overstay rates. For example, a post in a Western European country conducted a validation study of its 2004 Summer Work Travel Program showing that all of the visa applicants in the sample who were issued J-1 visas had returned home, as required.

The posts can use the results of such studies to guide their decision making when they adjudicate visas. For example, the post with a 26 percent overstay rate in 2004 developed additional selection factors for the 2005 Summer Work Travel season to assess the potential for individuals to return to their country after their programs are completed. However, these studies are not always statistically valid. Moreover, the Acting Director of the Office of Exchange Coordination and Designation said the post validation studies are not useful to program officials because the posts do not follow a standardized methodology.

J-1 visitors who remain in the United States beyond their programs do not necessarily stay for long periods of time, and not all of them remain in the United States illegally. According to consular officials we talked with at one post, some participants in the Summer Work Travel Program from their country who remain in the United States beyond their program date might typically overstay anywhere from a few days to no more than a few months.
The consular officials said the participants sometimes remain in the United States longer if they have not earned sufficient money to cover their expenses. Others change their visa status to another nonimmigrant category. For example, the 2004 validation study by one post in the former Soviet Union showed that 39 percent of those who had remained in the United States beyond the completion of the program had changed their visa status, gotten married, or sought asylum. Changing status from a J-1 visa to another visa category is permitted under U.S. immigration laws in certain circumstances, but Bureau of Consular Affairs and other officials have stated that it is not the intent of the program.

DHS data show that a growing number of J-1 visa holders have applied for political asylum; between 1995 and 2004, about 6,300 individuals who entered the United States with J-1 visas applied for asylum. According to DHS, the number of asylum applications from J-1 visa holders from former Soviet Union countries has more than doubled since 2000. These countries include Russia, Belarus, Armenia, and Ukraine. The officials are concerned that some of these claims are fraudulent, as they doubt that J-1 visa holders, who have to be students in good standing in their countries, are being persecuted. The recent increase in the number of such claims and the similarities of the stories, which suggest coaching, are also indicators of fraud, according to DHS officials. DHS' Fraud Detection and National Security Unit and State's Bureau of Consular Affairs Office of Consular Fraud Prevention are monitoring this issue. These units would turn over any suspected fraud to U.S. Immigration and Customs Enforcement for investigation. According to a DHS official, U.S. Immigration and Customs Enforcement is conducting asylum fraud investigations in Los Angeles and Cleveland. Among the targets of those investigations are individuals who were admitted with J-1 visas, as well as several other classes of nonimmigrants.

The potential exists for the Trainee Program to be misused as an employment program. Regulations strictly prohibit the use of the trainee category for ordinary employment purposes, stating in particular that sponsors may not place trainee participants in positions that are filled or would be filled by full-time or part-time employees. State and the overseas posts provided some information describing Trainee Program abuses, but they did not have information on the extent of the problem. In one example, a sponsor learned that an organization it had contracted with to help select and place trainees had placed the participants with employers that had contracts to provide H-2B temporary workers. The organization had participated in the Trainee Program because the H-2B visa category was at its limit, and the organization was looking for an alternative way to receive foreign workers.

In another example, a staffing agency recruited about 650 electrical engineers, primarily from Slovakia, Bulgaria, and Romania, to come to the United States as trainees. Ostensibly, the plan was to train the engineers in the United States because they would be working on projects in Europe that required knowledge of U.S. standards. However, the organization served as an electrical contractor, placing the trainees on U.S. construction projects as electricians. Because it was not designated to sponsor trainees, the staffing agency approached a few designated sponsors to issue the DS-2019 forms.
State received complaints from the trainees and the electricians' union and investigated this case. To correct the situation, the sponsors and the union eventually found appropriate placements for some of the engineers; others returned to their countries. According to a State official, the staffing agency went out of business, and the principal party was indicted on criminal charges.

The Acting Director of the Office of Exchange Coordination and Designation described agricultural training programs as problematic because of the potential for fraud. He said the abuses are not hidden: there is often not even an attempt to represent the jobs as training, and employers refer to the participants as employees rather than trainees. For example, a 2004 DHS investigation involved four Chinese nationals brought to the United States by an individual to participate in an agricultural program sponsored by a Florida university. The trainees were placed with a dairy farm that had an agreement with the university. DHS found that only one trainee had a firm grasp of English, which called into question the trainees' eligibility for J-1 visas as well as whether the trainees were receiving training or were simply employed at the farm. Upon further investigation, DHS found that the individual had brought 17 trainees to the United States to participate in the university's training program. The trainees were placed at four participating dairies. DHS also found that only one of the dairies reported having a structured training plan. According to DHS, the university violated regulations concerning sponsoring organizations for exchange trainees by failing to ensure that the J-1 trainees were properly compensated and possessed the language ability required to participate in an English language-based training program. DHS further stated that the dairies were exploiting the J-1 trainees for cheap labor and in most cases were not concerned with actually training them beyond what was necessary to perform their work. The OIG's 2000 report also described abuse of the Trainee Program. In two of the cases that the OIG investigated, U.S. workers complained that they were replaced by trainees.

Despite such misuses, Labor officials stated that the exchange programs are not likely to have any effect on the U.S. labor market because of the small number of J-1 exchange visitors (about 283,000 in fiscal year 2004) relative to the U.S. workforce. However, the U.S. government does not collect data to assess any potential effect of exchange programs on the U.S. labor market. Labor officials said that a monthly household survey, the Current Population Survey, reviews a sample of households to compile labor statistics on foreign-born workers. However, the number of exchange program participants is so small that even if they were captured by this survey, there would be no way to separate the effect of their labor participation from that of other foreign-born workers. It is also possible that exchange participants would not be captured in the survey because of their housing arrangements; for instance, if they live in dormitories or resort-provided housing rather than traditional housing units, they may not be included in the survey sample.

The Summer Work Travel and Trainee participants generally have positive experiences in the United States, according to the sponsors and participants we met with. Some sponsors and overseas representatives survey their participants about their experiences.
One of the sponsors said its research showed that about 85 percent of its participants were satisfied with their placements. All of the Summer Work Travel and Trainee participants whom we met with described their overall experiences as positive. When the participants do complain, the complaints are generally minor, usually involving disappointments with their placement, housing, or location.

The Office of Exchange Coordination and Designation investigates all complaints that are brought to its attention, according to State officials. In addition, the regulations require the sponsors to inform State of any serious problem or controversy that could be expected to bring State or the sponsors' exchange programs into disrepute. However, we found that although State may follow up on such reports, it does not systematically document or analyze them. Such an analysis could be used to identify program weaknesses. State acknowledged that it does not have procedures for recording and maintaining all complaints in all categories and stated that such procedures are being prepared and will be incorporated into the Foreign Affairs Manual.

Occasionally, the exchange participants have negative experiences as a result of exploitation by a third party. A State official described an incident in which about 45 Summer Work Travel participants were placed in substandard housing. Apparently, the housing was leased by an employee of the sponsoring organization. The sponsor subsequently placed the students in better housing, and the OIG is investigating the incident at the Exchange Visitor Program office's request. State officials described another situation in which they fear a third party might have exploited the exchange participants. In this case, Bulgarian Summer Work Travel participants were approached by an employee of the local program representative while they were still in Bulgaria and told that the jobs arranged for them in New Jersey by the U.S. sponsor were no longer viable and that they were instead to report to jobs in Florida. As a result, when the Bulgarians arrived in the United States, they refused to continue on to New Jersey with the sponsor's representative, who met them at the airport. Instead, they had tickets to Florida and went there to work for a third-party organization that provided cleaning services to hotels. State and DHS are currently investigating this case.

When such situations receive negative media attention, it can further undermine the purpose of the program. For example, a July 13, 2005, article in a New Hampshire newspaper reported on the plight of three students from Romania who arrived in the United States on July 5, 2005, to find that the jobs they were promised no longer existed. News of these situations may even reach the foreign media. For example, a June 2004 United Press International article concerning an incident affecting Russian students stated that the incident was also reported by the Moscow Times. As a result of such situations, at least one foreign government has discussed its concerns with U.S. embassy officials in its country, according to a Bureau of Consular Affairs official.

The Summer Work Travel and Trainee Exchange Visitor Program categories are important components of U.S. public diplomacy efforts. However, State has not exercised sufficient management oversight of the programs to ensure that they operate as intended and are not abused.
State has taken some action to address identified deficiencies, such as beginning efforts to revise the regulations, but progress has been slow. Also, despite recommendations from State's OIG, State has been slow to establish a compliance unit, which would bolster oversight efforts. GAO guidance on internal control standards instructs agencies to identify risks that could impede the efficient and effective achievement of program goals. Once these risks are identified, they should be analyzed for their possible effect. For example, an analysis of complaints and problems that participants and sponsors report could uncover program weaknesses, and the results could be used to guide the efforts of the compliance unit. Assessing such risks is a necessary step toward mitigating them. But State has not taken action to assess the risks associated with the Summer Work Travel and the Trainee programs, in part because limited data are available.

This report recommends that the Secretary of State take the following three actions to enhance the overall management and monitoring of the Summer Work Travel and the Trainee programs:

- fully implement a compliance unit to better monitor exchange program activities and address deficiencies;
- update and amend the regulations where necessary; and
- develop strategies to obtain data, such as information on overstays and program abuses, to assess the risks associated with the programs, and use the results of this assessment to focus management and monitoring efforts.

State provided written comments on a draft of this report. These comments are reprinted in appendix II. State and DHS also provided technical comments, which we have incorporated into the report as appropriate. In its comments, State acknowledged weaknesses in its oversight and administration of the Exchange Visitor Program and reported that it has designated program oversight and administration as a weakness under the Federal Managers' Financial Integrity Act. State described a number of actions it is taking to implement each of our recommendations, including developing a corrective action plan, establishing a compliance unit, working to revise program regulations, and working with DHS to gather data related to the tracking of overstays.

As we agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will then send copies of this report to interested congressional committees and to the Secretaries of State, Homeland Security, and Labor. We will also make copies available to others upon request. In addition, the report will be available at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4128 or fordj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.
To examine how State manages the Summer Work Travel and the Trainee programs to ensure that only authorized activities are carried out under the programs, we

- reviewed previous GAO and OIG reports, program regulations and guidance, cables, sponsors' annual reports, and other information pertaining to the programs;
- reviewed program files maintained by the Bureau of Educational and Cultural Affairs, Office of Exchange Coordination and Designation, for selected Summer Work Travel and Trainee sponsors to gain an understanding of the kinds of problems that the sponsors report to the Department of State (State) and how State has addressed them;
- interviewed officials from State, including the Bureau of Educational and Cultural Affairs and the Bureau of Consular Affairs, and from the Department of Homeland Security (DHS) to discuss how the U.S. government manages the programs;
- interviewed Department of Labor (Labor) officials about the impact of the programs on the U.S. labor market and Social Security Administration officials about issues related to Social Security cards for exchange visitors; and
- met with nine sponsors of the Summer Work Travel Program that accounted for 75 percent of the participants in 2004 and 13 sponsors of the Trainee Program that accounted for 54 percent of the participants in 2004, as well as an official of an association of exchange program sponsors, to discuss how sponsors carry out their monitoring and oversight responsibilities.

We relied on data from DHS' Student and Exchange Visitor Information System (SEVIS) to identify sponsors and the number of participants. The SEVIS database is used by exchange program managers, sponsors, and U.S. government agencies to keep track of individuals who enter the United States on exchange visitor and student visas. We reviewed SEVIS data and discussed their reliability with State and DHS officials and sponsors, basing our assessment on descriptions of the database, procedures for entering and managing the data, and the internal checks that are part of the system. We believe these data are sufficiently reliable for our purposes.

We relied on data from the sponsors to identify their overseas representatives, exchange participants, employers, and host companies. We met with 28 exchange participants to discuss their views of the programs. We reviewed data provided by the sponsors to identify exchange participants who were in their programs during our fieldwork and who were in nearby locations. We visited exchange participants in Boston, Massachusetts; Bolton Valley and Smugglers' Notch, Vermont; and Washington, D.C. We used the results of our interviews for illustrative purposes and not to generalize to all participants.

We also visited three overseas posts—Minsk, Belarus; Warsaw, Poland; and Dublin, Ireland—where we observed visa interviews of exchange program applicants, reviewed post validation studies and other exchange program guidance, and discussed the exchange programs with consular and embassy officials. We also met with the local representatives of the U.S. sponsors to discuss how they recruited and screened Summer Work Travel and Trainee applicants. We selected Belarus because of consular concerns about overstays and the integrity of the overseas partners; Belarus is also one of the top sending countries for the Summer Work Travel Program. We selected Poland because it was the number one sending country for Summer Work Travel participants and among the top 10 sending posts for trainees in 2004.
We selected Ireland because it is also among the top 10 sending countries for both the Summer Work Travel and the Trainee programs and because of the decreased participation in the Summer Work Travel Program after State's ruling concerning the eligibility of certain students. We used the results of our fieldwork for illustrative purposes and not to generalize to all posts.

To examine the potential risks of the programs, we reviewed past GAO and OIG reports and examined post validation studies, cables, and other documents. We followed up on information we obtained in Belarus on asylum seekers with J-1 visas with (1) the Bureau of Consular Affairs' Office of Consular Fraud Prevention; (2) DHS' Citizenship and Immigration Services, Asylum Division; and (3) DHS' Fraud Detection National Security Unit. We also requested data on overstays from the National Security Investigations Division of DHS' Office of Investigation, U.S. Immigration and Customs Enforcement. The overstay data that DHS provided were primarily obtained from DHS' arrival and departure information system. Past GAO reports have discussed the weaknesses in the U.S. government's methods of collecting overstay data. However, despite the lack of precision of both the DHS and post data on overstays, these data are sufficiently reliable to indicate that overstays are a cause for concern.

We conducted our review from November 2004 to August 2005 in accordance with generally accepted government auditing standards.

In addition to the individual named above, John Brummet, Assistant Director; Joseph Brown; Joseph Carney; Maria Oliver; and La Verne Tharpes made key contributions to this report.
Exchange programs, which bring over 280,000 foreign visitors to the United States annually, are widely recognized as an effective way to expose citizens of other countries to the American people and culture. Past reviews by GAO and the Department of State (State) Office of Inspector General have reported that some exchange visitors have participated in unauthorized activities and cited problems in the management and oversight of the programs. Strong management oversight is needed to ensure that the programs operate as intended and are not abused. This report examines how State manages the Summer Work Travel and the Trainee programs to ensure that only authorized activities are carried out under the programs and identifies potential risks of the programs and the data available to assess these risks. State has not exerted sufficient management oversight of the Summer Work Travel and the Trainee programs to guard against abuse of the programs and has been slow to address program deficiencies. State attempts to ensure compliance with program regulations through its processes of approving and annually reviewing the organizations that sponsor exchange visitors. These processes, however, are not sufficient to ensure that visitors participate only in authorized activities because the procedures consist primarily of document reviews, and State rarely visits the sponsors or host employers of the exchange visitors to make sure they are following the rules or to investigate complaints. Moreover, some sponsors have asserted that the program regulations need updating. Further, State officials believe that the sanctions provided in the regulations are difficult to enforce. State acknowledged that it has been slow to address identified deficiencies and update the regulations, but has indicated that it is beginning to revise the regulations and is establishing a unit to monitor exchange activities. However, funding for the unit has not been secured. A number of potential risks are associated with the programs, including the risk that exchange visitors might use the programs to remain in the United States beyond their authorized time. There is also the potential for the Trainee Program to be misused as an employment program. Further, negative experiences for exchange participants could undermine the purpose of the programs. However, State has little data to measure whether such risks to the programs are significant. As a result, State cannot determine if additional management actions are needed to mitigate the risks.
NRC is an independent agency established by the Energy Reorganization Act of 1974 to regulate the civilian use of nuclear materials. NRC is headed by a five-member commission, with one commission member designated by the President to serve as chairman and official spokesperson. The commission as a whole formulates policies and regulations governing nuclear reactor and materials safety and security, issues orders to licensees, and adjudicates legal matters brought before it. Security for commercial nuclear power plants is addressed by NRC’s Office of Nuclear Security and Incident Response. This office develops policy on security at nuclear facilities and is the agency’s security interface with DHS, the intelligence and law enforcement communities, DOE, and other agencies. Within this office, the Threat Assessment Section assesses security threats involving NRC-licensed activities and develops recommendations regarding the DBT for the commission’s consideration. The DBT for radiological sabotage applied to nuclear power plants identifies the terrorist capabilities (or “adversary characteristics”) that sites are required to defend against. The adversary characteristics generally describe the components of a ground assault and include the number of attackers; the size of a vehicle bomb; and the weapons, equipment, and tactics that could be used in an attack. Other threats in the DBT include a waterborne assault and the threat of an insider. The DBT does not include the threat of an airborne attack. However, according to NRC officials, NRC regulations do require nuclear power plants to implement readily available measures to mitigate the potential consequences of such an attack. In its publicly available regulations governing the licensing of nuclear power plants, NRC has issued a general description of the DBT—for example, requiring sites to defend against an attack by several well-trained and dedicated individuals armed with hand-carried weapons and equipment and assisted by a knowledgeable insider who participates in a passive or active role. In April 2003, NRC issued orders to nuclear power plant licensees containing a more detailed description of the revised DBT, which NRC considers safeguards information. NRC requires nuclear power plants to have and implement a security plan that describes their strategy for defending against an attack having the characteristics of the DBT. Nuclear power plant sites are responsible for installing barriers and intrusion detection equipment, hiring security officers, and implementing other measures in accordance with their security plans. NRC then inspects the sites’ compliance with the plans and ability to defend against the DBT. After revising the DBT, NRC required sites to submit new plans by April 29, 2004, for NRC’s review and approval and to implement the security described in their new plans by October 29, 2004.
The plans contain information about the sites, including a description of sites’ physical layout, such as barriers and buildings, and a description of any environmental features important to the effective coordination of response operations; the minimum number of security officers defending the vital areas (the areas containing equipment needed to ensure the safe shutdown of the reactor and protection of spent fuel pools); and a description of the protective strategy that sites will implement in response to an attack or threat defined in the DBT, such as an external land-based assault, a vehicle bomb, a waterborne assault, or an insider threat. NRC’s performance-based means for testing the effectiveness of nuclear power plant security programs is the force-on-force inspection. These inspections, which consist of 350 hours of on-site inspection activity, are intended to demonstrate how well a nuclear power plant might defend against a real-life threat. In a force-on-force inspection, a professional team of adversaries attempts to reach specific “target sets” within a nuclear power plant that would allow them to commit radiological sabotage. These target sets represent the minimum pieces of equipment or infrastructure an attacker would need to destroy or disable to commit radiological sabotage resulting in an elevated release of radioactive material to the environment. Force-on-force exercises do not directly test the response of outside agencies, such as local law enforcement. However, sites simulate actions they would take to notify local law enforcement and other outside agencies. In addition, according to NRC officials, sites routinely conduct liaison activity with local law enforcement and emergency response agencies. While the adversary characteristics terrorists might use in an actual attack are uncertain, the DBT provides parameters for the conduct of force-on-force inspections. For example, the mock adversary force is constrained to using the specific number of attackers, amount of explosives, and weapons and tactics included in the DBT. According to NRC officials, the commission recently approved an option to conduct force-on-force inspections using adversary characteristics that go beyond those in the DBT. This option would be available on a voluntary basis to nuclear power plant licensees that are clearly successful in defending against the first two mock attacks of the force-on-force inspection, which typically includes three mock exercises over 3 days. NRC also conducts baseline inspections at nuclear power plants to determine that licensees have established measures to deter, detect, and protect against the DBT for radiological sabotage. Security inspectors in NRC’s four regional offices conduct the inspections. NRC’s policy is to conduct a baseline inspection at each site every year, with the complete range of baseline inspection activities conducted over a 3-year cycle. One element of a baseline inspection is evaluating the site’s protective strategy—for example, by conducting tabletop drills (simulated attacks using a model of the site) to gain a better understanding of the strategy. Inspectors also examine areas such as officer training, fitness for duty, positioning and operational readiness of multiple physical and technical security components, and the controls the licensee has in place to ensure that unauthorized personnel do not gain access to the protected area.
According to NRC officials, agency inspectors spend a total of 136 hours annually at a site for a baseline inspection, and the 3-year baseline inspection cycle involves more than 400 hours of inspection activity. For both force-on-force and baseline inspections, licensees are responsible for immediately correcting or compensating for any deficiency for which NRC concludes that security is not in accordance with the approved security plans or other security orders. According to its inspection manual, NRC has 45 days to send a licensee a report on the results of an inspection, including any findings and the licensee’s corrective actions. DHS has overall responsibility among federal agencies for assessing the vulnerability of the nation’s critical infrastructure to terrorist attacks and coordinating efforts to enhance security. Nuclear power plants represent one sector of the critical infrastructure. Other sectors include agriculture, chemical facilities, and transportation systems. In 2005, DHS began a series of visits to nuclear power plant sites to conduct comprehensive security reviews in order to assess the risks and consequences of various types of events and to provide better information on the most effective allocation of federal resources to improve security at critical infrastructure sites. DHS conducts the comprehensive reviews with relevant agencies such as the FBI and, in the case of nuclear power plants, NRC. According to DHS, the comprehensive reviews for nuclear power plants focus primarily on the security of the sites “outside the fence”—the aspects of security outside the responsibility and control of the nuclear power plant licensees. DHS relies on NRC to regulate the security of nuclear power plants “inside the fence.” DHS officials told us that the nuclear power sector is one of the few critical infrastructure sectors in which the federal government has the authority to regulate the security of sites. According to DHS, as of December 2005, the agency had completed 14 comprehensive reviews at nuclear power plant sites. The process that NRC used to revise its DBT for nuclear power plants was generally logical and well defined. In particular, the process included an analysis of intelligence and law enforcement information on terrorist capabilities and consultation with DOE, which also has a DBT for its facilities that are potential targets for terrorists seeking to cause radiological sabotage. Using this process, NRC produced a revised DBT that generally corresponded to the original recommendations of NRC’s threat assessment staff. However, certain elements of the revised DBT, such as the weapons that attackers could use against a plant, do not correspond to the staff’s original recommendations for two reasons. First, the NRC threat assessment staff charged with reviewing intelligence information made changes to its recommendations after receiving feedback from stakeholders, including the nuclear industry. Given the high degree of judgment involved in assessing threat information, the process NRC used to obtain stakeholder feedback created the appearance that changes were made based on industry views rather than an assessment of the terrorist threat. Second, the NRC commissioners made changes to the staff’s recommendations on the basis of what is reasonable for a private security force to defend against but did not identify explicit criteria for such policy judgments.
NRC made its 2003 revisions to the DBT for nuclear power plants as part of a process that the agency has used since first issuing the DBT in the late 1970s. In this process, NRC staff trained in threat assessment use reports and secure databases provided by the intelligence community to monitor information on terrorist activities worldwide. The staff analyze this information both to identify specific references to nuclear power plants and to determine the capabilities that terrorists have acquired and how they might use those capabilities to attack nuclear power plants in the United States. The staff normally summarize applicable intelligence information and any recommendations for changes to the DBT in semiannual reports to the NRC commissioners on the threat environment. In addition, the threat assessment staff promptly report changes in the threat to the commissioners and coordinate with the intelligence agencies to help ensure that the staff are aware of all pertinent intelligence information. In 1999, the NRC staff began developing a set of criteria—the adversary characteristics screening process—to decide whether to recommend particular adversary characteristics for inclusion in the DBT and to enhance the predictability and consistency of their recommendations. According to the NRC staff, the adversary characteristics screening process, which they used to develop the April 2003 revised DBT, begins with a thorough review of intelligence reports and application of initial screening criteria to evaluate adversary characteristics. The staff use the initial screening criteria to exclude from further consideration certain adversary characteristics, such as those that are already in the DBT or those that would more likely be used by a foreign military than by a terrorist group. For adversary characteristics that pass the initial round of screening, the threat assessment staff apply additional screening factors. Examples of such factors include the following: The type of terrorist group that demonstrated the characteristic. For example, the staff consider whether an adversary characteristic has been demonstrated by transnational terrorist groups or terrorist groups operating in the United States, or by terrorist groups that operate only in foreign countries. The location and level of social stability where the characteristic was demonstrated. For example, the staff consider whether the adversary characteristic has been demonstrated in North America and other countries with a high level of social stability or in countries with an active insurgency or civil war. NRC considers that terrorists planning to attack a nuclear power plant in the United States would face greater operational security and logistical challenges than terrorists operating in countries where there is an internal insurgency. The frequency with which the characteristic has been demonstrated and its availability. For example, the staff consider the availability of an adversary characteristic on the open or the black market. The type of target the characteristic has been used against, the tactical use of the characteristic, and the motive behind its use. For example, the staff consider whether the adversary characteristic has been used against a target with a level of security similar to that at nuclear power plants or against targets with less security, such as the October 2002 attack on a Moscow theater by Chechen rebels.
Depending on the results of this analysis, the threat assessment staff may interact with intelligence and other agencies to obtain additional information and insights about the adversary characteristics. Finally, on the basis of their analysis and interaction with other agencies, the staff decide whether to recommend that the commission include the adversary characteristics in the DBT for nuclear power plants. NRC’s Office of Nuclear Security and Incident Response, which includes the Threat Assessment Section, reviews and endorses the threat assessment staff’s analysis and recommendations. Since issuing the revised DBT in April 2003, NRC has continued to use the adversary characteristics screening process to consider additional changes—for example, to consider new intelligence information on weapons not included in the revised DBT. In addition, the Energy Policy Act of 2005 directed NRC to undertake a rulemaking to revise the DBT for nuclear power plants. While the detailed description of the April 2003 DBT is safeguards information and thus has not been made available to the public, the rulemaking, which is under way, presents the DBT in less detail so that it can be made available to the public and includes a notice and opportunity for public comment. The act directed NRC to consider the events of September 11, 2001; the potential for an attack on facilities by multiple, coordinated teams of a large number of individuals; the potential for suicide attacks; and other factors. The April 2003 DBT already includes some (but not all) of the adversary characteristics listed in the Energy Policy Act, such as attackers who are willing to commit suicide, the potential for a waterborne assault, and the use of explosive devices. NRC officials told us that, as part of the current rulemaking, they would consider all of the factors listed in the Energy Policy Act, including those not currently in the DBT. Terrorist attacks have generally occurred outside the United States, and intelligence information specific to nuclear power plants is very limited. As a result, one of the NRC threat assessment staff’s major challenges has been to decide how to apply this limited information to nuclear power plants in the United States. For example, one of the key elements in the revised DBT, the number of attackers, is based on NRC’s analysis of the group size of previous terrorist attacks worldwide. According to NRC threat assessment staff, the number of attackers in the revised DBT falls within the range of most known terrorist cells worldwide. Furthermore, the threat assessment staff told us they considered but decided against an even larger number of attackers in the draft DBT because a larger cell would face an increased potential of detection before it could successfully carry out a terrorist attack in the United States. The staff also concluded that multiple cells along the lines of the September 11, 2001, attacks would not necessarily target a single nuclear power plant. Intelligence and law enforcement officials we spoke with did not have information contradicting NRC’s interpretation regarding the number of attackers (or other parts of the NRC DBT) but did point to the uncertainty regarding the size of potential attacks and the relative lack of intelligence on the terrorist threat to nuclear power plants. NRC staff recommendations regarding other adversary characteristics also reflected the staff’s interpretation of intelligence information. 
For example, the staff considered increasing the vehicle bomb in the revised DBT to a range of sizes and ultimately recommended a size that was based on an analysis of previous terrorist attacks using vehicle bombs. One of the largest vehicle bombs ever detonated was used in the 1996 bombing of a U.S. military housing complex in Saudi Arabia, and the maximum size of a vehicle bomb used in the United States—the 1995 bombing of the federal building in Oklahoma City—consisted of the equivalent of 4,800 pounds of TNT. Additional examples of NRC’s interpretation of intelligence information and recommendations for the revised DBT included the following: The threat assessment staff recommended a maximum weight of equipment and explosives per attacker. The staff based this weight on the experience and professional knowledge of NRC staff and contractors with security backgrounds. In developing these limits, the staff evaluated the degree to which attackers would rely on speed of movement rather than be encumbered by large amounts of equipment. They also considered that a relatively small amount of explosives could cause a large amount of damage. The NRC staff recommended including a waterborne assault with a bomb size based on available intelligence on waterborne terrorist bombs. In addition, according to NRC, watercraft found near nuclear power plants would generally be constrained in terms of payload. Furthermore, the bomb size recommended by the staff was considered sufficient to significantly damage a nuclear power plant’s water intake structure. The staff considered that a larger bomb would add little to the potential damage to the intake structure. The NRC staff supported the inclusion of equipment that is readily available through commercial sources but recommended against weapons with limited use by terrorists. The staff recommended against including infiltration into a nuclear power plant by air because their review of terrorist attacks did not demonstrate significant use of such tactics against a fixed site. Table 1 summarizes, by adversary characteristic, the key changes to the DBT recommended by the NRC staff and the final changes approved by the NRC commissioners. According to the NRC staff’s report on recommended changes to the DBT for nuclear power plants, NRC has a long-standing commitment to work closely with DOE in an effort to maintain comparable protection for comparable facilities. Thus, as part of the process for revising the DBT for nuclear power plants, NRC monitored and exchanged information with DOE, which also has a DBT for comparable facilities that process or store radiological materials and are, therefore, potential targets for radiological sabotage. However, while certain aspects of the two agencies’ DBTs for radiological sabotage are similar, NRC generally established less rigorous requirements than DOE—for example, with regard to the types of equipment that could be used in an attack. Additional information regarding key adversary characteristics found in both agencies’ DBTs includes the following: Number of attackers. Both DOE and NRC based the number of attackers on intelligence on the size of terrorist cells. According to DOE officials, it is challenging to find intelligence on terrorist activities that can be considered equivalent to a ground assault on a fixed facility such as a nuclear power plant or DOE site. However, DOE officials said they used intelligence similar to NRC’s to derive the number of attackers. Vehicle bomb.
DOE and NRC officials provided us with similar analyses of intelligence information on previous terrorist attacks using vehicle bombs. In particular, DOE and NRC officials told us that most vehicle bombs used in terrorist attacks are smaller than the size of the vehicle bomb in NRC’s revised DBT. DOE officials also said that site-specific characteristics affect the size of the vehicle bomb that sites are capable of defending against. Weapons. The DOE DBT includes a number of weapons not included in the NRC DBT. Inclusion of such weapons in the NRC DBT for nuclear power plants would have required plants to take substantial additional security measures. Furthermore, DOE included other capabilities in its DBT that are not included in the NRC DBT. As discussed below, NRC staff considered some of the weapons in DOE’s DBT for inclusion in the DBT for nuclear power plants but removed them while drafting the DBT. DOE established an even more stringent DBT for its sites that store nuclear weapons (or material that could be used in a nuclear weapon). The security objective for these sites is to prevent the theft or detonation of a nuclear weapon. In accordance with its graded approach, which provides a higher level of protection to sites where a terrorist attack would have greater potential consequences to public health and safety, DOE decided on a more stringent DBT for nuclear weapons facilities than for sites with the potential for radiological sabotage. According to DOE officials, the consequences of theft or detonation of a nuclear weapon would be “orders of magnitude” greater than radiological sabotage at a DOE site or nuclear power plant. Consistent with DOE’s graded approach, NRC officials told us they do not consider comparisons between the DOE DBT for nuclear weapons facilities and the NRC DBT for nuclear power plants to be valid. NRC considers that the potential consequences of the theft of material that could be used in a nuclear weapon could be much greater than radiological sabotage at a nuclear power plant. Furthermore, according to NRC officials, terrorists seeking to steal or detonate a nuclear weapon would require greater capabilities to accomplish their objectives than terrorists seeking to cause radiological sabotage. For example, theft of a nuclear weapon (or material that could be used in a weapon) would require terrorists to defeat a site’s security systems both when entering and when leaving the site. In contrast, attackers willing to commit suicide in the process of causing the release of radiological material from a nuclear power plant would have to overcome security to enter a site and reach a target set but would not have to leave the site. Like DOE, NRC uses a graded approach to security, and, therefore, the NRC DBT for NRC-licensed facilities that store or process material that could be used in a nuclear weapon is more stringent than the NRC DBT for nuclear power plants. NRC staff sent a draft DBT to stakeholders in January 2003, held a series of meetings with them to obtain their comments, and received written comments. In addition to nuclear power plant licensees and NEI, which represents the nuclear industry, these stakeholders included other federal agencies and government authorities in affected states. NRC specifically sought and received feedback from the nuclear industry on what is reasonable for a private security force to defend against and on the cost of and time frame for implementing security measures to defend against specific adversary characteristics.
During the same period that the threat assessment staff were receiving industry and other stakeholder feedback, they continued to analyze intelligence information and modify the draft DBT. In April 2003, NRC staff submitted their final draft DBT to the commissioners for their review and approval, together with a summary of stakeholder comments. In its written comments on the January 2003 draft DBT, NEI objected to the size of the vehicle bomb, the inclusion of certain weapons, and the inclusion of an active violent insider. The NRC staff’s draft DBT submitted to the commissioners reflected some (but not all) of NEI’s objections. The reasons for NEI’s objections to key adversary characteristics and changes to the NRC threat assessment staff’s recommendations included the following: Vehicle bomb. NEI objected to the vehicle bomb in the draft DBT because of its assessment of (1) the low probability of a vehicle bomb of the size proposed by NRC, (2) the likelihood that federal authorities or local law enforcement would detect a large vehicle bomb, and (3) the inability of some sites to protect against the size of the vehicle bomb proposed by NRC because of insufficient land for installation of vehicle barrier systems at a necessary distance. Instead, NEI agreed that it would be reasonable to protect against a smaller vehicle bomb. In its recommendations to the commissioners, the NRC staff subsequently reduced the size of the vehicle bomb to the amount proposed by NEI. The staff’s stated reason for agreeing with NEI was that vehicle bombs as large as the one included in the draft provided to stakeholders had rarely been used in previous terrorist attacks and would not be reasonable or practical to include in the DBT. Weapons. NEI argued against the inclusion of a number of weapons. For example, NEI wrote that (1) one particular weapon recommended by the NRC staff would render the ballistic shielding used at nuclear power plants obsolete, and (2) another proposed weapon would initially cost $1 million to $7 million per site to defend against, with annual recurring costs of up to $2 million per site. Furthermore, NEI argued that these weapons (as well as the vehicle bomb size initially proposed by the NRC staff) would be indicative of an enemy of the United States, which sites are not required to protect against under NRC regulations. In the final draft submitted to the NRC commissioners, the NRC staff removed a number of weapons NEI had objected to. The staff reasoned that the weapons had rarely been used in armed assaults, or had been used infrequently in terrorist assaults despite their wide availability and use by violent criminals in the United States. NRC staff did not remove one particular weapon NEI had objected to, which, according to NRC’s analysis, has been a staple in the terrorist arsenal since the 1970s and has been used extensively worldwide. (As discussed below, the NRC commissioners later voted to remove this particular weapon.) Inside assistance. NEI wrote that the nuclear power industry had taken a number of steps to reduce the likelihood of an active violent insider—for example, by tightening the process for granting employees unescorted access to nuclear power plants. Furthermore, NEI wrote that the industry had been unable to identify cost-effective solutions to defend against an active violent insider, and that costs would range from $2 million to $8 million per site for equipment and $5 million per site per year for additional personnel.
Despite these objections, the NRC staff recommended the inclusion of an active violent insider in the final draft of the DBT. (The NRC commissioners later allowed nuclear power plants to reduce the likelihood of an active violent insider through a human reliability program.) The chief of NRC’s threat assessment staff told us that NRC did not make changes to the draft DBT based solely on industry views. Rather, according to NRC officials, the changes were made based on multiple internal analyses and discussions among the threat assessment staff and higher levels of review within NRC and its Office of Nuclear Security and Incident Response, which includes the Threat Assessment Section. Nevertheless, in our view, the process NRC used to obtain feedback from stakeholders, including the nuclear industry, created the opportunity for, and appearance of, industry influence on the threat assessment regarding the characteristics of an attack. When we raised this issue with NRC officials, they told us that under normal circumstances the threat assessment process is initially undertaken using intelligence and law enforcement information, with other stakeholders subsequently having an opportunity to provide feedback—for example, regarding the cost of implementing security measures in response to proposed changes to the DBT. Furthermore, NRC threat assessment staff and intelligence agency officials told us they support the separation of intelligence analysis from other responsibilities, such as obtaining stakeholder feedback on changes to the DBT, in order to insulate analysis of intelligence from other considerations. However, according to NRC, the agency made a deliberate decision as part of the process for revising the DBT in 2003 to have the threat assessment staff analyze intelligence information and obtain stakeholder feedback simultaneously, rather than sequentially, in order to accelerate the process in response to the increase in the terrorist threat. NRC officials said that in considering future changes to the DBT, NRC plans to ensure the initial separation of intelligence analysis from interaction with stakeholders. The NRC staff provided the commissioners with a number of documents to consider in making the final decision on changes to the DBT. These included, but were not limited to, two assessments in the fall of 2002 on the terrorist threat to nuclear power plants (one specifically on the potential use of vehicle bombs) and a final paper in April 2003 with the staff recommendations for revisions to the DBT. The April 2003 document also included a summary of comments on the draft DBT received from the nuclear industry and other federal and state agencies; a summary of NEI’s estimates of the cost of and time frame for implementing security measures to address specific changes to the DBT; and an updated assessment of the terrorist threat to nuclear power plants. The NRC commissioners told us they also had direct contacts with intelligence agencies that provided them with information on the terrorist threat. The commissioners made the final decision on changes to the DBT by majority vote. While the commission largely supported the NRC staff’s recommendations for changes to the DBT, it also made some significant changes that reflected policy judgments.
Specifically, the commissioners considered whether any of the recommended changes to the DBT constituted characteristics representative of an enemy of the United States, which sites are not required to protect against under NRC regulations. In approving the revised DBT, the commission stated that nuclear power plants’ civilian security forces cannot reasonably be expected to defend against all threats, and that defense against certain threats (such as an airborne attack) is the primary responsibility of the federal government, in coordination with state and local law enforcement officials. In connection with this position, the commission directed NRC’s Office of General Counsel to prepare a paper for commission approval articulating the factors to be considered in determining whether particular characteristics of an attack constitute an enemy of the United States. (Officials from NRC’s Office of General Counsel told us they prepared a document with an analysis of this issue for the commission, but that the document was not a decision paper for approval by the commissioners.) We recognize that consideration of issues such as what is reasonable for a private security force to defend against is an appropriate role of the commission in approving changes to the DBT. However, in approving the revised DBT, the commission did not identify explicit criteria for determining whether specific adversary characteristics constitute an enemy of the United States or criteria for what is reasonable for a private security force to defend against. For example, the commission did not define whether the criteria include the cost for nuclear power plants to defend against an adversary characteristic or the efforts of local, state, and federal agencies to address particular threats. The lack of such criteria can reduce the transparency of commission decisions to make changes to the threat assessment staff’s recommendations. NRC officials said that detailed criteria on what is reasonable for a private guard force would reduce the commissioners’ discretion in approving changes to the DBT. Furthermore, in NRC’s view, the basis for the commission’s policy decisions and direction to the NRC staff regarding the DBT is sufficiently articulated in the commission’s voting record, in which individual commissioners provided the rationale for their votes, and in the related staff requirements memorandum, which documented the commission’s decisions. As indicated in table 1, the significant changes the commission made to the NRC staff’s recommendations included removal of certain weapons, a decrease in the maximum amount of weight carried by the attackers, and mitigation of an active insider through a human reliability program. In other cases, such as the size of the vehicle bomb, the commission supported the recommendations of the NRC staff. Based on our review of the commissioners’ voting records, the commission’s decisions on key aspects of the DBT included the following: Vehicle bomb. A majority of commissioners voted to increase the maximum vehicle bomb to the size recommended by the NRC staff. However, one commissioner supported a larger vehicle bomb that the NRC staff had included in a previous draft of the DBT. The commissioner recognized that some sites would not have sufficient property to install vehicle barrier systems far enough from the plants to protect against the larger vehicle bomb and suggested NRC could provide such sites with an exemption and require them to protect against a smaller vehicle bomb. Weapons.
The commission decided to remove two weapons the NRC staff had recommended for inclusion in the revised DBT. As part of this decision, the commission directed the staff to conduct an in-depth analysis of the additional defensive capabilities, changes to sites’ protective strategies, and costs associated with protecting against one of the weapons. Removal of weapons from the revised DBT was significant because of the strength of the NRC staff’s intelligence analysis supporting their inclusion. For example, in the April 2003 report to the commissioners, the NRC staff reported that while one such weapon had not been used in the United States, it had been found in weapons caches in the United States. Similarly, the staff noted the use of the other weapon in captured terrorist training videos and its ready availability. The document summarizing the commission’s changes to the proposed DBT did not provide a reason for excluding these weapons. However, in written comments on their votes, one commissioner identified these weapons as representative of an enemy of the United States; another commissioner agreed that threat data showed an increased possibility of the use of these weapons but stated that NRC staff needed to assess whether it would be reasonable for a private security force to defend against such weapons. One of the commissioners supported inclusion of these weapons in the DBT, as well as other weapons the staff had not recommended, but nevertheless told us there was more agreement than disagreement among the commissioners about what weapons should be included. The same commissioner told us he supported inclusion of one of the weapons because he considered the means for defending against it to be affordable. Weight of equipment and explosives. In voting to decrease the maximum weight of equipment, weapons, and explosives (such as grenades) per attacker in the final DBT, three of the commissioners questioned the weight that an attacker could be expected to carry. In their written comments, the three commissioners indicated that the staff’s recommendation regarding carry weight would require further study—for example, to determine whether the greater amount of weight could reduce the capability of the attack force by reducing individual attackers’ mobility. Inside assistance. The commission added language to the DBT stating that a human reliability program for monitoring employees at the sites could reduce the likelihood of an active insider. To qualify, the sites’ human reliability program would have to include background checks, substance abuse testing, psychological evaluations, annual supervisory review, and periodic background reinvestigations. The commissioners told us they made this decision based, in part, on the long-standing assumption by NRC that a human reliability program reduces the likelihood of an active insider. The commissioners also told us that other factors, such as increased awareness about the potential for an attack in the communities where nuclear power plants are located, would reduce the likelihood of an active insider. In addition to making changes to specific elements of the DBT for nuclear power plants, the commission provided overall policy direction on NRC’s oversight of security of the sites.
In particular, recognizing that an attack on a site could exceed the characteristics identified in the DBT, the commission directed the staff to continue coordinating with DHS and other federal and state authorities to help assure the security of nuclear power plants. For example, the commissioners told us that NRC works with the Federal Aviation Administration to address the threat of air strikes against a site. Similarly, NRC supports and participates in DHS comprehensive security reviews of nuclear power plant sites. Other significant policy direction included the following: The commission affirmed the NRC staff’s operating assumption that there may be no specific advance warning of an attack on a nuclear power plant but indicated that a general warning of a potential attack may be provided. The commission directed the staff to continue providing the commissioners with assessments of specific adversary characteristics, including those not in the revised DBT, and to provide additional recommendations as part of the semiannual review of threats to nuclear power plants. However, the commission also indicated its expectation that there would be a period of “regulatory stability” (a period with no major changes to security regulations) in order to allow sites time to adjust to the changes already made to the DBT and other security requirements. The commission supported the clarification that sites are not required to “defeat” an attack, because such a requirement could compel sites’ security forces to employ offensive tactics beyond what is allowed under law for private security forces. Rather, the commission supported the requirement that sites protect against radiological sabotage by preventing the destruction or disablement of vital equipment. The four nuclear power plant sites we visited made substantial changes after the September 11, 2001, attacks and in response to the revised DBT, including measures to detect, delay, and respond to the increased number of attackers and to address the increased vehicle bomb size. According to NRC, other sites took comparable actions to defend against the revised DBT. Despite the industry’s considerable efforts, the changes have not been without problems, and licensees can continue to make improvements. For example, NRC baseline and force-on-force inspections have found that the security changes have not always met NRC’s requirements. The four sites we visited all implemented a “defense-in-depth” strategy, with multiple layers of security systems that attackers would have to defeat before reaching vital areas or equipment and destroying or disabling systems sufficient to cause an elevated release of radiation off site. The sites varied in how they implemented these measures, primarily depending on site-specific characteristics such as topography and on the degree to which they planned to interdict attackers within the owner-controlled area and far from the sites’ vital area, as opposed to inside the protected area but before they could reach the vital equipment. (See fig. 1 for a diagram of the areas commonly found at nuclear power plants.) NRC officials told us that licensees have the freedom to design their protective strategies to accommodate site-specific conditions, so long as the strategies satisfy NRC requirements and prove successful in a force-on-force inspection.
The sites we visited implemented security measures corresponding to the three elements generally recognized as constituting an effective security system for defending fixed sites. These include early detection of an attack, sufficient delay for security officers to report to their defensive positions, and the capability of the security force to respond to the attack: Detection. At all four sites, the owners installed additional cameras throughout different areas of the sites and instituted random patrols in the owner-controlled areas. The owner-controlled areas generally contain undeveloped property and administrative buildings that would not be targets for terrorists seeking to commit radiological sabotage. Nevertheless, by upgrading security in this area, the sites increased the chance that they would detect attackers before the attackers would be able to approach or infiltrate the protected area, where they might be able to gain access to vital equipment. Patrols can be used to accommodate areas of the sites that are remote or where the view of cameras is obstructed, while cameras provide for a safer inspection of questionable activities than sending a security officer. Delay. The sites we visited installed a variety of devices designed to delay attackers and allow security officers more time to respond to their posts and fire upon attackers. The sites generally installed these delay devices throughout the protected areas so that attackers would have to defeat multiple security systems before reaching vital areas or equipment. For example, the sites installed fences outside the buildings housing the reactors and other vital equipment and blocked off entrances to make it more difficult for attackers to enter the buildings. Similarly, the sites installed a variety of delay devices within the reactor and other buildings, some of which are permanent and others that security officers would deploy in the event of an attack. Response. Each of the four sites we visited constructed bullet-resistant structures at various locations in the protected area or within buildings, increased the minimum number of security officers defending the sites at all times, and expanded the amount of training provided to them. Security officers are stationed in the bullet-resistant structures or move to them during an attack, at which point they can fire at attackers through gun ports while not exposing themselves to the attackers’ gunfire. (See fig. 2 for an example of a bullet-resistant structure.) Having more security officers on duty at any given time means that more individuals can respond to more locations in the event of an attack. It can also increase the sites’ ability to detect attackers by allowing more security officers to observe the owner-controlled area and monitor video cameras. Security managers at each site told us they also made changes to their training—for example, to train officers to use new security equipment or to comply with NRC’s training order, issued at the same time as the revised DBT. Moreover, each of the licensees told us they implemented measures to comply with NRC’s requirements limiting the number of hours security officers can work to 72 hours during a 7-day period. The majority of the security officers we interviewed told us that their training was adequate or had improved and that they generally did not experience fatigue on the job.
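The detect-delay-respond elements described above are often reasoned about as a simple timeline test: a layered defense is adequate only if the delay an attacker still faces after first being detected exceeds the time the security force needs to reach its defensive positions. The sketch below illustrates that test in Python. It is not drawn from NRC requirements, the sites’ security plans, or the inspection reports; the layer names and times are hypothetical assumptions chosen for illustration.

```python
# Illustrative timeline check for a layered "detect, delay, respond" strategy.
# All layer names and times below are hypothetical, not data from this report.
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    delay_seconds: float  # delay the layer imposes on an attacker
    detects: bool         # whether crossing the layer triggers an alarm

def delay_after_first_detection(layers: list[Layer]) -> float:
    """Sum only the delay an attacker faces after the first alarm sounds."""
    detected = False
    remaining = 0.0
    for layer in layers:  # layers listed in the order an attacker crosses them
        if detected:
            remaining += layer.delay_seconds
        detected = detected or layer.detects
    return remaining

# Hypothetical site: detection at the protected-area fence, delay inside.
layers = [
    Layer("owner-controlled area patrols", 0.0, detects=False),
    Layer("protected-area intrusion detection fence", 30.0, detects=True),
    Layer("hardened building entrances", 120.0, detects=False),
    Layer("internal delay devices", 90.0, detects=False),
]
response_seconds = 180.0  # time for officers to man bullet-resistant positions

margin = delay_after_first_detection(layers) - response_seconds
print(f"Delay margin after detection: {margin:+.0f} seconds")
```

Under these assumptions the margin is positive (+30 seconds), meaning responders can be in position before attackers reach vital equipment; moving detection deeper into the site or removing internal delay devices would turn the margin negative, which is the kind of gap a force-on-force exercise is designed to expose.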
Security managers at the four sites considered the layouts of their sites and the paths that attackers might use to reach vital equipment in deciding where to deploy these enhancements. As a result, the sites employed different protective strategies that primarily varied by the degree to which they implemented an external strategy designed to interdict attackers within the owner-controlled area, but far from the sites’ vital area, rather than an internal strategy designed to interdict attackers inside the protected area. For example, one site with a predominantly external strategy installed an intrusion detection system in the owner-controlled area. While NRC requires all sites to have an intrusion detection system at the perimeter of the protected area, security managers at this site decided to install a second intrusion detection system so that security officers would be able to identify intruders as soon as they cross into the owner-controlled area. The site was able to install such a system because of the large amount of open, unobstructed space in the owner-controlled area. Similarly, the protective strategy at another site focused on the ability of security officers to deny attackers access to the vital area buildings. The site uses cameras and patrols to detect attackers in the owner-controlled area and deploys security officers in bullet-resistant structures. From the structures, located on the roof and attached to the walls of the vital area buildings, security officers could fire upon attackers before they could enter the buildings. In contrast, security managers at the other two sites we visited described protective strategies that combined elements of an external strategy and an internal strategy. At both sites, the external strategy included bullet-resistant structures positioned so that security officers could fire on attackers attempting to enter vital area buildings. Other security officers are stationed inside the vital area buildings and would move to bullet-resistant structures within the buildings to interdict attackers who defeat the external security. At one of these sites in particular, security managers decided to implement a protective strategy that relied more heavily on interdicting attackers inside the protected area. The site uses elements of an external strategy, such as cameras and patrols for detecting attackers in the owner-controlled area, but in contrast to the sites described above, relies to a lesser extent on security officers to stop the attackers in the owner-controlled area. Instead, security managers told us they had implemented an internal protective strategy by identifying “choke points”—locations inside the protected area attackers would need to pass before reaching their targets—and installing bullet-resistant structures at the choke points where officers would be waiting to interdict the attackers. Security managers at the site also told us one of the reasons for implementing a more internal strategy was their desire to maintain radiation doses to security officers as low as is reasonably achievable. In particular, the internal strategy allowed the site to not install bullet-resistant structures on one side of the site, where security officers who would be stationed in the structures could receive elevated radiation doses.
In addition to the security enhancements we observed, security managers at each site described changes they plan to make as they continue to improve their protective strategies, such as adding fencing to block a path attackers might use to enter the protected area and installing a device at the entrance to the site that can detect explosives. Security managers at three of the sites we visited also told us the number of security officers on duty at any one shift exceeded the minimum number of security officers that NRC requires be dedicated to responding to attacks. (The fourth site maintained the minimum number of armed dedicated security officers.) According to NRC’s analysis, sites typically exceeded the minimum number of responders required by NRC. To protect against the increase in the vehicle bomb size, the licensees at the sites we visited designed comprehensive systems consisting of sturdy barriers to prevent a potential vehicle bomb from approaching the sites and to channel vehicles to entrances where security officers could search them for explosives and other prohibited items. Prior to increasing the maximum size of the vehicle bomb that sites must defend against, NRC required the sites to have a vehicle barrier system encircling the reactors and other vital equipment and set at a distance far enough from the plants to prevent a smaller vehicle bomb from damaging vital equipment and releasing radiation. After NRC increased the maximum size of the vehicle bomb in the revised DBT, plants installed a second vehicle barrier system at an even greater distance from the vital equipment, while also keeping the original vehicle barrier systems as a second layer of defense. At the sites we visited, the new vehicle barrier systems consisted of rows of large steel-reinforced concrete blocks, or (at one plant) large boulders weighing up to 7 tons in combination with piles of smaller rocks. (See fig. 3 for an illustration of a vehicle barrier system.) The vehicle barrier systems either completely encircled the plants (except for entrances manned by armed security officers) or formed a continuous barrier in combination with natural or manmade terrain features, such as bodies of water or trenches, that would prevent a vehicle from approaching the sites. Licensees at the four sites adapted their vehicle barrier systems to the unique conditions at each site. The vehicle barrier systems also shared many common features and generally consisted of a combination of the following basic elements: Vehicle searches. Generally, the security managers told us they implemented procedures to search vehicles at the entry point to the outer vehicle barrier systems. (NRC requires sites to search all vehicles capable of carrying more than a certain amount of TNT and to search a random sample of vehicles capable of carrying a smaller amount of explosives.) Examples of search procedures included visual examination of the compartments of vehicles and use of detection equipment to test for explosives. Security managers told us security officers would conduct a second search of all vehicles, regardless of size, at a second checkpoint where vehicles pass through the inner vehicle barrier system. During this search, security officers would look for weapons and other prohibited equipment in addition to any explosives. “Overwatches.” The sites stationed security officers in bullet-resistant structures, or “overwatches,” from which the officers could observe the vehicle searches and provide backup support in case of an attack.
Like the other bullet-resistant structures installed by the sites, these structures included gun ports for firing at attackers. “Active” vehicle barrier systems. These systems were installed in the roadways leading into the plants and were designed to block unauthorized vehicles from entering the site. They consisted of either steel plates that could be raised or lowered, or rolling gates. (See fig. 4 for an example of an active vehicle barrier system.) Security officers in multiple locations, such as alarm stations and overwatches, could activate the systems if security officers manning the vehicle entrances, who are more vulnerable to attack, were unable to do so. At two of the plants, the barriers were always in the closed position and required two security officers at separate locations to open them. At the other two plants, the barriers were generally in the open position but could be closed by a single security officer to prevent unauthorized entry. In some cases, the new vehicle barrier systems at the sites we visited appeared to exceed the requirements necessary to protect against the revised DBT. For example, security managers at one site told us that the vehicle barrier system was wider than necessary in order to protect against the vehicle bomb. Furthermore, in at least some areas of the sites, the new vehicle barrier systems were farther from the reactors and other vital equipment than necessary to protect the sites against the size of the vehicle bomb in the revised DBT. In particular, security managers at the site with a more external protective strategy decided to take advantage of the large amount of open, unobstructed property surrounding the site to create a large zone between the vehicle barrier system and the site buildings. Although we generally toured the complete perimeter of the vehicle barrier systems at the four sites, we did not calculate how far the barrier systems were installed from the vital equipment, test the equipment performance, or determine how well security officers conducted vehicle searches. Like other aspects of security at the plants, these factors would affect how well the vehicle barrier systems would work in the event of a terrorist attack. In addition, the sites implemented other related measures, such as winding lanes designed to cause vehicles to slow down as they approach entrances; emergency exits to facilitate evacuation of employees from the plant; devices to block unauthorized trains from reaching the plant; parking lots outside the vehicle barrier system for use during an outage to limit the number of additional vehicles entering the vehicle barrier systems and requiring searches; and, at one site, receiving deliveries at an off-site warehouse to limit the number of trucks entering the site. As of November 1, 2005, NRC had completed force-on-force inspections—testing sites’ ability to defend against the revised DBT—at 20 sites. NRC officials told us, and our review of baseline and force-on-force inspection reports indicated, that plants have generally complied with their security plans and other NRC security requirements and have generally performed well during force-on-force inspections. However, we also noted from the reports, as well as from our own observations, that sites have encountered a range of problems in meeting NRC security requirements, including a force-on-force inspection in which the site had problems demonstrating it could defend against the revised DBT.
(According to NRC officials, inspectors do not leave the site at which a problem is identified until it is corrected or until sufficient compensatory measures are put in place.) Twelve of the 18 baseline inspection reports and 4 of the 9 force-on-force inspection reports we reviewed identified problems or items needing correction. These findings, such as failures in the intrusion detection system at one site and the omission of certain elements of training at several sites, demonstrate that NRC’s baseline and force-on-force inspections are important for identifying problems that need correction. (See app. II for a discussion of the findings in the force-on-force and baseline inspection reports we reviewed.)

During a force-on-force inspection at one site, we observed that although the security measures appeared impressive, the site’s ability to defend against the DBT was at best questionable. The site’s security measures were similar to those we observed at other sites, such as an intrusion detection system equipped with cameras for assessing alarms, bullet-resistant structures in both the protected and vital areas, and a vehicle barrier system consisting of large concrete blocks and large boulders. However, some or all of the attackers were able to enter the protected area in each of the three exercise scenarios. Furthermore, attackers reached the targets in two of the scenarios, although the outcomes of those scenarios were called into question by uncertainties regarding whether the attackers had actually been neutralized before reaching the targets. NRC, in turn, raised concerns about the site’s lack of “defense in depth” and concluded that it could not validate the licensee’s protective strategy in the two scenarios. NRC noted that security officers’ ability to interdict attackers was impaired by problems with the site’s detection and assessment, and that, in two of the scenarios, security officers left the external bullet-resistant structures to which they were assigned and transitioned to internal positions once they could account for the number of attackers in the revised DBT. This meant that the security officers left positions that covered a “breach” the attackers had made in the protected area perimeter. As a result of the inspection, NRC required the licensee to install additional security equipment immediately after the inspection, NRC inspectors remained on site until the equipment was put in place, and NRC decided to conduct another force-on-force inspection at the site.

At the follow-up force-on-force inspection at the same site, which we also observed, the licensee told us it had spent an additional $37 million to improve security in the 6 months following the first inspection. Some of these changes were clearly visible, such as elevating the bullet-resistant structures that had been on the ground to give officers greater visibility and firing opportunities, razing several buildings to reduce opportunities for attacker concealment, and increasing the distance between the vehicle barrier system and the protected area in a part of the site. The licensee also told us about other changes directly related to the internal aspect of the protective strategy, including positioning more security officers within the vital area, installing additional cameras to increase security officers’ ability to detect attackers, and creating new bullet-resistant structures that provided additional protected positions for firing upon the attackers.
From the second exercise, NRC officials concluded that they could evaluate the protective strategy and that the site had adequately defended against a DBT-style attack.

In addition to our observations of security during force-on-force inspections, GAO security experts who accompanied us to the four other sites we visited suggested a number of opportunities to improve security at the sites. While our experts did not find a lack of compliance with NRC regulations or an inability to defend the sites against the adversary characteristics in the revised DBT, the suggestions support our assessment that maintaining security at nuclear power plants is an ongoing process of identifying and implementing potential improvements. For example, at one site, we observed a bullet-resistant enclosure in which curtains—installed to reduce glare from the sun—obstructed the view through windows, and video equipment associated with surveillance cameras blocked access to several gun ports. We suggested that the site consider replacing the curtains with tinted glass and providing the security officer in the bullet-resistant enclosure with better access to the gun ports. At another site, we suggested that the addition of a bullet-resistant structure on one side of the site would provide the site’s security force with greater opportunity to interdict attackers entering on that side of the site.

NRC has made a number of improvements to the force-on-force inspection program, several of which address recommendations we made in our September 2003 report on NRC’s oversight of security at commercial nuclear power plants. We had made our recommendations when NRC was restructuring the force-on-force program to provide a more rigorous test of security at the sites in accordance with the DBT, which was also under revision. For example, we had recommended that NRC strengthen the force-on-force inspections by (1) conducting the inspections more frequently at each site, (2) using laser equipment to better simulate attackers’ and security officers’ weapons, and (3) requiring the inspections to make use of the full terrorist capabilities stated in the DBT, including the use of an adversary force trained in terrorist tactics.

NRC has taken a number of actions as part of its restructuring of the force-on-force program that satisfy the recommendations we made to strengthen the program. For example, NRC has begun conducting the exercises more frequently at each site and is using laser equipment to simulate weapons. Furthermore, the attackers in the force-on-force exercise scenarios we observed used many of the adversary characteristics of the revised DBT, including the number of attackers in the revised DBT, a vehicle bomb, a passive insider, and explosives. In addition, NRC officials told us that the adversaries were trained in military tactics. Nevertheless, in observing three force-on-force inspections and discussing the program with NRC officials, we noted the following issues that continue to warrant NRC’s attention:

Problems with laser equipment. At the three force-on-force inspections we observed, the sites used laser equipment to simulate firing live weapons. In general, the equipment appeared to help make the inspections a realistic test of security at the sites. For example, laser equipment provides a much more reliable account of shots fired than the equipment NRC and the sites had been using, which relied on the judgment of individual participants to determine shooting accuracy.
However, problems in using the equipment contributed to NRC’s limited ability to evaluate security at one of the sites. In part because of problems with the laser equipment, NRC decided to conduct a second force-on-force inspection at this site. The second inspection made better use of the laser equipment, which proved to be a valuable tool in determining that several security officers engaged attackers unsuccessfully, firing at them while they were still out of range. NRC raised this issue with the licensee in the context of improving training so that security officers would not waste ammunition on targets that are beyond the range of their weapons.

Inspection schedules. The way in which NRC schedules force-on-force exercises may create artificialities that enable sites to perform better than they otherwise would. NRC officials said they notify sites of the date of their force-on-force inspection only 8 to 12 weeks in advance. Nevertheless, NRC may be able to further reduce the artificiality of the inspection schedules and thereby enhance its ability to test security at the sites. For example, in each of the exercises we observed, NRC followed the same schedule for conducting nighttime and daytime attacks. Furthermore, the adversary force typically initiated the attack soon after the opening of the exercise “window” (the agreed-upon time for the exercise to begin). Consequently, the sites’ security forces might have been able to anticipate the approximate time that the attack would begin, and industry observers from other sites might have more information than necessary prior to inspections at their own sites about NRC’s standard practices for conducting the inspections. NRC officials told us that, while the attacks began soon after the opening of the exercise window in the exercises we observed, the attackers do sometimes wait longer in order to increase the level of uncertainty among the site’s security force and thereby create a more realistic scenario.

Testing of sites’ internal security strategies. Given the amount of resources invested in preparing for and implementing a force-on-force inspection, we believe inspections should test the full extent of sites’ “defense-in-depth” strategies, including both the external and internal elements of the strategies. However, the force-on-force exercises end when a site’s security force successfully stops an attack. Consequently, if the security force stops an attack before the attackers enter the vital area, NRC would not have an opportunity to observe how the security force would perform in the event that the attackers successfully defeat the site’s external security strategy. In a number of the force-on-force exercises we observed, the security force did, in fact, stop the attackers early in the scenario. According to NEI officials, force-on-force inspections would be more valuable if NRC allowed the adversaries to challenge each layer of defense until reaching their targets, or being defeated at the last possible point of defense. NRC officials also told us such an approach is worth considering but that NRC would have to first determine how to implement it.

Operational security. At two of the force-on-force inspections we observed, we noted areas in which “operational security”—the protection of information about the planned scenarios for the mock attacks—could be improved.
For example, during a safety “walk down”—a physical site check conducted prior to every exercise scenario to ensure the safety of exercise participants—a site employee made motions that may have alerted security officers to the targets the adversaries would be trying to reach that evening. In another inspection, security officers could observe adversaries getting into position inside the protected area prior to the start of an exercise, potentially providing clues about the route the adversaries would use to enter the site. We also observed that each force-on-force exercise was attended by a large number of people who, after signing a nondisclosure form, had access to scenario information, increasing the chance that details about an exercise scenario might be compromised. While we recognize that procedures such as safety walk downs and prepositioning of adversary teams are necessary to the proper conduct of the force-on-force inspections, lapses in operational security have the potential to give security officers knowledge that would allow them to perform better than they otherwise would and raise questions about whether the force-on-force inspections are a true test of the sites’ protective strategy. According to NRC officials, NRC inspectors have been instructed to be vigilant regarding any indications that a site’s security force may have received advance knowledge of an attack scenario, and procedures for safety walk downs have been revised to improve operational security.

Standards for controllers. NRC relies on the sites to assign and train controllers to observe each participant (both the adversaries and security officers) in the force-on-force inspections. In the three inspections we observed, the level of security expertise and training among the controllers varied among the sites. For example, one site assigned as controllers plant employees who did not have security-related backgrounds but who volunteered to help. In its force-on-force inspection report for this site, NRC concluded that the level of controller training was a factor in the force-on-force exercises not being brought to a definitive conclusion. (As discussed above, NRC decided to conduct another force-on-force inspection at this site.) In contrast, another plant used personnel with security backgrounds. NEI has prepared a set of guidelines for controllers in force-on-force inspections that NRC has reviewed. NEI has also created a controller-training workshop in which NEI shares lessons learned from force-on-force exercises.

Quality of feedback to licensee. The quality of feedback across the force-on-force inspections we observed was inconsistent. In particular, during the first inspection, NRC failed to discuss with the licensee several potential problems raised by the NRC team after each scenario. In the two subsequent inspections we observed, NRC appeared to have improved the quality of its feedback to the licensees. Specifically, the team leader provided the licensee with concise feedback that accurately reflected what the team members had expressed in closed NRC meetings. An NRC official told us that, based on comments from us as well as from NRC team members, NRC took measures to improve the quality of the feedback.

Force-on-force inspection schedule. So far, NRC is on schedule to conduct the first round of force-on-force inspections at all sites within 3 years.
As we reported in 2004, NRC is planning to conduct an inspection at each site every 3 years instead of every 8 years, as the agency had been doing. NRC initiated a new force-on-force program in November 2004, together with a 3-year schedule to complete inspections at all sites, after the revised DBT took effect on October 29, 2004. NRC officials told us they had completed inspections at 20 (or about 31 percent) of the 65 sites as of November 1, 2005. Furthermore, NRC officials told us that three teams are conducting the inspections and that NRC is hiring additional force-on-force personnel. Given the importance of the force-on-force inspections in demonstrating how well a nuclear power plant might defend against a real-life threat, we believe NRC should devote the necessary resources to ensure that it continues to meet the inspection schedule.

The nuclear power industry and NRC have taken very seriously the need to protect nuclear power plants against a potential terrorist attack and have made important investments to this end. However, NRC’s process for revising the DBT for nuclear power plants raises a fundamental question—the extent to which the DBT represents the terrorist threat as indicated by intelligence data versus the extent to which it represents the threat that NRC considers reasonable for the plants to defend against. Specifically, NRC’s process for deciding on the DBT raised the possibility that the industry may have inappropriately influenced the staff’s interpretation of intelligence data. The NRC threat assessment staff obtained the views of the nuclear industry on a draft of the revised DBT while they continued to assess intelligence information, and the staff made industry-recommended changes to the DBT even though the intelligence information had not changed. We recognize that NRC should and would want to obtain feedback from the industry and other stakeholders on the implications of the proposed changes before finalizing the DBT. In addition, NRC has stated that it has altered its process for obtaining industry feedback so that the threat assessment staff interacts with industry only after it has made its proposals for changes to the DBT. However, this approach does not entirely eliminate the appearance of industry influence. Threat assessment is a continuous process, and this sequential approach would still allow for interactions between the agency’s threat assessment staff and the nuclear industry. Assigning responsibility for obtaining feedback from the nuclear industry to an office within NRC other than the Threat Assessment Section would further reduce any appearance of industry influence on the process of assessing the terrorist threat to nuclear power plants. The commissioners would then be able to review the threat assessment staff’s recommended changes to the DBT with confidence that the recommendations are based strictly on an assessment of the threat. In making the final decision to revise the DBT, the commissioners would also consider industry feedback on the staff’s recommendations.

Furthermore, the commissioners did not have explicit criteria to use as the basis for removing certain weapons that the NRC staff had recommended for inclusion in the DBT. Consideration of what is reasonable for a private security force to defend against, as well as industry views on proposed changes to the DBT, is an appropriate function of the commissioners.
However, explicit criteria setting out the factors and how they would be weighed to determine what adversary characteristics are not reasonable for a private security force to defend against would have provided greater transparency for the commissioners’ decisions to exclude certain characteristics from the DBT. Such criteria would also potentially increase the rigor and consistency of the process. The underlying process used by NRC was logical and well defined and should enable NRC to produce a more credible DBT if these shortcomings are addressed.

In our visits to nuclear power plants, we saw a clear connection between the changes in the DBT and the plants’ recent security enhancements. The plants’ response to the revised DBT and other NRC orders following the September 11 terrorist attacks has been substantial and, in some cases, has gone beyond what was required. Nevertheless, because the plants essentially designed their security to defend against the DBT outlined by NRC, their capability to defend against an attack depends largely on how closely an actual attack resembles the DBT. Therefore, it is imperative that NRC and the plants continue to work with DHS and other federal, state, and local authorities to ensure they have coordinated their efforts to defend plants in the event of an attack, particularly one that exceeds the adversary characteristics in the revised DBT.

Furthermore, although security has improved, the results of NRC’s baseline and force-on-force inspections conducted thus far have uncovered some problems that needed to be addressed. Moreover, the effectiveness of any nuclear power plant’s security depends on the various parts and systems working well together during the stress of an actual attack. Therefore, NRC’s continued vigilance at the plant level, especially in conducting force-on-force inspections, is needed to ensure that plants are consistently well protected.

In conjunction with revising the DBT, NRC has implemented improvements to its force-on-force inspection program that put the agency in a better position to evaluate the nuclear power plants’ protective strategies. These improvements have addressed several of our previous recommendations regarding the force-on-force inspections. However, in observing three inspections, we noted additional opportunities for improvement, such as artificialities that could be further reduced to better test how plants would respond to an actual terrorist attack. Making further improvements to the force-on-force program would enhance NRC’s ability to assure the public and Congress that nuclear power plants are capable of defending against a DBT-style terrorist attack.

To improve the process by which NRC makes future revisions to the DBT for nuclear power plants, we recommend that the NRC commissioners take the following two actions:

Assign responsibility for obtaining feedback from the nuclear industry and other stakeholders on proposed changes to the DBT to an office within NRC other than the Threat Assessment Section, so that the threat assessment staff is able to assess the terrorist threat to nuclear power plants without creating the potential for or appearance of industry influencing their analysis. The commissioners, in turn, could consider both the staff’s analysis of the terrorist threat and industry feedback to make the final determination as to whether and how to revise the DBT.

Develop explicit criteria to guide the commissioners in their deliberations to approve changes to the DBT.
These criteria should include setting out the specific factors and how they will be weighed in deciding what characteristics of an attack on a nuclear power plant would constitute an enemy of the United States, or otherwise would not be reasonable for a private security force to defend against.

We further recommend that the NRC commissioners continue to evaluate and implement measures to further strengthen the force-on-force inspection program. For example, NRC may be able to identify and reduce artificialities associated with the inspections to better test how nuclear power plants would respond to an actual terrorist attack.

We provided a draft of this report to NRC for its review and comment. In its written comments (see app. III), NRC commended GAO’s effort to ensure that the report is accurate and constructive. It also provided additional clarifying comments on two areas of the report pertaining to the process NRC used in 2003 to revise the DBT for nuclear power plants.

First, NRC stated that the report should provide a better description of the context for the process by which the agency obtained industry input and the appearance of industry influence on the development of the revised DBT. NRC wrote that the agency made a deliberate decision to develop the revised DBT while simultaneously (rather than sequentially) seeking input from stakeholders, including the nuclear industry. NRC stated that this was a departure from its typical approach and was intended to advance public health and safety and the common defense and security, similar to other government actions taken after the September 11, 2001, terrorist attacks. In addition, NRC stated that it has returned to its normal sequential approach to developing DBT revisions and seeking input from stakeholders.

We are pleased that NRC recognizes the need to separate the process of analyzing intelligence information from seeking input from stakeholders, including the nuclear industry. In response to NRC’s earlier comments on the classified version of this report, which were essentially the same, we revised the reports to clarify that NRC deliberately decided to develop the revised DBT while simultaneously obtaining stakeholder input to speed up the process in the aftermath of the September 11, 2001, terrorist attacks. However, whether NRC chooses to use a simultaneous or sequential process, we continue to believe that the best approach would be to insulate the threat assessment staff from interactions with the nuclear industry by assigning responsibility for such interactions to a different office in NRC. This would best separate the fact-based analysis of the threat to commercial nuclear power plants from policy-level considerations regarding what is reasonable for a private security force to defend against. We also clarified our recommendation to indicate our view that the threat assessment staff should be insulated from interacting with the nuclear industry and other stakeholders.

Second, regarding the criteria the commission used to make decisions regarding the DBT, NRC wrote that a more comprehensive discussion in the report of the commission’s deliberative decision-making process would provide important perspective. NRC stated that the agency first established a DBT for nuclear power plants in the late 1970s and has a long history in this area. Furthermore, NRC wrote that the commission’s decision-making authority does not require, and could be unduly restricted by, detailed prescriptive criteria.
Finally, NRC stated its view that the basis for the commission’s policy decisions and direction to the NRC staff with regard to the DBT are sufficiently articulated in the commission’s voting record and related staff requirements memorandums.

We revised the reports to include NRC’s view that the basis for the commission’s policy decisions regarding the DBT is articulated in the commission’s voting record and related staff requirements memorandum. However, based on our review of the voting record and staff requirements memorandum, as well as other documents related to the April 2003 revised DBT, we remain concerned that the basis for the commissioners’ decisions to exclude certain characteristics from the DBT is not as transparent as it could be. We did not find that the commissioners agreed upon a definition of “enemy of the United States” or explicit criteria for what adversary characteristics would not be reasonable for a private security force to defend against. For example, the memorandum accompanying the commission’s April 2003 decision approving changes to the DBT for nuclear power plants did not provide the reason for the commission’s decision to remove two weapons the NRC threat assessment staff had recommended for inclusion. Rather, the voting record showed that individual commissioners used differing criteria and emphasized different factors, such as cost or practicality of defensive measures. The staff requirements memorandum set forth only the general criterion that a civilian security force cannot reasonably be expected to defend against all threats.

Furthermore, the intent of our recommendation that NRC develop criteria for what adversary characteristics constitute an enemy of the United States, or otherwise would not be reasonable for a private security force to defend against, is not to restrict the commission’s decision-making authority through detailed prescriptive criteria. Instead, the intent of our recommendation is to have general criteria or definitions to guide the commissioners’ decisions and to provide greater transparency for commission decisions, the details of which are safeguards information and are withheld from the public.

Finally, NRC commented that NRC and GAO staffs discussed potential issues related to the draft report that needed to be addressed. NRC also wrote that the draft report contained safeguards information, which should be removed prior to the report being made public. The potential issues have been resolved, and we have revised the report to remove safeguards information. The resulting report is substantially the same as the classified version of the report, with the exception that the classified version contains additional details about the DBT and security at nuclear power plants.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees, the Chairman of NRC, and other interested parties. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or wellsj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report.
GAO staff who made major contributions to this report are listed in appendix IV.

To examine the process the Nuclear Regulatory Commission (NRC) used to develop the April 2003 design basis threat (DBT) for radiological sabotage applied to nuclear power plants, we analyzed NRC’s documentation of the process and conducted interviews with NRC threat assessment staff and other officials. In particular, we compared the adversary characteristics of the April 2003 revised DBT approved by the commissioners with the adversary characteristics in the previous DBT, as described in a February 2000 NRC staff position paper; the January 2003 draft DBT provided to stakeholders for comment; and the NRC staff’s April 2003 recommended changes to the DBT submitted to the commissioners. Furthermore, for each component of NRC’s process, we analyzed documents and conducted a series of interviews:

To examine the role of intelligence analysis, we analyzed the NRC staff’s reports on the terrorist threat to nuclear power plants and the results of their analysis of intelligence information on terrorist activities worldwide. The three key reports we analyzed included an October 2002 report on the use of vehicle bombs; a November 2002 report on the potential use of other adversary characteristics against nuclear power plants; and the April 2003 report that included the staff recommendations on the DBT. To obtain further insight into the NRC’s use of intelligence information, we interviewed NRC officials, including the head of NRC’s Threat Assessment Section; reviewed a description of the adversary characteristics screening process; and received briefings on the process from NRC. We also interviewed officials from other federal agencies, including the Department of Homeland Security (DHS) and the Federal Bureau of Investigation (FBI). NRC redacted text from a number of the documents provided to us if the text contained classified information from other federal agencies, including the Department of Energy (DOE). As agreed with NRC, we identified the selected portions of the redacted text that we wanted to review, and NRC requested permission from the other agencies to provide the text to us. All of the agencies NRC contacted except one granted permission to release the redacted text to us. We compared NRC’s April 2003 revised DBT with DOE’s October 2004 DBT and February 2004 Terrorist Adversary Capabilities List and interviewed DOE Office of Security officials regarding the DOE DBT and differences with the NRC DBT. We also reviewed the September 2004 final report of the DOE DBT re-examination task force. We did not compare the implementation of security measures at DOE sites to defend against the DOE DBT with security at commercial nuclear power plants.

To examine NRC’s consultation with the nuclear industry, we reviewed the written comments submitted by the Nuclear Energy Institute (NEI) on the January 2003 draft DBT and compared NEI’s comments with the changes the NRC staff made to the draft DBT. We also interviewed NEI officials and senior officials at the nuclear power plant sites we visited, including some who served on the NEI working group responsible for security matters.

To examine the decisions by the NRC commission, we analyzed the commission voting record (including written comments of individual commissioners), the April 2003 memorandum summarizing the commission’s final decisions, and the NRC regulation on enemy of the United States (10 C.F.R. § 50.13).
Furthermore, we interviewed three of the four commissioners who were serving on the commission at the time the DBT was revised and who participated in the decision-making process. We interviewed the three commissioners as a group in a meeting that was not subject to the requirements of the Government in the Sunshine Act. This meant that the commissioners could discuss previous actions, including their April 2003 decisions on changes to the DBT, but not the formulation of future policy. For example, we did not ask the commissioners about the potential for future changes to the DBT. In addition to this meeting, we met individually with the two commissioners who assumed their posts in 2005 and did not participate in the decision-making process for the April 2003 revised DBT.

To determine what actions nuclear power plants have taken to enhance security in response to the revised DBT, we interviewed staff from NRC’s Office of Nuclear Security and Incident Response, reviewed security orders NRC has issued since September 11, 2001, and visited a nonprobability sample of four nuclear power plant sites. We do not name the sites we visited in this report because information about security at particular sites is sensitive and considered safeguards information, and because the objective of our visits was to provide a general description of the changes in security that sites implemented in response to the revised DBT, rather than the changes at a particular site. Prior to our site visits, we observed a baseline inspection at one site and a multiexercise force-on-force inspection at another site in order to better familiarize ourselves with NRC security requirements as well as sites’ security equipment and strategies. We selected these two sites based on the timing of the activities.

To select the nonprobability sample of four sites we visited, we first eliminated certain sites, such as those we had recently visited for security-related work (including the two sites where we observed NRC inspections) and sites frequently visited by Congress. We then selected one site from each of the four NRC regions using the following criteria:

sites representing different sizes and types of licensees, including licensees that own or operate a single nuclear power plant site, licensees that own or operate two to six sites, and licensees that own or operate seven or more sites;

sites with different surroundings, such as different topography and proximity to water, in order to consider the effect of such factors on sites’ security strategies;

sites with security forces hired both directly as site employees and through a contractor, including one site that uses security officers employed by Wackenhut Corporation, which provides security services to about half of the nuclear power plant sites;

sites with the two different categories of reactors licensed by NRC for operation in the United States—two sites with boiling-water reactors and two sites with pressurized-water reactors; and

sites with different numbers of reactors.

At each of the four sites, we used a semistructured guide to interview security managers and other site officials, and interviewed a random selection of security officers. We worked with site management so that our interviews with the security officers did not interfere with their duties. We conducted individual interviews with security officers in private rooms, without the attendance of plant management or other plant staff.
We also examined security equipment and reviewed documents, including security plans, protective strategy documents, safeguards event logs, security officer work-hour records, training materials, and equipment testing records. GAO staff with a professional background in security accompanied us on our visits in order to provide the expertise needed to fully comprehend the sites’ security equipment and strategies.

In addition to site visits, we reviewed 9 of the 16 force-on-force inspection reports and a sample of 18 baseline inspection reports that NRC had completed between November 2004 and the time we reviewed the reports. The 18 baseline inspection reports we reviewed consisted of reports provided by NRC from each of the four regions, plus additional reports we randomly selected ourselves. Time constraints prevented us from reviewing additional reports. We also discussed the revised DBT and security improvements at nuclear power plant sites with the Nuclear Energy Institute and the Project on Government Oversight, an independent nonprofit organization.

To review NRC’s progress in strengthening the conduct of force-on-force inspections, we observed a total of three inspections at two sites. Two of the inspections were at a site where NRC decided to conduct a second inspection as a result of the agency’s limited ability to evaluate security during the first inspection. After the first inspection at this site, but before the second, we also attended a meeting at the site in which the licensee briefed NRC on security improvements the site had made in response to the first inspection, and we observed these improvements. GAO staff with a professional background in security accompanied us to the third inspection. In addition, as discussed above, we reviewed NRC reports on 9 of the 16 force-on-force inspections NRC had completed at the time we reviewed the reports. Finally, we interviewed NRC officials responsible for implementing the force-on-force inspection program.

We conducted our work from November 2004 through January 2006 in accordance with generally accepted government auditing standards.

NRC identified no findings in 11 of the 27 baseline and force-on-force inspection reports we reviewed but described a variety of problems with the sites’ security in the remaining 16. The reports we reviewed included one on a force-on-force inspection we observed, in which NRC required the licensee to implement measures to address weaknesses in the site’s protective strategy and decided to return for a second force-on-force inspection. The following are additional examples of NRC findings from the 16 reports, including corrective actions taken by the licensees:

In a baseline inspection at a site, several alarms failed to activate during a test of the intrusion detection system, which alerts security officers to the occurrence and location of a breach. Further testing identified multiple alarms that were not functioning properly, and the site subsequently declared the entire intrusion detection system inoperable. Prior to leaving the site, NRC inspectors confirmed that the site implemented compensatory measures to address problems with the intrusion detection system, and NRC determined that further inspection of the site at a later date was warranted. According to NRC, the subsequent inspection at the site confirmed that the problem had been corrected.
During a force-on-force exercise at another site, NRC observed two officers performing duties other than their assigned patrols of the owner-controlled area. The patrols are a component of NRC’s requirement for continuous surveillance of the owner-controlled area. Further inspection revealed that the security officers manning the site’s central and secondary alarm stations were unaware that the owner-controlled area was not being continuously patrolled. In the event of an attack, owner-controlled area observations can be crucial both for setting a response in motion by detecting intruders as early as possible and for providing information about where attackers have entered the site and where they are going so that security officers know how to respond. According to NRC, the licensee took immediate corrective action. Also during this inspection, NRC observed that the licensee deployed too many officers in the force-on-force scenarios as a result of a misunderstanding. In particular, the licensee had temporarily increased the number of dedicated responders above the minimum listed in the security plan to respond to the increased national threat level. However, according to NRC, the additional officers did not play a role in stopping the attackers in the scenarios.

In a baseline inspection, NRC observed three examples of failure to perform proper searches of personnel entering the protected area. For example, a security officer did not examine items that had alarmed a metal detector and allowed an individual to collect and carry the items into the protected area without further examination. Based on discussions with security officers and supervisors, NRC found that this deficiency was routine and commonly accepted at the site. NRC concluded that this situation had the potential to reduce the overall effectiveness of the protective strategy by allowing the uncontrolled introduction of weapons or explosives into the protected area. According to NRC, the licensee took immediate corrective action, and security staff were required to attend remedial training on search techniques and policy.

In a force-on-force exercise, the attackers were able to destroy three out of four targeted components. NRC observed that the attackers faced an insufficient level of delay, which allowed them to reach the three components before being interdicted by security officers. According to the inspection report, sufficient delay is an essential component of a protective strategy to prevent radiological sabotage. As a result of the inspection, the licensee agreed to add delay locks to doors and relocate security officers to ensure they could interdict attackers.

NRC found that a number of sites ran weapons-training qualification courses in which security officers were not trained in the way they would be expected to perform during an attack. For example, sites did not train security officers to use backup weapons for situations in which they could not use their primary weapons, or train them under the level of physical stress an officer would experience during an attack. At one of the sites, NRC also found that the site had lowered the minimum qualification score related to training security officers to use their weapons, potentially resulting in security officers being less qualified in the use of their weapons than NRC believes is necessary. In addition, the licensee did not seek NRC approval for the change as mandated by NRC’s regulations.
However, NRC found that all of the security officers who had received the training before the issue was observed and corrected had qualified on the use of their weapons at the higher score. Furthermore, according to NRC, the agency issued amplified guidance to all nuclear power plant sites regarding weapons-training qualification courses.

During the force-on-force inspection we observed, NRC inspectors found that a site had not included the control room, spent fuel pool, and the alternative shutdown panel among its targets. NRC required the licensee to redevelop its target components for use in the force-on-force scenarios. The adequate identification of target components is vital to a site’s ability to position security officers or direct them to locations where they can interpose themselves between the attacker and target components.

In an inspection initiated after the licensee observed security officers who were inattentive at their posts, NRC inspectors found the licensee had recorded 19 instances in which security officers worked more hours in a specific time period than allowed by NRC regulations. NRC concluded that failure to meet the work-hour limits increased the susceptibility of security officers to fatigue and had the potential to reduce the effectiveness of the site’s protective strategy. According to the inspection report, the licensee identified several causes that contributed to the problem and took immediate corrective actions. According to NRC, the agency verified that the site updated its procedures to conform to NRC’s work-hour regulations. (At the four sites we visited, we reviewed work-hour logs and found that each site had generally stayed within security officer work-hour limits.)

In a baseline inspection, the licensee was unable to provide engineering documents demonstrating the acceptable minimum safe standoff distance from the inner vehicle barrier system, which is designed to protect the site from a vehicle bomb. NRC requested that the licensee measure the distance between several structures and the closest part of the vehicle barrier system. The measurements showed that the barrier was too close to at least two structures. As immediate corrective and compensatory actions, the licensee installed additional vehicle barriers in the area of concern and implemented direct observation by a security officer.

In addition to the individuals named above, Raymond H. Smith, Jr. (Assistant Director), Joseph H. Cook, and Michelle K. Treistman made key contributions to this report. Also contributing to this report were John Cooney, Doreen Feldman, Andrew O’Connell, Judy K. Pagano, Keith A. Rhodes, Carol Herrnstadt Shulman, and Barbara Timmerman.
The nation's commercial nuclear power plants are potential targets for terrorists seeking to cause the release of radioactive material. The Nuclear Regulatory Commission (NRC), an independent agency headed by five commissioners, is responsible for regulating and overseeing security at the plants. In April 2003, in response to the terrorist attacks of September 11, 2001, NRC revised the design basis threat (DBT), which describes the threat that plants must be prepared to defend against in terms of the number of attackers and their training, weapons, and tactics. NRC has also restructured its program for testing security at the plants through force-on-force inspections, which consist of mock terrorist attacks. GAO was asked to review (1) the process NRC used to revise the DBT for nuclear power plants, (2) the actions nuclear power plants have taken to enhance security in response to the revised DBT, and (3) NRC's progress in strengthening the conduct of force-on-force inspections at the plants. NRC revised the DBT for nuclear power plants using a generally logical and well-defined process in which trained threat assessment staff made recommendations for changes based on an analysis of demonstrated terrorist capabilities. The process resulted in a DBT requiring plants to defend against a larger terrorist threat, including a larger number of attackers, a refined and expanded list of weapons, and an increase in the maximum size of a vehicle bomb. Key elements of the revised DBT, such as the number of attackers, generally correspond to the NRC threat assessment staff's original recommendations, but other important elements do not. For example, the NRC staff made changes to some recommendations after obtaining feedback from stakeholders, including the nuclear industry, which objected to certain proposed changes such as the inclusion of certain weapons. NRC officials said the changes resulted from further analysis of intelligence information. Nevertheless, GAO found that the process used to obtain stakeholder feedback created the appearance that changes were made based on what the industry considered reasonable and feasible to defend against rather than on an assessment of the terrorist threat itself. Nuclear power plants made substantial security improvements in response to the September 11, 2001, attacks and the revised DBT, including security barriers and detection equipment, new protective strategies, and additional security officers. It is too early, however, to conclude that all sites are capable of defending against the DBT because, as of November 1, 2005, NRC had conducted force-on-force inspections at about one-third of the plants. NRC has improved its force-on-force inspections--for example, by conducting inspections more frequently at each site. Nevertheless, in observing three inspections and discussing the program with NRC, GAO noted potential issues in the inspections that warrant NRC's continued attention. For example, a lapse in the protection of information about the planned scenario for a mock attack GAO observed may have given the plant's security officers knowledge that allowed them to perform better than they otherwise would have. A classified version of this report provides additional details about the DBT and security at nuclear power plants.
In 1986, the United States, the FSM, and the RMI entered into the Compact of Free Association. This Compact represented a new phase of the unique and special relationship that has existed between the United States and these island areas since World War II. It also represented a continuation of U.S. rights and obligations first embodied in a U.N. trusteeship agreement that made the United States the Administering Authority of the Trust Territory of the Pacific Islands. The Compact provided a framework for the United States to work toward achieving its three main goals—(1) to secure self-government for the FSM and the RMI, (2) to assure certain national security rights for all the parties, and (3) to assist the FSM and the RMI in their efforts to advance economic development and self-sufficiency. The first two goals have been met through the Compact and its related agreements. The third goal, advancing economic development and self-sufficiency, was to be accomplished primarily through U.S. direct financial payments (to be disbursed and monitored by the U.S. Department of the Interior) to the FSM and the RMI. However, economic self-sufficiency has not been achieved. Although total U.S. assistance (Compact direct funding as well as U.S. programs and services) as a percentage of total government revenue has fallen in both countries (particularly in the FSM), the two nations remain highly dependent on U.S. assistance. In 1998, U.S. funding accounted for 54 percent and 68 percent of FSM and RMI total government revenues, respectively, according to our analysis. This assistance has maintained standards of living that are artificially higher than could be achieved in the absence of U.S. support.

Another aspect of the special relationship between the FSM and the RMI and the United States involves the unique immigration rights that the Compact grants. Through the Compact, citizens of both nations are allowed to live and work in the United States as “nonimmigrants” and can stay for long periods of time, with few restrictions. Further, the Compact exempts FSM and RMI migrating citizens from meeting U.S. passport, visa, and labor certification requirements. Unlike the economic assistance provisions, the Compact’s migration provisions are not scheduled to expire in 2003. In recognition of the potential adverse impacts that Hawaii and nearby U.S. commonwealths and territories could face as a result of an influx in migrants, the Congress authorized Compact impact payments to address the financial impact of migrants on Guam, Hawaii, and the CNMI.

Finally, the Compact served as the vehicle to reach a full settlement of all compensation claims related to U.S. nuclear tests conducted on Marshallese atolls between 1946 and 1958. In a Compact-related agreement, the U.S. government agreed to provide $150 million to create a trust fund. While the Compact and its related agreements represented the full settlement of all nuclear claims, the Compact provided the RMI the right to submit a petition of “changed circumstance” to the U.S. Congress requesting additional compensation. The RMI government submitted such a petition in September 2000.

Under the most recent (May 2002) U.S. proposals to the FSM and the RMI, new congressional authorizations of approximately $3.4 billion would be required for U.S. assistance over a period of 20 years (fiscal years 2004 through 2023). The share of new authorizations to the FSM would be about $2.3 billion, while the RMI would receive about $1.1 billion (see table 1).
This new assistance would be provided to each country in the form of annual grant funds, extended federal services (that have been provided under the original Compact but are due to expire in 2003), and contributions to a trust fund for each country. (Trust fund earnings would become available to the FSM and the RMI in fiscal year 2024 to replace expiring annual grants.) For the RMI, the U.S. proposal also includes funding to extend U.S. access to Kwajalein Atoll for U.S. military use from 2017 through 2023. In addition to new authorized funding, the U.S. government will provide (1) continuing program assistance amounting to an estimated $1.1 billion to the two countries over 20 years and (2) payments previously authorized of about $189 million for U.S. access to Kwajalein Atoll in the RMI through 2016. If new and previous authorizations are combined, the total U.S. cost for all Compact-related assistance under the current U.S. proposals would amount to about $4.7 billion over 20 years, not including costs for administration and oversight that are currently unknown.

Under the U.S. proposals, annual grant amounts to each country would be reduced over time, while annual U.S. contributions to the trust funds would increase by the grant reduction amount. Annual grant assistance to the FSM would fall from a real value of $76 million in fiscal year 2004 to a real value of $53.2 million in fiscal year 2023. Annual grant assistance to the RMI would fall from a real value of $33.9 million to a real value of $17.3 million over the same period. This decrease in grant funding, combined with FSM and RMI population growth, would also result in falling per capita grant assistance over the funding period, particularly for the RMI (see fig. 1). The real value of grants per capita to the FSM would decrease from an estimated $684 in fiscal year 2004 to an estimated $396 in fiscal year 2023. The real value of grants per capita to the RMI would fall from an estimated $623 in fiscal year 2004 to an estimated $242 in fiscal year 2023. In addition to grants, however, both countries would receive federal programs and services, and the RMI would receive funding related to U.S. access to Kwajalein Atoll.

The U.S. proposals are designed to build trust funds that earn a rate of return such that trust fund yields can replace grant funding in fiscal year 2024, once annual grant assistance expires. The current U.S. proposals do not address whether trust fund earnings should be sufficient to cover expiring federal services or create a surplus to act as a buffer against years with low or negative trust fund returns. At a 6 percent rate of return (the Department of State’s assumed rate), the U.S. proposal to the RMI would meet its goal of creating a trust fund that yields earnings sufficient to replace expiring annual grants, while the U.S. proposal to the FSM would not cover expiring annual grant funding, according to our analysis. Moreover, at 6 percent, the U.S. proposal to the RMI would cover the estimated value of expiring federal services, while the U.S. proposal to the FSM clearly would not. At a 6 percent return, neither proposed trust fund would generate buffer funds. If an 8.2 percent average rate of return were realized, then the RMI trust fund would yield earnings sufficient to create a buffer, while the FSM trust fund would yield earnings sufficient to replace grants and expiring federal services.
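To make the arithmetic behind these trust fund projections concrete, the following minimal sketch models the mechanism described above: annual trust fund contributions that grow as grant levels decline, compounding at an assumed rate of return, with the resulting fiscal year 2024 yield compared against the final grant amount. It is an illustration only, not the negotiators' model. The starting contribution (BASE_CONTRIBUTION) is a hypothetical placeholder, and the straight-line decline in grants is an assumption, since this statement reports only the beginning and ending FSM grant values.

```python
# Minimal sketch of the trust fund mechanism described above; not an
# official model. Assumptions: FSM grants decline linearly in real terms
# from $76 million (FY2004) to $53.2 million (FY2023), as reported; each
# year's trust fund contribution rises by the cumulative grant reduction;
# BASE_CONTRIBUTION is a hypothetical placeholder, since the actual
# contribution schedule is not given in this statement.

GRANT_START = 76.0        # FSM annual grant, FY2004 (millions, real dollars)
GRANT_END = 53.2          # FSM annual grant, FY2023 (millions, real dollars)
YEARS = 20                # FY2004 through FY2023
BASE_CONTRIBUTION = 10.0  # hypothetical first-year contribution (millions)
RATE = 0.06               # Department of State's assumed rate of return

def trust_fund_balance(rate: float) -> float:
    """Accumulate 20 annual contributions, compounding the balance at `rate`."""
    balance = 0.0
    step = (GRANT_START - GRANT_END) / (YEARS - 1)  # assumed linear decline
    for year in range(YEARS):
        grant = GRANT_START - step * year
        contribution = BASE_CONTRIBUTION + (GRANT_START - grant)
        balance = balance * (1 + rate) + contribution
    return balance

balance = trust_fund_balance(RATE)
annual_yield = balance * RATE
# The proposals' test: can the FY2024 yield replace the final, expiring grant?
print(f"Balance after {YEARS} years: ${balance:,.0f} million")
print(f"Yield at {RATE:.0%}: ${annual_yield:,.1f} million vs. final grant ${GRANT_END} million")
```

With these placeholder values, the yield at 6 percent falls well short of the $53.2 million final grant, echoing the finding above for the FSM proposal; whether a real fund would close that gap depends on the actual contribution schedule and the realized rate of return.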
I now turn my attention to provisions in the current U.S. proposals designed to provide improved accountability over, and effectiveness of, U.S. assistance. This is an area where we have offered several recommendations in the past 2 years. As I discuss key proposed accountability measures, I will note, where relevant, whether our past recommendations have been addressed. In sum, many of our recommendations regarding future Compact assistance have been addressed with the introduction of strengthened accountability measures in the current U.S. proposals. However, specific details regarding how some key accountability provisions would be carried out will be contained in separate agreements that remain in draft form or have not yet been released.

The following summary describes key accountability measures included in the U.S. proposals that address past GAO recommendations:

The proposals require that grants be targeted to priority areas such as health, education, and infrastructure. Further, grant conditions normally applicable to U.S. state and local governments would apply to each grant. Such conditions could address areas such as procurement and financial management standards. The U.S. proposals also state that the United States may withhold funds for violation of grant terms and conditions. We recommended in a 2000 report that the U.S. government negotiate provisions that would provide future Compact funding through specific grants with grant requirements attached and allow funds to be withheld for noncompliance with spending and oversight requirements. However, identification of specific grant terms and conditions, as well as procedures for implementing and monitoring grants and grant requirements and for withholding funds, will be addressed in a separate agreement that has not yet been released.

The U.S. proposals to the FSM and the RMI list numerous items for discussion at the annual consultations between the United States and the two countries. Specifically, the proposals require that consultations address single audits and annual reports; evaluate progress made for each grant; discuss the coming fiscal year’s grant; discuss any management problems associated with each grant; and discuss ways to respond to problems and otherwise increase the effectiveness of future U.S. assistance. In the previously cited report, we recommended that the U.S. government negotiate an expanded agenda for future annual consultations. Further, the proposals give the United States control over the annual review process: The United States would appoint three members to the economic review board, including the chairman, while the FSM or the RMI would appoint two members.

Recommendations from our 2000 report are also being addressed regarding other issues. The U.S. proposals require U.S. approval before either country can pledge or issue future Compact funds as a source for repaying debt. The proposals also exclude a “full faith and credit” pledge that made it impracticable to withhold funds under the original Compact. In addition, the U.S. proposals provide specific uses for infrastructure projects and require that some funds be used for capital project maintenance.

We also recommended that Interior ensure that appropriate resources are dedicated to monitoring future assistance. While the U.S. proposals to the two countries do not address this issue, an official from the Department of the Interior’s Office of Insular Affairs has informed us that his office has tentative plans to post five staff in a new Honolulu office.
Further, Interior plans to bring two new staff on board in Washington, D.C., to handle Compact issues, and to post one person to work in the RMI (one staff member is already resident in the FSM). A Department of State official stated that the department intends to increase its Washington, D.C., staff and overseas contractor staff but does not have specific plans at this point. Trust fund management is an area where we have made no recommendations, but we have reported that well-designed trust funds can provide a sustainable source of assistance and reduce long-term aid dependence. The U.S. proposals would grant the U.S. government control over trust fund management: the United States would appoint three trustees, including the chairman, to a board of trustees, while the FSM or the RMI would appoint two trustees. The U.S. Compact Negotiator has stated that U.S. control would continue even after grants have expired and trust fund earnings become available to the two countries; in his view, "the only thing that changes in 20 years is the bank," and U.S. control should continue. He has also noted that it may be possible for the FSM and the RMI to assume control over trust fund management at some as yet undetermined point in the future. Finally, while the departments of State and the Interior have addressed many of our recommendations, they have not implemented our accountability and effectiveness recommendations in some areas. For example, our recommendation that annual consultations include a discussion of the role of U.S. program assistance in economic development is not included in the U.S. proposals. Further, the departments of State and the Interior, in consultation with the relevant government agencies, have not reported on what program assistance should be continued and how the effectiveness and accountability of such assistance could be improved. In addition, the U.S. proposals for future assistance do not address our recommendation that consideration be given to targeting future health and education funds in ways that effectively address specific adverse migration impact problems, such as communicable diseases, identified by Guam, Hawaii, and the CNMI. I would also like to take just a moment to cite proposed U.S. changes to the Compact's immigration provisions. These provisions are not expiring but have been targeted by the Department of State as requiring changes. I believe it is worth noting these proposed changes because, to the extent that they could decrease migration rates (a shift whose likelihood is unclear at this point), our current per capita grant assistance figures are overstated. This is because our calculations assume migration rates similar to historical rates and so use lower population estimates than would be the case if migration slowed.
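As a simple illustration of the trust fund arithmetic discussed above, the following sketch (in Python, with values drawn from this statement) computes the principal each fund would need for its annual earnings to replace the expiring grants. This is an illustration only: it treats each trust fund as a simple perpetuity and ignores buffer funds, expiring federal services, and the actual contribution schedules in the U.S. proposals.

# Illustrative sketch of the trust fund arithmetic discussed above.
# Grant values are the real fiscal year 2023 levels cited in this
# statement; the perpetuity model is a simplifying assumption.

def principal_needed(annual_grant_millions, rate_of_return):
    """Principal whose annual earnings at the given rate of return
    would replace an expiring annual grant (a simple perpetuity)."""
    return annual_grant_millions / rate_of_return

for country, expiring_grant in (("FSM", 53.2), ("RMI", 17.3)):
    for rate in (0.06, 0.082):  # State's assumed rate and the higher case
        need = principal_needed(expiring_grant, rate)
        print(f"{country}: replacing ${expiring_grant} million per year at {rate:.1%} "
              f"requires a fund of roughly ${need:,.0f} million")

Covering expiring federal services or building a buffer against low-return years would raise the required principal further, which is why the adequacy of each proposed fund is so sensitive to the assumed rate of return.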
The United States entered into the Compact of Free Association with the Federated States of Micronesia (FSM) and the Republic of the Marshall Islands (RMI) in 1986. The Compact has provided U.S. assistance to the FSM and the RMI in the form of direct funding as well as federal services and programs. The Compact allows for migration from both countries to the United States and established U.S. defense rights and obligations in the region. Provisions of the Compact that deal with economic assistance were scheduled to expire in 2001; however, they will remain in effect for up to 2 additional years while the affected provisions are renegotiated. Current U.S. proposals to the FSM and the RMI to renew expiring assistance would require Congress to approve $3.4 billion in new authorizations. The proposals would provide decreasing levels of annual grant assistance over a 20-year term. Simultaneously, the proposals would require building up a trust fund for each country with earnings that would replace grants once those grants expire. The U.S. proposals include strengthened accountability measures, though details of some key measures remain unknown. The proposals have addressed many, but not all, recommendations that GAO made in past reports regarding assistance accountability.
Medicaid is an open-ended entitlement; states are generally obligated to pay for covered services provided to eligible individuals, and the federal government is obligated to pay its share of a state's expenditures under a federally approved state Medicaid plan. The federal share of each state's Medicaid expenditures is based on a statutory formula known as the Federal Medical Assistance Percentage (FMAP). Some states design their Medicaid programs to have local governments contribute to the programs' costs, for example, through intergovernmental transfers of funds from government-owned or -operated providers to the state Medicaid program. States may, subject to certain requirements, also receive funds to finance Medicaid payments from health care providers, for example, through provider taxes—taxes levied on providers such as hospitals or nursing facilities. Under federal law, provider taxes must be broad-based, must be uniformly imposed, and must not hold providers harmless; that is, they must not provide a direct or indirect guarantee that providers will receive all or a portion of tax payments back. Taxes that are at or below 6 percent of the individual provider's net patient service revenues are considered not to have provided an indirect guarantee that providers will receive their tax payments back. In addition to flexibility in determining sources of funds they use to finance their nonfederal share, states have flexibility, within broad federal requirements, in designing and operating their Medicaid programs, including determining which services to cover and setting payment rates for providers. In general, federal law provides for federal matching funds for state Medicaid payments for covered services provided to eligible beneficiaries up to a ceiling or limit, often called the upper payment limit (UPL). The UPL is based on what Medicare would pay for the same services. States often make two general types of Medicaid supplemental payments: First, under federal Medicaid law, states are required to make disproportionate share hospital (DSH) payments to certain hospitals. These payments are designed to help offset these hospitals' uncompensated care costs for serving Medicaid and uninsured low-income patients. States' Medicaid payment rates are not required to cover the full costs of providing care to Medicaid beneficiaries, and many providers also provide care to low-income patients without any insurance or ability to pay. Under federal law, DSH payments are capped at a facility-specific level and state level. Second, many states also make another type of Medicaid supplemental payment, referred to here as non-DSH supplemental payments, to hospitals and other providers that, for example, serve high-cost Medicaid beneficiaries. Unlike DSH payments, non-DSH supplemental payments are not required under federal law, do not have a specified statutory or regulatory purpose, and are not subject to firm dollar limits at the facility or state level. Unlike regular Medicaid payments, which are paid on the basis of covered Medicaid services provided to Medicaid beneficiaries through an automated claims process, non-DSH supplemental payments are not necessarily made on the basis of claims for specific services to particular patients and can amount to tens or hundreds of millions of dollars to a single provider annually. States can generally make non-DSH payments up to the UPL.
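As an aside on the 6 percent threshold just described: the test reduces to a simple ratio, as the following minimal sketch shows. The function name and example figures are hypothetical, and the actual regulatory test involves conditions not modeled here.

def within_safe_harbor(tax_collected, net_patient_revenue, threshold=0.06):
    """Treats a provider tax at or below 6 percent of a provider's net
    patient service revenues as not providing an indirect guarantee
    that the provider will receive its tax payments back."""
    return tax_collected / net_patient_revenue <= threshold

# Example: a $5.5 million tax on a facility with $100 million in net
# patient service revenues falls under the 6 percent threshold.
print(within_safe_harbor(5.5, 100.0))  # True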
Typically, state Medicaid payment rates are lower than what the Medicare program would pay, and so many states make supplemental payments under the UPL. Non-DSH supplemental payments, like regular Medicaid payments, must be consistent with Medicaid payment principles. Under federal law, to receive federal matching funds, payments generally must (1) be made for covered Medicaid items and services, (2) be consistent with economy, efficiency, and quality of care, and (3) not exceed the UPL. Supplemental payments may also be made under Medicaid demonstrations, but may not be subject to these requirements, depending on the terms of the demonstration. Historically, DSH payments exceeded Medicaid non-DSH payments. In recent years, the opposite has occurred, and non-DSH payments have exceeded DSH payments. In fiscal year 2011, Medicaid non-DSH payments totaled nearly $26 billion compared to over $17 billion for DSH payments. For about two decades, we have raised concerns about supplemental payments and the adequacy of federal oversight. We have designated Medicaid a high-risk program due in part to these concerns. For example, in a February 2004 report, we found that over the years some states had made relatively large non-DSH supplemental payments to relatively small numbers of government-owned providers, and that these providers were then sometimes required to return these payments to the states, resulting in an inappropriate increase in federal matching funds. We also found that some states had used widely varying and inaccurate methods for estimating their non-DSH payment amounts, which may inflate the amount of non-DSH supplemental payments. CMS is responsible for ensuring that state Medicaid payments are consistent with federal requirements, including that payments are consistent with economy and efficiency and are for Medicaid-covered services. To do so, it is important for CMS to have relevant, reliable, and timely information for management decision making and external reporting purposes. In recent years, our work examining these payments has identified several instances that raise further concerns about whether Medicaid payments that greatly exceed costs are economical and efficient. For example, as reported in November 2012, we found that 39 states had made non-DSH supplemental payments to 505 hospitals that, along with their regular Medicaid payments, exceeded those hospitals' total costs of providing Medicaid care by $2.7 billion. In some cases, payments greatly exceeded costs; for example, one hospital received almost $320 million in non-DSH payments and $331 million in regular Medicaid payments, which exceeded the $410 million in costs reported for the hospital for providing Medicaid services by about $241 million. As we reported in April 2015, our more recent analysis of average daily payment amounts—which reflect both regular payments and non-DSH supplemental payments—identified hospitals for which Medicaid payments received exceeded their Medicaid costs, and we also found a few cases where states made payments to local government hospitals that exceeded the hospitals' total operating costs. CMS's oversight mechanisms had not detected the large overpayments to two hospitals in one state that resulted from non-DSH supplemental payments until we identified them. CMS began reviewing the appropriateness of the two hospitals' payments during the course of our review.
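The hospital example above reduces to simple arithmetic; the following sketch restates it using the reported figures, rounded to millions of dollars.

# Figures (in $ millions) reported for one hospital in our 2012 work.
non_dsh_payments = 320  # non-DSH supplemental payments
regular_payments = 331  # regular Medicaid payments
medicaid_costs = 410    # reported costs of providing Medicaid services

excess = non_dsh_payments + regular_payments - medicaid_costs
print(f"Payments exceeded reported Medicaid costs by about ${excess} million")  # about $241 million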
As we concluded in our 2012 and 2015 reports, although Medicaid payments are not required to be limited to a provider's costs of delivering Medicaid services, payments that greatly exceed these costs raise questions, including whether they are consistent with economy and efficiency, whether they contribute to beneficiaries' access to quality care, and the extent to which they are ultimately used for Medicaid purposes. However, CMS lacks data at the federal level on non-DSH supplemental payments, and the payments are not subject to audit. Based on our findings, we have identified opportunities to improve the oversight, transparency, and accountability of non-DSH supplemental payments to providers, in particular through improved reporting, auditing, and guidance. Since 2010, states have been required by federal law to submit annual facility-specific reports and annual independent certified audits on DSH payments. In connection with the independent audit requirement, standard methods were established for calculating DSH payment amounts. However, similar requirements for reporting, annual independent audits, and guidance on acceptable calculation methods are not in place for non-DSH payments. In November 2012, we reported that the newly implemented annual reporting and audits for DSH payments improved CMS oversight—and we concluded that better reporting and audits of non-DSH supplemental payments could improve CMS's oversight of these payments as well. As our work has shown, states' non-DSH supplemental payments can be complex and challenging to assess. Hospital-specific information can be helpful to CMS and others for understanding, at the provider level, the relationship of supplemental payments to both regular Medicaid payments and Medicaid costs. For example, reporting of non-DSH payments that states make to individual hospitals and other providers relative to the providers' Medicaid costs could improve the transparency of these payments. In addition, audits could improve accountability by providing information on how these payments are calculated and the extent to which payments to individual providers are consistent with the Medicaid payment principles of economy and efficiency. Absent complete and reliable provider-specific data on the non-DSH supplemental payments individual providers receive, CMS may not identify potentially excessive payments to providers, and the federal government could be paying states hundreds of millions—or billions—of dollars more than what is appropriate. CMS has taken some steps to improve oversight of these payments, but has not established facility-specific reporting requirements, required annual independent audits of states' non-DSH payments, or specified uniform methods for calculating non-DSH supplemental payment amounts. Steps CMS has taken include issuing a state Medicaid Director letter in 2013 to obtain more information on non-DSH supplemental payments and awarding a contract in May 2014 to review Medicaid supplemental payment information, the outcomes of which were not yet known as of July 2015. CMS said in 2012 that legislation had been necessary to implement the reporting and auditing requirements for DSH payments and that legislation would likewise be needed for the agency to implement similar requirements for non-DSH supplemental payments.
Consequently, we have suggested that Congress consider requiring CMS to take steps to improve the transparency and accountability of non-DSH supplemental payments, including requirements similar to those in place for DSH. Our work has found that states are increasingly relying on providers and local governments to finance Medicaid, and has also pointed to the need for better data and improved oversight to ensure that Medicaid payments are financed consistent with federal requirements, to understand financing trends, and to ensure federal matching funds are used efficiently. Further, our work has shown that state flexibility to seek contributions from local governments or impose taxes on health care providers to finance Medicaid may create incentives for states to overpay providers in order to reduce states' financial obligations. Such financing arrangements can have the effect of shifting costs of Medicaid from states to the federal government. The benefits to providers, which may finance a large share of any new payments, and to the beneficiaries they serve may be less apparent. CMS is responsible for ensuring that state Medicaid payments made under financing arrangements are consistent with Medicaid payment principles, including that they are economical and efficient, and that the federal government and states share in the financing of the Medicaid program as established by law. To oversee the Medicaid program, it is important for CMS to have accurate and complete information on the amount of funds supplied by health care providers and local governments to states to finance the nonfederal share of Medicaid. As we reported in July 2014, our survey of all state Medicaid programs found that states are increasingly relying on providers and local governments to help fund Medicaid. For example, in state fiscal year 2012, funds from providers and local governments accounted for 26 percent (or over $46 billion) of the approximately $180 billion in the total nonfederal share of Medicaid payments that year—an increase from 21 percent ($31 billion) in state fiscal year 2008. (See fig. 1.) These sources were used to fund Medicaid supplemental payments—both DSH and non-DSH—to a greater extent than other types of payments, and we found this reliance was growing. For Medicaid DSH and non-DSH supplemental payments, the percentage of the nonfederal share financed with funds from providers and local governments increased from 57 percent (or $8.1 billion) in state fiscal year 2008 to 70 percent (or $13.6 billion) in state fiscal year 2012. Several states relied on health care providers and local governments for the entire nonfederal share of supplemental payments in 2012. Our reports have illustrated how this increased reliance on non-state sources of funds can shift costs from states to the federal government, changing the nature of the federal-state partnership. For example, in our July 2014 report, our analysis of arrangements in three selected states that financed the nonfederal share of Medicaid payments with funds from provider taxes or local governments illustrated how Medicaid costs can be shifted from the state to the federal government and, to a lesser extent, to health care providers and local governments. The use of funds from providers and local governments is, as previously described, allowable under federal rules, but it can also have implications for federal costs.
We found that, by increasing providers' Medicaid payments and requiring the providers receiving the payments to supply all or most of the nonfederal share, states claimed an increase in federal matching funds without a commensurate increase in state general funds. For example, in our 2014 report, we found that in one state a $220 million payment increase for nursing facilities in 2012 (which was funded by a tax on nursing facilities) resulted in an estimated $110 million increase in federal matching funds; no increase in state general funds; and a net payment increase to the facilities, after paying the taxes, of $105 million. (See fig. 2.) As we found in our 2014 report, due to data limitations, CMS is not well-positioned to either identify states' Medicaid financing sources or assess their impact. Apart from data on provider taxes, CMS generally does not require (or otherwise collect) information from states on the funds they use to finance Medicaid, nor does it ensure that the data it does collect are accurate and complete. The lack of transparency in states' sources of funds and financing arrangements hinders CMS's and federal policymakers' efforts to oversee Medicaid. Further, it is difficult to determine whether a state's increased reliance on funds from providers and local governments primarily serves to (1) provide fiscal relief to the state by increasing federal funding, or (2) increase payments to providers that in turn help improve beneficiary access. CMS has recognized the need for better data from states on how they finance their share of Medicaid and has taken steps to collect some data, but additional steps are needed. We recommended in July 2014 that CMS take steps to ensure that states report accurate and complete information on all sources of funds used to finance the nonfederal share of Medicaid, and offered suggestions for doing so. The Department of Health and Human Services (HHS) did not concur with our recommendation, stating that its current efforts were adequate; however, HHS acknowledged that additional data were needed to ensure that states comply with federal requirements regarding how much local governments may contribute to the nonfederal share, and stated that it would examine efforts to improve data collection for oversight. As of June 2015, HHS reported that its position continued to be that no further action is needed. Given states' increased reliance on non-state sources to fund the nonfederal share of Medicaid, which can result in costs shifting to the federal government, we continue to believe that improved data are needed to increase transparency and oversight, for example, to understand how increased federal costs may affect beneficiaries and the providers who serve them. In conclusion, the flexibility states have in how they pay providers and finance the nonfederal share has enabled states to make excessive payments to certain providers and allowed states to shift costs to the federal government. While Congress and CMS have taken important steps to improve the integrity of the Medicaid program through improved oversight of some Medicaid supplemental payments and financing arrangements, Congress and CMS need better information and more tools to understand who receives non-DSH supplemental payments and in what amounts, to ensure they are economical and efficient as required by law, and to determine the extent to which they are ultimately used for Medicaid purposes.
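The nursing facility example above can be restated as a short calculation. In the sketch below, the 50 percent matching rate is an assumption consistent with the reported $110 million federal share, and the $115 million tax is the amount implied by the reported figures; both are illustrative rather than drawn from the state's actual FMAP.

payment_increase = 220.0  # $ millions: increase in Medicaid payments to nursing facilities
provider_tax = 115.0      # $ millions: tax on the same facilities (implied by 220 - 105)
fmap = 0.50               # assumed federal matching rate, consistent with the $110M match

federal_share = payment_increase * fmap                              # 110.0, borne federally
nonfederal_share = payment_increase - federal_share                  # 110.0, covered by the tax
net_to_facilities = payment_increase - provider_tax                  # 105.0, facilities' net gain
new_state_general_funds = max(0.0, nonfederal_share - provider_tax)  # 0.0, no new state funds

print(federal_share, net_to_facilities, new_state_general_funds)

The effect is that the federal government supplies more new money than the facilities ultimately net, while the state commits no new general funds.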
Chairman Pitts, Ranking Member Green, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to respond to any questions that you might have at this time. If you or your staff have any questions about this testimony, please contact Katherine M. Iritani at (202) 512-7114. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this testimony include Tim Bushfield, Assistant Director; Robin Burke; Sandra George; Jessica Morris; Laurie Pachter; Said Sariolghalam; and Emily Wilson. The following table lists matters for congressional consideration regarding actions to improve the transparency of and accountability for the Medicaid non-disproportionate share hospital (DSH) supplemental payments states make to providers. It also includes recommendations we have made to the Department of Health and Human Services (HHS) regarding actions to improve data and oversight of the sources of funds states use to finance the nonfederal share of Medicaid. Medicaid: Key Issues Facing the Program. GAO-15-677. Washington, D.C.: July 30, 2015. 2015 Annual Report: Additional Opportunities to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits. GAO-15-404SP. Washington, D.C.: April 14, 2015. Medicaid: CMS Oversight of Provider Payments Is Hampered by Limited Data and Unclear Policy. GAO-15-322. Washington, D.C.: April 10, 2015. Medicaid Financing: Questionnaire Data on States' Methods for Financing Medicaid Payments from 2008 through 2012. GAO-15-227SP. Washington, D.C.: March 13, 2015, an e-supplement to GAO-14-627. High-Risk Series: An Update. GAO-15-290. Washington, D.C.: February 11, 2015. Medicaid Financing: States' Increased Reliance on Funds from Health Care Providers and Local Governments Warrants Improved CMS Data Collection. GAO-14-627. Washington, D.C.: July 29, 2014. Medicaid: Completed and Preliminary Work Indicate that Transparency around State Financing Methods and Payments to Providers Is Still Needed for Oversight. GAO-14-817T. Washington, D.C.: July 29, 2014. 2013 Annual Report: Actions Needed to Reduce Fragmentation, Overlap, and Duplication and Achieve Other Financial Benefits. GAO-13-279SP. Washington, D.C.: April 9, 2013. Medicaid: More Transparency of and Accountability for Supplemental Payments Are Needed. GAO-13-48. Washington, D.C.: November 26, 2012. Medicaid: States Reported Billions More in Supplemental Payments in Recent Years. GAO-12-694. Washington, D.C.: July 20, 2012. Medicaid: Ongoing Federal Oversight of Payments to Offset Uncompensated Hospital Care Costs Is Warranted. GAO-10-69. Washington, D.C.: November 20, 2009. Medicaid: CMS Needs More Information on the Billions of Dollars Spent on Supplemental Payments. GAO-08-614. Washington, D.C.: May 30, 2008. Medicaid Financing: Federal Oversight Initiative Is Consistent with Medicaid Payment Principles but Needs Greater Transparency. GAO-07-214. Washington, D.C.: March 30, 2007. Medicaid: Improved Federal Oversight of State Financing Schemes Is Needed. GAO-04-228. Washington, D.C.: February 13, 2004. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO.
However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Medicaid is a jointly financed program of over $500 billion for which the federal government matches state Medicaid expenditures. Within certain limits, states can make supplemental payments to providers in addition to their regular claims-based payments and receive federal matching funds. These payments have grown in the past decade. To finance the nonfederal share of Medicaid payments, states can use funds from local governments and providers, within federal parameters. CMS is responsible for overseeing state programs and ensuring that state payments are consistent with Medicaid payment principles—including that they are economical and efficient, and appropriately financed. States may have incentives to make excessive supplemental payments to certain providers that finance the nonfederal share of the payment. GAO has a body of work from 2004 to 2015 raising concerns with Medicaid supplemental payments and financing methods. Congress and CMS have taken actions to improve accountability for these payments, and GAO has made further suggestions for Congress and CMS. This statement highlights key issues and opportunities for improving transparency and oversight from GAO's work related to (1) certain supplemental payments states make to providers, and (2) states' financing of the nonfederal share of Medicaid. This testimony is based on GAO reports from 2004 to 2015 on state Medicaid financing and supplemental payments, and selected updates from CMS on the status of prior recommendations. GAO has found that complete and reliable data are lacking on the tens of billions of dollars in Medicaid supplemental payments states often make, hindering transparency and oversight. In a November 2012 report, GAO found that Congress and the Centers for Medicare & Medicaid Services (CMS) have acted to improve transparency and accountability for one type of Medicaid supplemental payment known as disproportionate share hospital (DSH) payments, made for uncompensated care costs experienced by hospitals serving low-income and Medicaid patients. Since 2010, DSH payments have been required to be reported to CMS and are subject to independent audits that assess their appropriateness. States also make other supplemental payments—referred to here as non-DSH payments—to hospitals and other providers that, for example, serve high-cost Medicaid beneficiaries. Gaps in oversight remained for non-DSH supplemental payments, which as of 2011 exceeded DSH in amounts paid. For example, GAO reported that 39 states made non-DSH supplemental payments to 505 hospitals that, along with regular Medicaid payments, exceeded those hospitals' total costs of providing Medicaid care by about $2.7 billion. Medicaid payments are not limited to a provider's costs for services, but GAO concluded in an April 2015 report that payments that greatly exceed costs raise questions about whether they are economical and efficient as required by law, and the extent to which they are ultimately used for Medicaid services. CMS lacks data on supplemental payments made to individual providers. Per federal internal control standards, agencies should have reliable information for decision making and reporting, and reasonable assurance that agency objectives, such as compliance with laws, are being met.
In 2012, CMS officials said legislation was needed to implement non-DSH reporting and auditing requirements, and GAO suggested that Congress consider requiring CMS to provide guidance on permissible methods for calculating non-DSH payments and require state reports and audits. GAO found in a July 2014 report that states are increasingly relying on providers and local governments to finance Medicaid and that data needed for oversight are lacking. About $46 billion, or 26 percent, of the nonfederal share was financed with funds from providers and local governments in 2012—an increase from 21 percent in 2008. GAO found that states' financing arrangements can effectively shift costs from states to the federal government. In one state, a $220 million payment increase for nursing facilities funded by a $115 million tax on nursing facilities yielded a net payment increase to the facilities of $105 million. The state obtained $110 million in federal matching funds for the payments. GAO found that CMS generally does not require or otherwise collect data from states on sources of funds to finance Medicaid, nor ensure that the data it does collect are accurate and complete. GAO identified, for example, incomplete reporting of provider taxes. As a result, CMS cannot fully assess the appropriateness of states' financing or the extent to which the increased reliance on providers and local governments serves to provide fiscal relief to states or improve access. Per federal internal control standards, agencies should collect accurate and complete data for monitoring. GAO recommended in 2014 that CMS improve the data states report on Medicaid financing. The agency disagreed, stating its efforts were adequate. GAO maintains its recommendation is valid.
The INA defines U.S. parameters for intercountry adoptions, establishing criteria for children's entry into the United States and eligibility requirements for prospective adoptive parents of children from foreign countries. A child is eligible for an immediate relative (IR) classification under the INA if the child meets the definition of an "orphan," as stipulated in the act. In addition, adopting parents must meet certain requirements related to their age, financial status, and medical condition. Over the past 5 years, there have been legislative developments in the U.S. intercountry adoption process, with further changes proposed. The Intercountry Adoption Act of 2000 (IAA) provides the domestic legislation to implement the 1993 Hague Convention on Protection of Children and Co-operation in Respect of Intercountry Adoption, hereinafter referred to as the Hague Convention or the Convention in this report. The IAA designates State to serve as the Central Authority of the United States to carry out most responsibilities of the Convention, including accreditation of adoption service providers. Also in 2000, the United States enacted the Child Citizenship Act to allow automatic citizenship for eligible adopted children. Furthermore, in 2005 congressional legislation was introduced regarding the U.S. intercountry adoption process. Intercountry adoptions may be a viable alternative to domestic adoptions for parents interested in adopting an infant, according to the National Adoption Information Clearinghouse. In fiscal years 2002 to 2004, DHS reported that at least 40 percent of children adopted by U.S. parents were under age 1, and at least 42 percent were between the ages of 1 and 4. In the past 10 years, the annual number of U.S. intercountry adoptions has consistently increased and nearly tripled, from more than 8,000 in fiscal year 1994 to more than 22,000 in fiscal year 2004 (see fig. 1). The majority of children adopted into the United States between 1994 and 2004 originated from four countries—China, Russia, South Korea, and Guatemala—which consistently ranked among the top five sending countries of children adopted by U.S. parents (see app. II for a listing of all intercountry adoptions by country in fiscal year 2004). In total, adoptions from these four countries accounted for over 70 percent of all U.S. intercountry adoptions in the past 10 years (see fig. 2). Adoptions from China, Russia, and Guatemala increased significantly between fiscal years 1994 and 2004, accounting for the vast majority (92 percent) of the total increase (about 14,000) in U.S. intercountry adoptions during this time. The number of adoptions from South Korea has remained more consistent during this time (see fig. 3). Although the United States is one of the world's leading receiving countries for intercountry adoptions, some U.S. children are adopted by foreigners. Statistics on the number of U.S. children adopted by foreigners are not currently collected on a national level; however, Canada, for example, reported adoptions of over 700 U.S. children from 1993 to 2002. In recent years, several countries, including the United States, have restricted intercountry adoptions from or to all or specified countries for various reasons, including concerns about fraud, medical concerns, natural disasters, and time needed for a country to review safeguards in its intercountry adoption process.
Although some of these restrictions have since been removed, such as those recently lifted in China, Azerbaijan, and Vietnam, several remain in effect. Currently, the United States has a suspension on intercountry adoptions from Cambodia—the only U.S.-imposed suspension—which was issued in 2001 due to evidence of widespread corruption. Additionally, several foreign governments currently have restricted intercountry adoptions, in some cases because the countries are examining their adoption process (see table 1). The U.S. intercountry adoption process, which is defined by the INA and primarily implemented by USCIS and State, is complicated by a number of domestic and foreign government requirements and can be separated into three phases. First, USCIS determines the parents' eligibility and fitness to adopt through its review of the prospective parents' application, home study reports, and background checks. Next, USCIS—or State, in countries where USCIS has no offices—determines the child's orphan status by examining documents and, when warranted, conducting overseas investigations. Finally, State's overseas consular officers verify the child's orphan status and eligibility for an immigrant visa. (Fig. 4 illustrates the three phases of the U.S. government's process for intercountry adoptions.) Depending on the type of visa issued, children admitted to the United States may qualify for automatic citizenship. Various factors, such as different foreign governments' requirements, contribute to varying lengths of time required for the process and the varying costs incurred by adoptive parents. In the first phase of the U.S. intercountry adoption process, USCIS determines the potential parents' suitability to adopt a child who resides outside of the United States. The INA defines qualifications for prospective adoptive parents. USCIS implements this requirement through its 68 domestic offices. Prospective parents submit to USCIS fingerprints; a filing fee; a home study; proof of compliance with preadoption requirements of the prospective parent's state of residence; other documents such as proof of citizenship, age, marriage license, and divorce decrees; and, in some cases, an Application for Advance Processing of Orphan Petition (Form I-600A). The home study is completed by a party approved under the laws of the prospective parent's state of residence, such as an adoption agency, and includes interviews with the family and an assessment of the prospective parent's suitability to adopt a child. After USCIS receives all required documents from parents, the agency reviews the documents, conducts background checks on all adult members living in the household, and makes its determination. If USCIS determines that the prospective parents are eligible to adopt and fit to provide the child with proper care, then it sends them a Notice of Favorable Determination Concerning Applications for Advance Processing of Orphan Petition (Form I-171H or I-797C). The second phase in the process requires U.S. federal agencies to determine the orphan status of the child to be adopted, as defined by the INA. Depending upon the location of the child, either a USCIS officer or a State consular officer determines whether the prospective adoptive child meets the U.S. immigration law definition of an orphan.
Once a child has been identified, adopting parents file a Petition to Classify the Orphan as an Immediate Relative (Form I-600) and provide proof of the child's age and identity, proof that the child is an orphan, and proof of a foreign government-issued adoption decree or guardianship. To obtain an adoption decree or guardianship in the foreign country, parents must meet the foreign government's laws and requirements. Foreign governments may place additional requirements on prospective parents, including those regarding the prospective parents' age and residency in country, as well as the requirement for parents to agree to provide post-adoption information. The Russian government, for example, requires an agreement from adoptive parents to provide periodic and on-time postplacement reports. However, Russian government officials have noted that American adoptive parents have not always complied with this requirement. USCIS or State officials review the documentation for Form I-600 and, depending on the specifics of the case, may also undertake a field investigation. Sometimes officers interview birth mothers or visit orphanages to verify the circumstances surrounding the child's orphan status. Following the completed review, USCIS or State officials make a determination. If all documents are sufficient and the child has been determined to be an orphan, USCIS or State officials approve the I-600 petition and send parents a Notice of Approval of Relative Immigration Visa Petition (Form I-171). The third phase of the process involves State's issuance of an immigrant visa to the child for admission into the United States. With respect to state law, most states grant full recognition to a foreign adoption decree. In some instances, states require adoptive parents to validate the foreign decree. After parents have an approved Form I-600, they submit a completed Immigrant Visa application (Form DS-230, Parts I and II) along with the application fee and required documentation. The consular officer then examines the documents, which include the following: the child's birth certificate and passport (or other valid travel document); evidence of adoption or legal custody for purposes of emigration and adoption, as well as evidence of whether the adopting parents saw the child prior to or during adoption proceedings (if applicable); and a record of the medical exam from the embassy's panel physician (this exam is largely to ensure that children with communicable diseases do not enter the United States; parents are advised by State to consult with other professionals for complete physical or mental evaluations of the orphan's health). If significant health problems are uncovered, parents may be asked to sign an affidavit acknowledging their desire to continue with the case given the medical condition of the child and, in some cases, parents may be requested to sign an affidavit regarding their intent to obtain necessary vaccinations for the child upon entry to the United States. State consular officers then approve the child for an IR-3 or IR-4 immigrant visa, if all documentation for the orphan is in order. State provides an immigrant visa package for the child to be presented to the Customs and Border Protection officer at the U.S. port of entry. According to State officials, the visa issuance process usually takes about 1 or 2 days. The INA establishes the criteria by which foreign-born children adopted by U.S. parents become U.S. citizens.
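Those criteria, detailed in the passage that follows, amount to a small decision rule. The sketch below is a hypothetical restatement for illustration only; the function and parameter names are invented, and statutory conditions (such as the child residing permanently in the United States) are simplified.

def citizenship_pathway(visa_type, age, final_adoption_abroad, state_requires_readoption):
    """Hypothetical restatement of the citizenship pathways described
    in the surrounding text; the INA contains conditions (for example,
    permanent residence in the United States) that are simplified here."""
    if visa_type == "IR-3" and age < 18:
        return "automatic citizenship as of the date of admission"
    if visa_type == "IR-4":
        if final_adoption_abroad and not state_requires_readoption:
            return "automatic citizenship at entry (most cases)"
        return "citizenship upon full and final adoption in the United States"
    return "outside the scope of this sketch"

# Example: an IR-4 child whose adoption was finalized abroad, moving to
# a state that does not require readoption.
print(citizenship_pathway("IR-4", 2, True, False))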
Foreign-born children under the age of 18 admitted to the United States and residing permanently in the United States based on an issued IR-3 visa automatically acquire U.S. citizenship as of the date of admission to the United States. USCIS reviews IR-3 visa packages and sends Certificates of Citizenship to eligible children without requiring any additional forms or fees. In most cases, children who receive IR-4 visas may automatically acquire U.S. citizenship at entry if there was a final adoption abroad by a U.S. citizen parent and the state where the child resides does not require readoption. Other children who receive IR-4 visas acquire U.S. citizenship upon full and final adoption in the United States. Various factors make it difficult to generalize the length of time required and exact costs incurred by adoptive parents for intercountry adoptions. Country-specific adoption requirements, particularly in the top sending countries of U.S. intercountry adoptions, may contribute to the different time frames it may take to adopt a child. For example, the Russian government requires adoptive parents to travel to Russia to meet the prospective adoptive child. Since Russia also requires that the child remain in Russia before the court hearing, adoptive parents may travel a second time to Russia to attend the court hearing and adopt the child. In addition, procedural requirements in the foreign country may be difficult to meet. For instance, in Guatemala, birth certificates of the adopted child and documents proving the identity of birth mothers may be difficult and time-consuming to locate. Other factors, such as the prospective parents' ability to provide adequate and timely information to meet U.S. intercountry adoption requirements, may also contribute to the length of time it takes for U.S. government officials to approve adoptions. For example, prospective parents may file application forms for intercountry adoptions with USCIS, which allows prospective parents up to 12 months to submit supportive documentation, such as the home study. Estimated total adoption costs incurred by adoptive parents may also vary depending on the foreign country. Table 2 shows the variations in the estimated length of time that foreign governments take to approve typical U.S. intercountry adoption cases in country and the estimated cost of an adoption incurred by adoptive parents among the top sending countries of U.S. intercountry adoptions. In 2002, an interagency task force on intercountry adoptions was created to examine ways to improve the U.S. intercountry adoption process, and USCIS and State implemented most of the priorities that the task force identified as necessary for improving the adoption process. USCIS has taken measures to review the quality of the adoptions process but lacks a structured quality assurance program where results are summarized and communicated to senior agency officials. USCIS and State have taken several measures to improve the intercountry adoptions process. The Commissioner of the Immigration and Naturalization Service (INS) made intercountry adoptions a priority for the agency in March 2002, after suspending orphan visa processing for Cambodia. The INS created an adoptions task force with State to comprehensively review the existing INS structure for handling intercountry adoptions.
The task force identified several priorities to improve the intercountry adoption process, and the agencies have, over the past 3 years, addressed many of the priorities by taking the following actions:

Improved interagency coordination: The task force suggested that coordination needed to continue and that USCIS consider how adoption work should be distributed and coordinated between USCIS and State. USCIS and State's Bureau of Consular Affairs have established a relationship to work together to address and resolve adoption issues. For instance, the agencies hold quarterly meetings to coordinate implementing changes to regulations, discuss challenges to the process, and improve the forms used by officials in the process. In particular, USCIS and State officials regularly discuss specific adoption cases and issues that arise in overseas posts, as well as regulatory, administrative, and policy matters related to intercountry adoptions. Moreover, in April 2005, USCIS and State's Bureau of Consular Affairs officials met at a USCIS field office to establish a mechanism for sharing information on visa processing.

Improved efforts to communicate with parents: The adoptions task force identified the need for USCIS and State to provide advisory notices for prospective adoptive parents. The task force suggested that emphasis should be placed on ensuring that parents were consistently informed about procedures, as well as prohibitions related to child buying in the process, and that parents receive the most current and complete information available on issues identified in countries where adoptions occur. To improve communications with parents, USCIS and State's Office of Children's Issues in the Bureau of Consular Affairs provide information on their Web sites; USCIS and State officials meet with adoption organizations and parents to discuss various issues; and USCIS field offices have taken steps to provide customer service to parents. For example, USCIS and State Web sites provide information on the process in the United States and overseas, and the roles of both agencies, and alert parents to potential concerns through advisory notices about adoption procedures in other countries. USCIS and State's Office of Children's Issues provide outreach to parents and the adoption community by presenting information at regional and national conferences. In the two domestic USCIS field offices we visited, we found that officials had taken several actions to communicate with prospective adoptive parents, such as establishing procedures to meet prospective parents and providing telephone numbers for parents to directly contact the Adoption Adjudication Officer.

Developed standard operating procedures: Another priority of the adoptions task force was for USCIS to provide consistent guidance for its field officers adjudicating orphan petitions by developing standard operating procedures on how to determine parents' suitability to adopt and a child's orphan status. The task force pointed out that, in some cases, an adjudicator may be the only person in the field office who handles adoptions and may have a supervisor without adoption expertise reviewing the adjudicator's work. The task force also reported that, even among the best adjudicators, there was little procedural consistency in adjudicating adoptions and no centralized guidance from headquarters.
To address this priority, in 2003, USCIS developed standard operating procedures on adoption adjudications and made them available to its staff electronically. We reviewed the standard operating procedures and found that they describe the process for adjudicating orphan petitions in considerable detail. Furthermore, State provides guidance to consular officers on how to process orphan visa cases in its Foreign Affairs Manual.

Conducted agency training: The task force noted that both USCIS and State officials sometimes lacked the training necessary to determine orphan status. In response, USCIS developed training materials for its domestic and overseas field adjudicators for determining orphan status and conducted an intercountry adoptions training course in 2002, which some State officials attended. We reviewed the training materials and found that they re-emphasized INA statutes for defining orphan status, defined the agency's role and responsibilities for adjudicating orphan petitions, and provided details for conducting orphan investigations. In addition, to assist consular officers in their orphan investigations, State offers fraud training to help its officers ascertain whether information in documents for determining orphan status, such as birth certificates, is false or whether documents have been altered or falsified. According to a State official, State increased the frequency of the course from twice a year to 8 to 10 times per year in 2005.

Streamlined the intercountry adoption process: The task force emphasized that the agency continue to demonstrate its commitment to maintaining the intercountry adoption process as a priority. USCIS streamlined some of its intercountry adoption procedures as a result. A USCIS official acknowledged that the agency is challenged to balance the prospective parents' interest in creating their new family as quickly as possible with the need to review and process each application and the required documents in accordance with U.S. law. To address issues relating to timeliness, the agency has instituted a policy requiring completion of all immigration-related applications within 6 months. Between October 2003 and July 2005, the agency processed adoption applications in less than 4 months, on average. According to USCIS officials, many petitioners file an incomplete Advanced Orphan Petition (Form I-600A) or Orphan Petition (Form I-600) while they are in the process of completing their home study. Regulations allow the petitioner(s) up to 12 months to submit supportive documentation. The agency strives to process completed applications within 30 days of receiving all required documentation, according to USCIS officials. In addition, in November 2003, USCIS made efforts to eliminate its backlog of U.S. citizenship certificates by centralizing the process. USCIS advised its field offices to send cases to a central location for processing if they needed assistance with their backlogs. According to a USCIS official, from November through December 2003, this central office processed 671 of the 700 backlogged cases—the remaining cases were either denied or returned to the field office for additional follow-up. In addition, to streamline and simplify the issuance of Certificates of Citizenship for adopted children who receive IR-3 visas and have their adoption finalized overseas, USCIS created the IR-3 Entrant Program in January 2004.
The program eliminates the application and fee for citizenship certificates for about 70 percent of children adopted by U.S. citizens. A USCIS official noted that the agency has consistently met its goal to provide certificates within 45 days after the family has entered the United States. Further, in the intercountry adoption process, a system of checks and balances has developed that allows State, in the visa issuance phase, to review USCIS paperwork before issuing the visa to the child. Additionally, as a part of the streamlined process for issuing the citizenship certificates to adopted children, USCIS reviews IR-3 immigrant visa packages to verify that the child acquired automatic U.S. citizenship upon admission to the United States as required under the INA. Since April 2005, USCIS and State have established procedures to specifically address issues in visa classification decisions. The adoptions task force reported that USCIS should adopt a quality assurance program for orphan adjudications. Although USCIS has taken measures to review the quality of the adoptions process, we found that USCIS has not developed a formal quality assurance process similar to the programs used in other areas of the agency. Such a program could include procedures such as reviewing random cases adjudicated by field officers to determine whether they follow agency guidance and procedures, as well as evaluating statistics on the average processing times of orphan petitions by field office. The task force reported that, even among the best officers, there was little procedural or substantive consistency in approach or a centralized source of guidance from headquarters. In 2003, USCIS developed standard operating procedures that detailed a step-by-step approach for adjudicating orphan petitions with the goal of improved consistency among its adjudications, but the agency has not provided training to adjudicating officers on these new procedures. The task force also reported that problems in adjudicating orphan petitions can be traced to the fact that individuals train their successors as best as they can, but there is no routine mechanism for obtaining feedback on written work from someone with specialized adoption expertise. In the field offices we visited, we found that the adjudicators had learned the process for adjudicating adoption petitions mostly through on-the-job training and mentoring. USCIS held intercountry adoptions training in 2002 as a result of the task force's suggestion; however, not all adjudicators who process adoption cases attended the training. A USCIS official stated that the agency plans to hold similar training in fiscal year 2006, but no dates have been set. In January 2004, as part of the Child Citizenship Act Program, the USCIS Buffalo office began its review of adoption documents for children who received IR-3 visas to help ensure the accuracy and completeness of the information relating to the adoption process. This review covers about 70 percent of adoption cases; the remaining 30 percent are not reviewed. The USCIS Buffalo office has an informal process for addressing individual cases by sending e-mails to relevant officials at USCIS and State. We reviewed several of the e-mails sent by the USCIS Buffalo office, which identified issues such as insufficient documentation, misclassifications of visas, and inconsistent interpretation of INA regulations.
While USCIS Buffalo's review process has merit, the results of the review are not summarized and formally reported to either senior USCIS or State officials. Moreover, USCIS does not have a structured approach that would allow the agencies to assess the quality of the intercountry adoption process over time, ensure that senior officials from USCIS and State are aware of the results, and identify opportunities where additional training or guidance may be warranted. The conditions in foreign environments can contribute to potential abuses in the intercountry adoption process. USCIS and State have taken steps to increase safeguards and mitigate the potential for fraudulent adoptions, though USCIS has not formally and systematically documented specific problematic incidents to help USCIS adjudicators better understand the potential pitfalls in some intercountry adoptions. While the U.S. immigration law covering intercountry adoptions is designed to ensure that adopting parents are suitable and fit to provide proper care of the child and that the foreign-born child is an orphan, conditions in some countries—such as corruption and the lack of a legal framework over intercountry adoptions—may lead to abuses in the intercountry adoption process. According to the United Nations Children's Fund (UNICEF), such abuses are more likely to occur in countries where legislative provisions are nonexistent, inadequate, or plagued with gaps and loopholes. UNICEF research also found that the risk of abuse increases significantly when government entities that oversee the adoption process are absent or insufficient, or when prospective parents act through intermediaries that may not be licensed. For example, State has received a growing number of complaints concerning adoption facilitators operating in various countries. Licensing of agents and facilitators is done in accordance with local law. However, not all foreign governments require that agents and facilitators be licensed. Accordingly, it can be difficult to hold facilitators accountable for fraud, malfeasance, or other bad practices in general. According to USCIS, State, and UNICEF, there have been some known cases of abuse in intercountry adoptions, which include exchanging a child for financial or material rewards to the birth family, or "child buying"; deliberately providing misleading information to birth parents to obtain their consent; providing false information to prospective adopters; and obtaining favorable adoption decisions from corrupt local or central government officials. Past difficulties with intercountry adoptions in foreign countries, most notably in Cambodia, illustrate the potential effect of high-risk adoption environments. The United States issued a suspension on intercountry adoptions from Cambodia in December 2001 after receiving complaints from nongovernmental organizations in Cambodia that criminals were involved in "baby buying" for adoptions. From 1997 to 2001, the conspirators operated a scheme to defraud U.S. citizens who adopted some 700 children from Cambodia. The conspirators received approximately $8 million from adoptive parents in the United States. The conspiracy involved assorted crimes, including alien smuggling, visa fraud, and money laundering, and included schemes in which baby buyers obtained children from birth parents by telling them that they could have their child back at any time and then obtained false Cambodian passports to enable the children to leave the country.
In 2004, after a DHS investigation, a U.S. adoption facilitator pleaded guilty to conspiracy to commit visa fraud and conspiracy to launder money. As of the date of this report, the United States has also noted ongoing concerns with intercountry adoptions in certain countries. These countries include Guatemala, where a large number of children adopted by U.S. citizens originate, as well as Nepal, Nigeria, and Sierra Leone. In Guatemala, USCIS and State noted on their Web sites that the use of a false birth mother to release her child is the usual method chosen by unscrupulous operators to create a paper trail for an illegally obtained child. USCIS and State officials acknowledged the known problems in Guatemala of birth mothers who are paid by private adoption attorneys to relinquish their children for adoption. According to State, problematic cases may be further complicated by the high incidence of corruption and civil document fraud in Guatemala. In Nepal, visa fraud is a significant problem facing potential adoptive parents, according to State's Web site. State also emphasized that document and identity fraud related to adoptions are serious concerns in Nigeria, and a high rate of adoption fraud has been uncovered in Sierra Leone. USCIS and State have taken various steps to strengthen safeguards against abuses associated with adoptions from foreign countries. USCIS and State's Office of Children's Issues coordinate to provide publicly available information that alerts prospective adoptive parents to country-specific adoption processes and to serious problems that may develop or already exist in those processes. State officials also hold diplomatic discussions with foreign countries regarding intercountry adoptions. Through discussions between the United States and Vietnam, for example, intercountry adoptions, which had been suspended, resumed after the two countries signed an agreement of cooperation in June 2005. In addition, USCIS has established written guidance for ways to identify fraud in intercountry adoption cases. The guidance provides that USCIS officers consider specific fraud indicators, such as documentary deficiencies and delays in registering birth certificates. Furthermore, State has provided fraud prevention management training to State consular officers, and USCIS officials said that all USCIS officers receive training, which includes an antifraud segment, when hired. USCIS and State have established procedures for determining the orphan status of the child based on the conditions that exist in the country relating to the intercountry adoption process. These procedures may add to the length of time for the adoption process. For example, in countries where the adoption process is clear and transparent, and when officers deal with adoption agencies with high standards, USCIS allows the field investigation to be completed through a documentary review. In some instances, however, deficiencies or inconsistencies in the documentation presented will require in-depth field investigations. In certain countries, these investigations can include additional steps, such as interviews with the birth mother, DNA testing when necessary and feasible, and interviews with adoption entities such as facilitators, orphanage directors, and local officials. In Guatemala, for example, USCIS requires DNA testing in all cases where the child is released by an identified birth mother due to concerns over the use of false birth mothers to release illegally obtained children.
In Nigeria, where document and identity fraud related to adoptions are serious concerns, all adoptions are required to undergo full field investigations to verify the authenticity of the information provided in the adoption decrees and U.S. orphan petitions. These added steps may contribute to a lengthier completion time for intercountry adoptions, but they may also help the U.S. government ensure the legitimacy of information provided on the adopted child and detect fraud. Both USCIS and State publicize general knowledge of the risk environment in foreign countries. However, while State has documented specific concerns via cable communication, USCIS has not established a procedure to systematically document instances of individual problematic situations identified through its intercountry adoption work in foreign countries, including its staff's knowledge of unscrupulous adoption attorneys and facilitators, as well as disreputable orphanages and adoption agencies. Although USCIS officials informed us that they discuss the risk environment in foreign countries with overseas staff on a periodic and informal basis, agency officials' knowledge of specific incidents of concern may be better captured through systematic documentation. In Guatemala, for example, USCIS staff informed us of instances where facilitators may have provided substantial funds to birth mothers who have relinquished their children for adoption. A USCIS official had also banned specific adoption attorneys in Guatemala from submitting intercountry adoption cases due to suspicions of the attorneys' fraudulent practices. This information, however, is communicated anecdotally to other USCIS officials without being specifically and systematically documented. In addition, while USCIS provides information for State's publicly available notices documenting country conditions, individual and detailed accounts of concern are not systematically captured. USCIS officials also noted that they have access to cables prepared by State officials that discuss concerns with intercountry adoptions in foreign countries. The contents of State's cables range from documentation of general risks to intercountry adoptions in foreign countries to findings from orphan investigations in specific adoption cases. GAO internal control standards specify that agencies consider adequate mechanisms to identify risks arising from external factors, including careful consideration of the risks resulting from interactions with other federal entities and parties outside the government. Our standards also note that agency management should establish a formal process to analyze these risks. Documentation by USCIS staff of specific problematic incidents would provide a systematic method to retain institutional knowledge, analyze trends in the occurrence of these problems, and share critical information. The Hague Convention, which governs intercountry adoptions, establishes minimum standards designed to help alleviate some of the risk associated with foreign governments' adoption processes. The United States has signed the Convention and taken key steps toward implementation but has not yet formally ratified it, and some of the top countries sending children to the United States have also not ratified the Convention.
The Hague Convention on Protection of Children and Co-operation in Respect of Intercountry Adoptions is designed to help alleviate some of the risks associated with the adoption process by establishing international minimum standards that sending and receiving countries must abide by. In particular, the objectives of the Convention are (1) to establish safeguards to ensure that intercountry adoptions take place in the best interests of the child and with respect for his or her fundamental rights as recognized in international law; (2) to establish a system of cooperation among Contracting States to ensure that those safeguards are respected and thereby prevent the abduction, the sale of, or traffic in children; and (3) to secure the recognition in Contracting States of adoptions made in accordance with the Convention. The Convention's standards apply only to intercountry adoptions in which both the sending and the receiving country have ratified the Convention. More specifically, the Convention's safeguards include the required designation of a Central Authority in each country to implement Convention procedures, as well as requirements regarding prospective parents, adoptable children, and other involved entities. These Central Authorities must coordinate with each other, provide evaluation reports on their country's adoption experiences to other countries, and take the appropriate measures to prevent improper financial gain from an intercountry adoption, among other responsibilities. Additionally, under a Hague adoption, a prospective adoptive parent must apply for an adoption through the receiving country's Central Authority. To meet the Convention's requirements, the sending country must establish that a child is eligible for adoption by ensuring that adoption is in the best interest of the child and that appropriate counseling and consents, not induced by payment, have been provided, while the receiving country must determine that the prospective parents are eligible and suitable to adopt. Moreover, the Convention requires the receiving country to ensure that the child will be authorized to enter and permanently reside in the country before the adoption takes place, though this is not a prerequisite to parent-child contact. Under the Convention, the Central Authority may delegate certain functions to a public authority or an accredited body, as long as this body pursues only nonprofit objectives, is staffed by qualified persons, and is subject to supervision. The Convention also permits many Central Authority functions to be performed by other bodies or persons who meet certain ethical, training, and experience standards. Although the Convention establishes minimum standards, it does not establish formal means to determine whether countries are complying with them. The United States has monitoring mechanisms, outlined in the IAA, to ensure that it adheres to the standards of the Convention; however, the Convention does not include mechanisms to determine whether other countries are doing so as well. For example, although the Convention requires sending countries to prepare a report for each child, it is up to the sending country to ensure that its report is factual. Although State has taken some key steps to implement the Convention, several remain. The United States signed the Convention in 1994. In 2000, the Senate gave its advice and consent, but the United States will not formally ratify the Convention until it is able to carry out the obligations required of it by the Convention.
That same year, the United States passed the implementing legislation, the IAA, which established State as the U.S. Central Authority for intercountry adoptions. Following passage of the legislation, State has taken several steps to prepare for implementation, including drafting and issuing regulations for comment as required by the IAA, hiring staff to carry out some of the responsibilities of the Convention, and requesting applications for accrediting entities, which under the IAA are responsible for accrediting adoption service providers. However, some key steps remain, which include finalizing regulations, deploying case registry software to track adoption cases, and signing agreements with accrediting entities. Implementation of the Convention is one of State's highest priorities, according to State officials. Figure 5 provides a detailed time line of the United States' implementation of the Convention. State officials said the implementation of the Convention is a long-term project and attributed its lengthiness to several challenges. First, the IAA required that State consider the standards or procedures developed or proposed by the adoption community before issuing regulations. Once regulations were issued to the public for comment, State received about 1,500 comments from more than 200 entities expressing a wide range of views, some calling for more stringent standards and others for less stringent ones. State revised the regulations, including responses to comments, and sent them to the Office of Management and Budget. Second, State is further challenged in drafting agreements with accrediting entities because such entities could be either state licensing bodies or nonprofits, both of which are subject to a variety of different restrictions, such as state laws. Finally, State must complete some steps in sequential order. For example, agreements with accrediting entities cannot be signed until the regulations are finalized. Further, although State's target date for implementation is fiscal year 2007, agency officials stated that they could not predict how long it will take accrediting entities to approve adoption service agencies. According to officials, State will have completed the work needed to implement the Convention by the end of 2005, at which time adoption service providers will have to apply for accreditation. In addition, USCIS must revise regulations to implement the Convention and, according to agency officials, is currently in the process of making the revisions. Since the Convention's creation, 66 countries (which represented about 39 percent of all U.S. intercountry adoptions in fiscal year 2004) have ratified it. Three of the four top sending countries for U.S. intercountry adoptions—Guatemala, Russia, and South Korea—have not ratified the Convention. In September 2005, China, the top sending country for U.S. adoptions, ratified the Convention. State officials said that Russia is working toward ratification of the Convention. Guatemala acceded to the Convention in 2002, but, in 2003, a Guatemalan court ruled the accession unconstitutional on technical grounds. South Korea has not signed the Convention. See appendix IV for the countries that have implemented the Convention, the total number of U.S. intercountry adoptions in fiscal year 2004 from these countries, and the top four sending countries. Following implementation of the Convention, the United States plans to continue adoptions with non-Hague Convention countries, according to State officials.
However, State officials told us that prior to U.S. implementation of the Convention, other Hague Convention countries could potentially suspend adoptions with the United States. For example, Costa Rica announced the suspension of non-Hague adoptions in 2003, though the country has continued to allow some adoptions by U.S. citizens, according to State officials. With more and more U.S. citizens expanding their families by adopting children who live in other countries, it is important for the U.S. government to have procedures in place that provide prospective parents with transparent and accessible information on adoptions, coordination between the two primary agencies responsible for implementing U.S. immigration law on intercountry adoptions, and mechanisms to evaluate the suitability of prospective parents and to determine the child's status as an orphan and eligibility to immigrate to the United States. The United States has such procedures in place, and the designated agencies have worked to improve them. While U.S. adoptive parents desire a smooth and expedited adoption process, agency officials are challenged to balance the importance of prioritizing and expediting intercountry adoption cases with the need to conduct thorough investigations to ensure the legitimacy of each adoption. In recent years, State and USCIS have taken measures to enhance the intercountry adoption process. However, the development of an intercountry adoptions quality assurance program would help to ensure that U.S. intercountry adoption procedures are consistently followed domestically and worldwide. Moreover, because foreign governments play a prominent role in U.S. intercountry adoptions, the U.S. government is limited in its ability to mitigate abuses that take place abroad. Although the U.S. government has taken measures to address some risks to intercountry adoptions, systematic documentation of such incidents in foreign countries would provide a formal mechanism for retaining and sharing specific information among staff working on adoption cases. To improve the management of the U.S. intercountry adoption process, we recommend that the Secretary of Homeland Security, working with the Director of USCIS, take the following two actions: formalize USCIS's quality assurance mechanisms so that the agency can assess the quality of the intercountry adoption process over time, ensure that senior officials from USCIS and State are aware of the outcomes of the quality assurance process, and identify opportunities where additional training or guidance may be warranted; and consider establishing a formal and systematic approach to documenting specific incidents of problems in intercountry adoptions that USCIS has identified in foreign countries, in order to retain institutional knowledge and analyze trends involving individuals or organizations engaged in improper activities. The Departments of Homeland Security and State provided written comments on a draft of this report (see apps. V and VI). We incorporated technical comments from both agencies as appropriate. Both DHS and State generally agreed with the draft report's observations and conclusions. DHS agreed with our two recommendations for USCIS to formalize its quality assurance mechanisms and for USCIS to consider establishing a formal and systematic approach to document specific incidents of problems in intercountry adoptions identified in foreign countries.
DHS also commented that the report accurately describes the process and responsibilities of both agencies in the intercountry adoption process. DHS noted that both agencies have improved interagency coordination and efforts to communicate with parents, developed standard operating procedures, conducted training, and streamlined the process. State commented that the department and USCIS have established procedures that provide prospective adoptive parents with transparent and accessible information, ensure coordination between their distinct supportive roles, and meet the requirements of U.S. law. State also noted that the report outlines the separate responsibilities of each agency and recognizes the steps that both agencies have taken to ensure that intercountry adoptions take place within the context of strong safeguards. State replied that it is deeply concerned about the welfare of children around the world and regards the Hague Convention as an important means of promoting strong safeguards and of ensuring that intercountry adoption remains a viable option for children who seek permanent family placements. The department also discussed its actions on implementing the Hague Convention, indicating that it plans to complete the necessary implementation tasks in 2005 and to enable adoption service providers to be accredited in time for the United States to ratify the Convention in 2007. Also, State provided additional information regarding its outreach efforts to the adoption community, which we added to the report, as well as supplementary information on Department of State actions on intercountry adoptions in selected countries. We will send copies of this report to appropriate Members of Congress, the Secretaries of the Departments of Homeland Security and State, and the Director of U.S. Citizenship and Immigration Services. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4128 or fordj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII. To describe the U.S. intercountry adoption process, we met with Department of Homeland Security (DHS) U.S. Citizenship and Immigration Services (USCIS) and Department of State (State) officials in Washington, D.C., with responsibilities for managing intercountry adoption cases. Specifically, we met with officials in USCIS's Offices of Field Operations and International Operations and State's Bureau of Consular Affairs' Offices of Children's Issues and Visa Services. We reviewed documents on U.S. intercountry adoptions, including information on USCIS and State Web sites, which outlined specific procedures for prospective adoptive parents to complete during the intercountry adoption process, as well as information on the child's eligibility for citizenship. Furthermore, we reported on the varying intercountry adoption completion times and costs incurred by adoptive parents in the top four sending countries based on discussions with officials from State's Office of Children's Issues, who obtained this information from State's Bureau of Consular Affairs officers handling intercountry adoption cases in the four overseas locations.
These Bureau of Consular Affairs officers provided approximate time frames based on their experiences in reviewing intercountry adoption cases. We did not test the reliability of these data. To assess the U.S. government agencies' efforts to manage the intercountry adoption process, we interviewed USCIS and State officials in Washington, D.C. We reviewed a USCIS task force report from June 2002, which assessed the U.S. intercountry adoption process and identified areas for improvement. We discussed actions taken to address these issues with USCIS officials. We further reviewed USCIS and State documents to understand their procedures for handling intercountry adoption cases and their management of the process. These documents include USCIS Standard Operating Procedures and Adjudicator's Field Manual, as well as State's Foreign Affairs Manual. Additionally, we reviewed USCIS and State's caseload data, documentation of communication and coordination between the two agencies, as well as training materials provided to USCIS and State officials related to intercountry adoptions. The estimate of the time that USCIS officials took to process intercountry adoption applications relies on the agency's Performance Analysis System (PAS). We tested the reliability of the PAS data and found them sufficiently reliable at the aggregate level to identify overall trends. We also visited USCIS offices in New York and Los Angeles to see how USCIS implements domestic procedures related to intercountry adoptions. Both of these offices ranked in the top 20 percent of domestic offices for the number of adoption cases received in fiscal year 2004, and they provide geographic diversity. Additionally, we visited Guatemala City, Guatemala, and Moscow, Russia, to see how USCIS and State implement intercountry adoption procedures overseas. Our reasons for selecting those two countries are discussed below. Furthermore, we contacted U.S. adoption organizations and agencies to understand their roles, and we heard about experiences with the U.S. intercountry adoption process from some adoptive parents who belonged to national adoptive parent support groups and were willing to respond to us. To understand the intercountry adoption process and the adoption environment in overseas locations, we visited Guatemala City and Moscow; we selected these locations for various reasons. Both countries were consistently ranked among the top five sending countries for U.S. intercountry adoptions from fiscal year 1994 to fiscal year 2004. In Guatemala City, USCIS officials adjudicate all orphan petitions to determine the eligibility of the adopted children, while State officials issue visas to the adopted children. In contrast, State officials with designated responsibility in Moscow approve the eligibility of the adopted children and issue their visas. In addition, USCIS and State officials noted concerns with intercountry adoption abuses in Guatemala, which research conducted for the United Nations Children's Fund (UNICEF) corroborated. In both countries, we interviewed USCIS and State Bureau of Consular Affairs officials managing the U.S. intercountry adoption process. We also met with foreign government officials and private adoption facilitators and visited orphanages to learn about their respective roles in the adoption process and their views on the risks associated with intercountry adoptions in Guatemala and Russia.
The information on foreign law in this report does not reflect our independent legal analysis but is based on interviews and secondary sources. To describe the Hague Convention and the statuses of U.S. and top sending countries' implementation of the Convention, we analyzed the text of the Convention on the Protection of Children and Co-operation in Respect of Intercountry Adoption to identify the purpose and standards of the Convention. We also analyzed the U.S. Intercountry Adoption Act of 2000, which provides for the implementation of the Convention by the United States. We also reviewed State's Web sites and documents regarding the status of its implementation of the Convention, including comments on draft regulations and State's Fiscal Year 2006 Performance Summary. In addition, we interviewed USCIS and State officials to determine the status of the U.S. implementation of the Convention. Finally, we obtained data on other countries' ratification of the Convention from the Web site of The Hague Conference on Private International Law, which tracks the statuses of countries that have signed and ratified the Convention. To verify that these data were current, we interviewed an official of the Hague Conference's Permanent Bureau. Furthermore, we discussed with Guatemalan and Russian government officials the status of the Convention's implementation in their respective countries. We performed our work from February to October 2005 in accordance with generally accepted government auditing standards. Although USCIS has a Sub Office in Moscow, Russia, USCIS officials in Moscow do not handle U.S. intercountry adoption cases. Instead, State's consular officers approve these cases in Moscow. The following are GAO's comments on the letter from the Department of State dated October 12, 2005. 1. State commented that the draft report's table of countries that currently maintain restrictions on intercountry adoptions included countries affected by the 2004 South Asian tsunami.
The department did not believe that it was appropriate to do so because, in contrast to the other countries included in the table, the region was following accepted international practices rather than imposing restrictions on intercountry adoptions. We removed this listing from the table. 2. State provided additional information regarding its outreach efforts to the adoption community, which we added to the report. In addition to the individual named above, Phyllis Anderson, Assistant Director; Joe Carney; Tracey Cross; Mark Dowling; Joel Grossman; Rhonda Horried; and Victoria Lin made key contributions to this report.
U.S. intercountry adoptions nearly tripled from more than 8,000 to more than 22,000 between fiscal years 1994 and 2004. While the Department of State (State) and U.S. Citizenship and Immigration Services (USCIS) manage the process, factors ranging from corruption to inadequate legal frameworks in foreign countries could lead to abuses such as the abduction of children. GAO (1) describes the U.S. intercountry adoption process, (2) assesses the U.S. government's efforts to manage the intercountry adoption process, (3) assesses U.S. efforts to strengthen safeguards and mitigate the potential for fraudulent adoptions, and (4) describes the Hague Convention (Convention) and the statuses of U.S. and top sending countries' implementation of the Convention. Adoptive parents must meet domestic and foreign government requirements to complete intercountry adoptions. However, factors such as foreign governments' procedures may contribute to varying time frames for adoptions. USCIS and State, the domestic agencies responsible for intercountry adoptions, have made efforts to enhance the process by improving interagency coordination and communication with parents and by developing additional guidance on adoptions. In addition, USCIS streamlined the intercountry adoption process by eliminating the application and fees for parents to obtain U.S. citizenship certificates for eligible children. While USCIS has taken measures to review the quality of the adoptions process, GAO found that the agency does not have a formal quality assurance program in place in which results are summarized and reported to senior agency officials so that the quality of the intercountry adoption process can be assessed over time. Factors in foreign countries' environments may allow for abuses in adoptions. To reduce the likelihood of such abuses, USCIS and State have taken such steps as holding diplomatic discussions with foreign governments and imposing additional U.S. procedural requirements. However, USCIS has not established a formal and systematic process for documenting specific incidents of problems in foreign countries. Such a process would provide a systematic approach for analyzing problematic trends and retaining institutional knowledge. The Hague Convention governing intercountry adoptions establishes minimum standards designed to help alleviate some of the risk associated with the adoption process. The United States has signed the Convention and taken several steps toward implementing it; however, key steps remain, including formal ratification. Since its creation, 66 countries (which represented about 39 percent of all U.S. intercountry adoptions in fiscal year 2004) have ratified the Convention.
Compared with RAPS data, MA encounter data contain more data elements, including data on diagnoses not used for risk adjustment. Specifically, MAOs may enter more diagnoses on each encounter data submission than on each RAPS data submission and may transmit encounter data more frequently than RAPS data. (See table 1.) Although there are a number of differences between RAPS data and encounter data, an important distinction is a shift in who is responsible for identifying diagnoses used for risk adjustment. Under RAPS data submissions, MAOs individually analyze all their claims data and submit only data with diagnoses that are relevant for risk adjustment. In contrast, for encounter data submissions, MAOs transmit data to CMS on all enrollee encounters, regardless of whether the encounter contains diagnoses used for risk adjustment. In addition, the encounter data submission process involves a number of steps and exchanges of information between providers, MAOs, and CMS. First, MAOs collect encounter data—originating from an enrollee's medical record—from providers to manage and process reimbursement for health care services and supplies for enrollees. After collecting and reviewing these data, MAOs submit the data to CMS through the Encounter Data System. CMS then processes and checks the data for problems. CMS designed the system to reject and return problematic data to MAOs for corrections. MAOs are expected to work with providers to correct the encounter files and resubmit the data to CMS. (See fig. 1.) Since our July 2014 report, CMS has taken additional steps across several activities to ensure that MA encounter data are complete but has yet to fully address data accuracy. (See fig. 2.) The agency has taken the following steps, which address primarily the completeness of encounter data and provide feedback to MAOs on data submission: Creating a report card with basic statistics on the completeness of encounter data for MAOs. This step partially fulfills CMS's Medicaid data validation protocol activity to conduct statistical analyses to assess completeness and accuracy. Analyzing values in specific data elements and generating basic statistics on their volume and consistency can help detect data validity issues. Agency officials told us they are using the report cards to encourage MAOs to submit data more frequently and completely. The report cards contain the following information: quarterly performance indicators. These indicators relate to submission frequency (such as the percentage of biweekly periods with submitted data), data volume (such as the number of submitted encounters per 1,000 enrollees), and data quality (such as rejection rates of data submissions). comparisons with other MAOs. These MAO-specific comparisons display an MAO's volume of encounters—overall and by service type (professional, inpatient, outpatient, and durable medical equipment)—alongside the regional and national averages for both MA encounter data submissions and Medicare FFS claims for each of the past 3 years. Developing an automated report for MAOs on diagnoses used for risk adjustment. This step partially fulfills the data validation protocol activity to summarize findings on encounter data completeness and accuracy to provide recommendations to MAOs. The automated report identifies diagnoses from MAO encounter data submissions that CMS will use to calculate risk scores for the next payment year.
The report is primarily intended to help MAOs ascertain the basis of enrollee risk scores, though representatives from health insurance trade associations told us that they have also used the automated reports to prepare internal financial projections and to compare patient diagnoses between encounter data and RAPS data submissions. MAOs first received these reports in December 2015, and since then, CMS has modified the report layout in response to MAO feedback. According to agency officials, CMS has finalized the initial version of the automated report and is distributing the automated reports to MAOs on a monthly basis. Further, the agency intends to make technical changes as necessary in the future. According to agency officials, CMS finalized the protocol for validating MA encounter data in November 2016 and has begun implementing several parts of it. In September 2016, CMS awarded a contract to update the protocol and report annually on the implementation and outcomes of protocol activities. The stakeholder organizations we interviewed raised several issues with CMS's recent actions to ensure the completeness of MA encounter data. The main issues mentioned by several stakeholder organizations included the following: Errors in identifying diagnoses used for risk adjustment. Representatives from health insurance trade associations we interviewed criticized CMS's process for identifying diagnoses that are relevant for risk adjustment. First, they stated that MAOs question the integrity of CMS's data processing. They noted, for example, that the automated reports MAOs receive were missing procedure codes for some encounters even though the original data submissions had included them. CMS officials told us that they are working with MAOs to make needed corrections to these reports. Second, representatives said that MAOs have been unable to replicate CMS's analyses because CMS has made adjustments to how it identifies diagnoses eligible for risk adjustment using encounter data. As a result, they say, MAOs are unsure whether CMS is properly distinguishing diagnoses that are used for risk adjustment from those that are not. When asked about this concern, CMS officials noted that the agency publicly announced in December 2015 how it intends to implement the risk adjustment transition and that the methodology has not changed. Inclusion of encounter data elements considered irrelevant. Representatives we interviewed from some health insurance and provider trade associations questioned CMS's inclusion and checks of data elements that the agency does not use for risk adjustment and that they contend are irrelevant for the purposes enumerated in the August 2014 final rulemaking. CMS's Encounter Data System is designed to reject erroneous information by applying a subset of the edits used to process FFS claims, not all of which are relevant to MA encounter data, according to CMS officials. CMS requires MAOs to make corrections and resubmit the data for all rejected encounters. Representatives stated that both MAOs and providers must dedicate significant resources to meet CMS requirements. In particular, they said MAOs must alter their data systems and submit numerous requests to providers for data corrections and medical record reviews. When asked about this comment, CMS officials noted that the agency wants encounter data elements to be comparable to FFS claims data.
Although not all of the encounter data elements are used for risk adjustment purposes, CMS noted that they should be reliable, comprehensive, and complete in the event they are used for any authorized purpose. Technical problems with encounter data submission. Representatives we interviewed from several health insurance and provider trade associations reported that MAOs have experienced difficulties with certain data submissions. They cited, for example, difficulty with submitting encounters for recurring services, such as physical therapy. While MAOs and providers typically record such services as a single encounter, they must submit multiple separate encounters to CMS for recurring services. Agency officials told us they had not heard about this problem from MAOs. In addition, stakeholder representatives mentioned challenges resulting from frequent, randomly scheduled changes to data submission requirements, which they say generate costs and confusion. CMS stated that technical changes occur quarterly, which is typical of other CMS data collection efforts. Agency officials pointed out that CMS has worked with some of the larger MAOs to address data submission issues. Inadequate CMS communication with individual MAOs. Although some researchers praised CMS officials for their assistance and support, representatives from several health insurance trade associations told us that their members are dissatisfied with CMS's communication efforts. Representatives noted that CMS's webinars are not designed to facilitate a conversation between the agency and MAOs and that the email address CMS set up to handle questions from MAOs does not produce responses that can be shared across MAOs in a timely fashion. CMS officials told us that the agency's response time for emailed questions on MA encounter data largely depends on the complexity of the issue. They said that the agency has not been able to provide individualized assistance because of resource limitations but has recently contracted with an external organization to provide on-site assistance to MAOs to improve their data submission processes. Although CMS has taken several steps to ensure the completeness of MA encounter data, as of October 2016, the agency had yet to take a number of other important steps identified in its Medicaid protocol. Steps CMS has not taken include the following: establish benchmarks for completeness and accuracy. This step would address the data validation protocol activity to establish requirements for collecting and submitting MA encounter data. Without benchmarks, CMS has no objective standards against which it could hold MAOs accountable for complete and accurate data reporting. conduct analyses to compare against established benchmarks. This step would address the data validation protocol activity to conduct statistical analyses to ensure accuracy and completeness. Without such analyses, CMS is limited in its ability to detect potentially inaccurate or unreliable data. determine sampling methodology for medical record review and obtain medical records. This step would address the data validation protocol activity to review medical records to ensure the accuracy of encounter data. Without medical record reviews, CMS cannot substantiate the information in MAO encounter data submissions and lacks evidence for determining the accuracy of encounter data. summarize analyses to highlight individual MAO issues.
This step would address the data validation protocol activity to provide recommendations to MAOs for improving the completeness and accuracy of encounter data. Without actionable and specific recommendations from CMS, MAOs might not know how to improve their encounter data submissions. To the extent that CMS is making payments based on data that have not been fully validated for completeness and accuracy, the soundness of billions of dollars in Medicare expenditures remains unsubstantiated. Given the limited progress CMS has made, we continue to believe that the agency should complete all the steps necessary to validate the data before using them to risk adjust payments or for other intended purposes, as we recommended in our July 2014 report. Since our July 2014 report, CMS has made progress in defining its objectives for using MA encounter data for risk adjustment purposes and in communicating its plans and time frames to MAOs. Although additional work is needed, CMS has improved its ability to manage a key aspect of the MA program. In April 2014, CMS announced that it would begin incorporating patient diagnoses from MA encounter data submissions into risk score calculations. For 2015 MAO payments, CMS used encounter data as an additional source of diagnoses to compute risk scores. CMS supplemented the diagnoses from each enrollee's RAPS data file with the diagnoses from each enrollee's MA encounter data file. For 2016, CMS used a different process that increased the importance of encounter data in computing risk scores. Specifically, CMS calculated risk scores as follows: CMS determined two separate risk scores for each enrollee. CMS based one risk score on the diagnoses from each enrollee's RAPS data file and the other risk score on the diagnoses from each enrollee's encounter data file. CMS combined the two risk scores, weighting the RAPS risk score by 90 percent and the encounter data risk score by 10 percent (a simplified illustration of this calculation appears below). CMS intends to increase the weight of encounter data in the risk score calculation over the next 4 years so that encounter data will be the sole source of diagnoses by 2020. (See fig. 3.) While some stakeholder organizations we interviewed supported CMS's time frame for transitioning from RAPS data to encounter data for risk score calculation, others objected to the planned timeline. Representatives from several stakeholder organizations we interviewed—primarily research firms and a health insurance trade association—said that CMS's time frame was appropriate and that MAOs had adequate time to adjust their data submission processes. However, representatives from several health insurance and provider trade associations we interviewed said that many MAOs and providers are apprehensive about CMS's time frame because it does not allow sufficient time for a successful transition. They told us that many MAOs and providers are still configuring their encounter data systems. In a December 2015 memo to all MAOs, CMS noted that, since 2016 will be the fourth year of collecting encounter data, the transition time frame is a reasonable, modest step toward ultimately relying exclusively on encounter data as the source of diagnosis information in risk adjustment. The agency noted that it has worked with MAOs to correct issues with how the methodology for identifying diagnoses for risk adjustment is applied.
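The blend CMS describes is a simple weighted combination of the two scores. The following is a minimal sketch, in Python, of that calculation. The enrollee scores and the function name are hypothetical assumptions for illustration; only the 90 percent/10 percent split for 2016 and the plan to rely solely on encounter data by 2020 come from this report, and the interim weights CMS planned for 2017 through 2019 are not reproduced here.

```python
# Minimal sketch of CMS's 2016 blended risk score calculation, as described
# above. The enrollee scores below are hypothetical; only the 90/10 split for
# 2016 and the move to encounter data as the sole source by 2020 come from
# this report.

RAPS_WEIGHT_2016 = 0.90       # weight on the RAPS-based risk score for 2016
ENCOUNTER_WEIGHT_2016 = 0.10  # weight on the encounter-data-based risk score

def blended_risk_score(raps_score, encounter_score, raps_weight, encounter_weight):
    """Combine the two enrollee risk scores using the stated weights."""
    assert abs(raps_weight + encounter_weight - 1.0) < 1e-9  # weights sum to 1
    return raps_weight * raps_score + encounter_weight * encounter_score

# Hypothetical enrollee: RAPS diagnoses yield a 1.25 risk score and encounter
# data diagnoses yield 1.10.
score_2016 = blended_risk_score(1.25, 1.10, RAPS_WEIGHT_2016, ENCOUNTER_WEIGHT_2016)
print(f"2016 blended risk score: {score_2016:.3f}")  # 0.9*1.25 + 0.1*1.10 = 1.235

# By 2020, encounter data are to be the sole source of diagnoses, which is
# equivalent to weights of 0 on the RAPS score and 1 on the encounter score:
score_2020 = blended_risk_score(1.25, 1.10, 0.0, 1.0)
print(f"2020 risk score (encounter data only): {score_2020:.3f}")  # 1.100
```

Because the weights sum to 1, the blended score stays on the same scale as its two inputs, which allows the weight to shift toward encounter data over the transition years without changing the overall scale of enrollee risk scores.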
Although the agency has formulated general ideas of how to use MA encounter data for some purposes besides risk adjustment, CMS has not determined specific plans and time frames for most of the additional purposes for which the data may be used, namely (1) to update risk adjustment models; (2) to calculate Medicare disproportionate share hospital percentages; (3) to conduct quality review and improvement activities; (4) for Medicare coverage purposes; (5) to conduct evaluations and other analysis to support the Medicare program (including demonstrations) and to support public health initiatives and other health care-related research; (6) for activities to support the administration of the Medicare program; (7) for activities to support program integrity; and (8) for purposes authorized by other applicable laws. CMS officials explained that their main priority to date has been to use MA encounter data for calculating risk scores and that they plan to use the data for other purposes in the future. However, this is inconsistent with federal internal control standards relating to risk assessment and information and communication, which call for clearly defining objectives and communicating those objectives to key external organizations. In addition to articulating plans for using encounter data for risk adjustment, CMS has indicated its interest in using MA encounter data for additional purposes. As of October 2016, CMS had begun planning for two of the eight remaining authorized uses: quality review and improvement activities. In April 2016, CMS awarded a contract to develop quality metrics that represent care coordination using encounter data. As of September 2016, the contractor had developed some plans for using encounter data to develop the metrics, but testing and analysis of the data are ongoing. program integrity activities. CMS officials told us they anticipate including MA encounter data in the Fraud Prevention System to help identify abusive billing practices but have yet to fully develop plans for this proposed use. CMS officials reported that, to date, the Center for Program Integrity has begun using encounter data to determine improper payments to providers. It conducted a study of the number of services both paid as FFS claims and submitted as MA encounters. Additionally, it used encounter data to identify MA providers that were not enrolled in Medicare. For the remaining authorized uses of encounter data, CMS reportedly has developed general ideas, but not specific plans and time frames. For example, CMS officials told us the CMS Innovation Center has plans to use MA encounter data to evaluate three demonstration models. Because these efforts are in their infancy, the officials could not provide details or specific time frames for these applications of encounter data. In addition, CMS has released MA encounter data to the Medicare Payment Advisory Commission and the Department of Health and Human Services' Office of Inspector General for research purposes using standard protocols for releasing FFS data to those agencies. However, CMS has not yet released the data to other organizations or finalized protocols for doing so. Stakeholder organizations we interviewed acknowledged that the agency has the authority to collect MA encounter data, but some indicated unease about CMS's expansion of allowable uses of the data. In addition, some were concerned about the potential for future expansions because CMS has not fully defined its plans and time frames for other applications.
In contrast, representatives from both health insurance and provider trade associations and research organizations noted that the authorized purposes are within CMS's purview and that the agency already uses FFS data for similar purposes. CMS officials told us that some of the authorized uses are purposefully broad because they want to have some flexibility to expand their uses of encounter data in the future. A common point made by all of the stakeholder organizations we interviewed was the importance of privacy protections for releasing MA encounter data to researchers and other interested parties. Many organizations were concerned that CMS might release commercially sensitive information to external entities. A few also highlighted the importance of protecting patient privacy. To protect proprietary information and patient privacy, stakeholder organizations offered the following suggestions: Aggregate the data. Representatives from some health insurance trade associations said aggregating the data at a geographic level, such as the county or state level, would generally allow MAOs to remain anonymous. One MAO stated that aggregating the data at the physician group level would be appropriate. In the preamble to its final rule, CMS clarified that payment data released to external entities would be aggregated at the level necessary to protect commercially sensitive information, such as proprietary payment rates between plans and providers. Deny or delay the release of certain data. Representatives from health insurance and provider trade associations were opposed to releasing encounter data elements—such as payment or utilization data—that could be used for anticompetitive behavior. They proposed delaying the release of encounter data by several years to protect proprietary information. In the preamble to its final rule, CMS stated that such delays in releasing the data to external entities would defeat the purpose of improving transparency in the Medicare program. Limit data access. Representatives from health insurance and provider trade associations and research organizations emphasized that CMS should implement appropriate safeguards for releasing MA encounter data to external entities, similar to the protections used for Medicare FFS data. Representatives from three trade associations argued that encounter data should not be made available to researchers and other interested parties until the data quality is assured. In the preamble to its final rule, CMS stated that making encounter data available to researchers using a process similar to that applied to FFS data would enhance transparency in the MA program. To the extent that specific plans for using the MA encounter data remain undeveloped, CMS is unable to communicate a set of well-defined objectives to stakeholders. Furthermore, in the absence of planning for all of the authorized uses, the agency cannot be assured that the amount and types of data being collected are necessary and sufficient for specific purposes. Given the agency's limited progress on developing plans for additional uses of encounter data, we continue to believe that CMS should establish specific plans and time frames for using the data for all intended purposes, in addition to risk adjusting payments to MAOs, as we recommended in our July 2014 report. We provided a draft of this report to the Department of Health and Human Services (HHS) for comment. HHS provided technical comments, which we incorporated as appropriate.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees and the Secretary of Health and Human Services. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or cosgrovej@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix I. In addition to the contact named above, Rosamond Katz (Assistant Director), Manuel Buentello, David Grossman, and Jessica Lin made key contributions to this report. Also contributing were Muriel Brown, Christine Davis, Elizabeth Morrison, and Jennifer Rudisill.
CMS collects MA encounter data to help ensure the proper use of federal funds by improving risk adjustment in the MA program—the private health plan alternative to traditional Medicare—and for other potential purposes. CMS's ability to make proper payments depends on the completeness and accuracy of MA encounter data. In July 2014, GAO reported that CMS had taken some, but not all, appropriate actions to validate the completeness and accuracy of encounter data and had not fully developed plans for using them. GAO was asked to provide an update on its July 2014 findings. In this report, GAO identifies (1) steps CMS has taken to validate MA encounter data and (2) CMS's plans and time frames for using MA encounter data—as well as stakeholder perspectives on these steps and plans. To do this work, GAO compared CMS activities with the agency's protocol for validating Medicaid encounter data—comparable data collected and submitted by entities similar to MAOs—and federal internal control standards. In addition, GAO reviewed relevant agency documents and interviewed CMS officials on MA encounter data collection and reporting. GAO also reviewed comments in response to CMS's 2014 proposed rule and reports from stakeholder organizations. GAO also interviewed a non-generalizable selection of 11 stakeholders, including health insurance and provider trade associations and research organizations. HHS provided technical comments on this report that were incorporated as appropriate. Since GAO issued its July 2014 report, the Centers for Medicare & Medicaid Services (CMS) within the Department of Health and Human Services (HHS) has made limited progress in validating the completeness and accuracy of Medicare Advantage (MA) encounter data. CMS collects encounter data—detailed information about the care and health status of MA enrollees—to determine payments to MA organizations (MAO). These entities received approximately $170 billion to provide coverage to nearly one-third of all Medicare beneficiaries in 2015. The agency uses a risk adjustment process to account for differences in enrollees' expected health care costs relative to an average beneficiary. Without complete and accurate encounter data, CMS cannot appropriately risk adjust MAO payments. CMS has begun compiling basic statistics on the volume and consistency of data submissions and preparing automated summary reports for MAOs indicating diagnosis information used for risk adjustment. However, CMS has yet to undertake activities that fully address encounter data accuracy, such as reviewing medical records. (See figure.) Furthermore, some health insurance and provider trade associations GAO interviewed voiced concerns about CMS's ability to properly identify diagnoses used for risk adjustment. CMS officials noted that they are working with MAOs to refine how the methodology used to obtain diagnosis data is applied. To the extent that CMS is making payments based on data that have not been fully validated for completeness and accuracy, the soundness of billions of dollars in Medicare expenditures remains unsubstantiated. Given the agency's limited progress, GAO continues to believe that CMS should implement GAO's July 2014 recommendation that CMS fully assess data quality before use. Since the July 2014 report, CMS has made progress in developing plans to use MA encounter data for risk adjustment, but has not specified plans and time frames for most other purposes, such as conducting program evaluations and supporting public health initiatives.
CMS began phasing in patient diagnosis information from encounter data in its risk adjustment process in 2015 and intends to rely completely on those data by 2020. CMS officials told GAO that, because the agency has primarily focused on collecting comprehensive encounter information for risk adjustment purposes—which is key to ensuring proper payments—it has largely deferred planning for additional uses of the data. Some stakeholder organizations have objected to the risk adjustment transition time frame, asserting that it does not allow sufficient time for a successful transition. According to CMS, the multiyear transition time frame is reasonable. Some stakeholders also were concerned that releasing data to external entities could compromise the confidentiality of proprietary information, such as payments to providers. CMS officials said that they intend to use data protections similar to those used with other Medicare data. In the absence of planning for all of the authorized uses, the agency cannot be assured that the amount and types of data being collected are necessary and sufficient for specific purposes. Given the agency's limited progress, GAO continues to believe that CMS should implement GAO's July 2014 recommendation that CMS fully develop plans for the additional uses of encounter data.
Since TARP was authorized, Treasury has implemented a range of programs aimed at stabilizing the financial system and preserving homeownership. As of June 30, 2010, it had disbursed $385 billion for TARP loans and equity investments and had already recouped some of these disbursements (table 1). As of June 30, 2010, Treasury had received almost $25 billion in dividend and interest payments and warrant repurchases and more than $198 billion in repayments. Bank capital programs. Bank capital programs authorized under TARP, such as CPP, TIP, and the Capital Assistance Program (CAP), were established to help stabilize the financial system and ensure the flow of credit to businesses and consumers. Treasury is no longer disbursing funds through these programs because, according to Treasury, they have largely achieved their goals of stabilizing both the financial system and individual institutions. CPP was intended to restore confidence in the banking system by increasing the amount of capital in the system. Treasury provided capital to qualifying financial institutions by purchasing preferred shares and warrants or subordinated debentures. Under the CPP, Treasury disbursed about $205 billion to 707 financial institutions nationwide from October 2008 through December 2009. Treasury has received about $147 billion in repayments and about $17 billion in dividend and interest payments and warrant income as of June 30, 2010. In our past reports, we have made numerous recommendations to strengthen transparency and accountability of this key TARP program. For instance, we recommended that Treasury report whether financial institutions’ activities are generally consistent with the purposes of the program. We also recommended that Treasury consider making the warrant valuation process transparent to the public by disclosing details regarding the warrant repurchase process. In both of these areas, Treasury has addressed these recommendations by releasing bank survey information on lending and detailed reports on warrant repurchases. However, as institutions, including the largest banks, leave the program, they are no longer required to report information on lending to Treasury. TIP was designed to foster market stability and thereby strengthen the economy by investing in institutions that Treasury deemed critical to the functioning of the financial system on a case-by-case basis. Only two institutions—Bank of America Corporation and Citigroup Inc.—participated in this program, and each received $20 billion in capital investment. Both institutions repaid Treasury for these investments in December 2009. CAP was designed to further improve confidence in the banking system by helping ensure that the nation’s largest 19 U.S. bank holding companies had sufficient capital to cushion themselves against larger than expected future losses, as determined by the Supervisory Capital Assessment Program (SCAP)—or “stress test”—conducted by the federal banking regulators. CAP made TARP funds available to any institution not able to raise private capital to meet SCAP requirements. In the end, 9 of the 10 institutions that needed additional capital as a result of SCAP raised over $70 billion from private sources, and GMAC received additional capital from Treasury under the Automotive Industry Financing Program (AIFP). No CAP investments were made, and the program closed on November 9, 2009.
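A back-of-envelope reconciliation of the rounded CPP figures above helps distinguish principal repayments from income when gauging where the program stands. The sketch below simply uses the dollar amounts as reported (in billions, as of June 30, 2010) and ignores losses, write-offs, and the timing of flows.

# Illustrative reconciliation of the rounded CPP figures cited above
# (billions of dollars, as of June 30, 2010). Ignores losses,
# write-offs, and the timing of flows.

disbursed = 205.0  # capital provided to 707 institutions
repaid = 147.0     # principal repayments received
income = 17.0      # dividends, interest, and warrant income

outstanding_principal = disbursed - repaid
total_cash_returned = repaid + income

print(outstanding_principal)                      # 58.0 still invested
print(total_cash_returned)                        # 164.0 returned so far
print(round(total_cash_returned / disbursed, 2))  # 0.8 of disbursements

On these rounded figures, roughly $58 billion in CPP principal remained outstanding, which is why the ultimate cost of the program still depended on the health of the remaining participants.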
Although these programs are no longer making new investments, the lessons learned from them will be useful in future efforts to stabilize the financial markets and improve ongoing bank supervision. We are currently reviewing the characteristics of firms that received CPP investments and assessing Treasury’s procedures for selecting institutions to participate and Treasury’s role when institutions elect to repay their CPP investments. We are also evaluating the process that the regulators used to design and implement SCAP, as well as the financial performance of the participating institutions compared to SCAP estimates. As part of this work, we will also assess how regulators and the banks are applying lessons learned from SCAP. We plan to issue reports on CPP and SCAP in the coming months. Auto Industry Financing Program (AIFP). From December 2008 through June 2009, Treasury committed $81.1 billion to help stabilize the auto industry, including about $62 billion to fund GM and Chrysler while they restructured. In return for the assistance provided to Chrysler and GM, Treasury received 9.85 percent equity in the reorganized Chrysler, 60.8 percent equity and $2.1 billion in preferred stock in the reorganized GM, and $13.8 billion in debt obligations between the two companies. As of June 30, 2010, approximately $11.2 billion of the $79.7 billion disbursed has been repaid to the Treasury. Treasury has stated that it plans to sell its equity in these companies as soon as practicable. The federal government’s ability to recoup its investments will depend on the profitability of GM and Chrysler. Since we last reported on the financial condition of the auto industry in November 2009, Chrysler and GM have shown some indications of progress toward returning to profitability. For example: In April and May 2010, both the new GM and new Chrysler released financial statements for 2009 and the first quarter of 2010. Thus far, according to Treasury officials, both companies are doing better than they and Treasury had initially projected in terms of revenues, operating earnings, and cash flow. We are in the process of reviewing the financial statements in more detail for a subsequent report. Also in April 2010, GM repaid Treasury the remaining $4.7 billion of the $6.7 billion in debt it owed to Treasury, using TARP funds from an escrow account established for the company when it reorganized through the bankruptcy process. According to Treasury officials, GM was legally permitted to keep the $6.6 billion left in the escrow account after this repayment. Treasury recently stated that it plans to participate in a GM initial public offering (IPO), in which Treasury, other GM shareholders, and GM will sell a portion of their shares in the company. Treasury stated that it expects the IPO to occur sometime after the third quarter of this year. Treasury has hired the Lazard investment firm to help manage its equity and prepare for the IPO. The proceeds from the sale of Treasury’s shares will be used toward repaying the government’s initial investment in GM. While these steps indicate progress in the companies’ journey toward profitability, the extent to which the federal government will recoup its investment in the auto industry is uncertain, and the companies face several challenges in the coming years. For instance: In April 2010, we reported on the impact of restructuring on GM’s and Chrysler’s pension plans.
We found that although the new companies had assumed sponsorship of the pension plans, the future of the plans remained uncertain, in part because the companies are legally required to make large contributions to the plans that they will be able to make only if they become profitable again. If the companies are not able to return to viability and their plans are terminated, the Pension Benefit Guaranty Corporation would face the significant financial and administrative costs of taking over these plans. While Chrysler and GM sales, and industry sales as a whole, were up substantially in spring 2010 from spring 2009 (up 12 percent for GM and 35 percent for Chrysler), more recent trends are not as positive. For example, compared with May 2010 levels, June 2010 sales decreased more than usual (13 percent for GM and 12 percent for Chrysler). Industry analysts largely attributed this decline to consumers’ wariness about the state of the economy. Improved economic conditions, and in turn, improved vehicle sales, are critical to the future profitability of the companies and the timing and success of an IPO. To help address these challenges, we made several recommendations in our November 2009 report. For example, we recommended that Treasury ensure that it had adequate staffing to monitor the government’s investment in the auto companies and that it communicate to Congress its plans to monitor the companies’ performance. In response to our recommendation, Treasury has hired additional staff to monitor the federal government’s investment in the auto companies. However, as of July 2010, Treasury had not committed to additional communication with Congress on its future monitoring plans. In addition, we are continuing to monitor the financial condition of the industry and in ongoing work are reviewing the current financial condition and outlook of GM and Chrysler. As part of that ongoing work, we are also reviewing the status of the federal government’s efforts to assist workers and communities that have relied on the auto industry for their economic base. American International Group, Inc. (AIG) Investments. One of TARP’s earliest programs was designed to provide exceptional assistance aimed at preventing broad disruptions to the financial markets by stabilizing institutions that were considered systemically significant. In particular, in November 2008 Treasury joined the Federal Reserve’s effort to provide assistance to AIG, which first began in September 2008 and was restructured in November 2008 and again in March 2009. Since early 2009, we have been monitoring the status of federal assistance to AIG and the company’s financial condition using GAO-developed indicators, and we have issued two reports that include information on them. In the April 2010 report, our indicators showed that AIG’s financial condition had remained relatively stable largely due to the federal assistance from the Federal Reserve and Treasury. AIG is repaying its debt to the federal government, but much of the progress reflects numerous exchanges of debt that AIG owed the Federal Reserve Bank of New York Revolving Credit Facility for various issues of preferred equity. With this shift from debt to equity, the federal government’s exposure to AIG is increasingly tied to the future health of AIG, its restructuring efforts, and its ongoing performance.
Similarly, the government’s ability to fully recoup the federal assistance is uncertain and will be determined by the long-term health of AIG, the company’s success in selling businesses as it restructures, and other market factors, such as the performance of the insurance sectors and the credit derivatives markets, that are beyond the control of AIG or the government. We will continue to monitor these issues and plan to issue our next report in October 2010. Home Affordable Modification Program (HAMP). HAMP is Treasury’s cornerstone effort under TARP to meet the act’s purposes of preserving homeownership and protecting home values and is designed to address the dramatic increase in foreclosures. Treasury announced the framework for HAMP in 2009 and said it would use up to $50 billion of TARP funds to help at-risk homeowners avoid potential foreclosure, primarily by reducing their monthly mortgage payment. Unlike other TARP programs, HAMP expenditures are not investments that will be partially or fully repaid, but rather expenditures that, once made, will not be recouped. According to Treasury, $250 million had been disbursed under the HAMP program as of June 30, 2010. In our March 2010 testimony before the House of Representatives’ Committee on Oversight and Government Reform, we noted that Treasury continued to face implementation challenges with HAMP. We stated that the program had made limited progress, suffered from inconsistent program implementation, and faced additional challenges going forward. Specifically: While the program was anticipated to help 3 to 4 million homeowners, Treasury reported that, as of the end of May 2010, only 1.2 million homeowners had started trial modifications and 347,000 homeowners had received permanent modifications. Servicers told us that the continued changes to the program posed significant implementation challenges for them. Although HAMP’s goal was to create clear, consistent, and uniform guidance for loan modifications across the industry, we reported that there was wide variation in servicers’ practices with respect to communicating with borrowers about HAMP, evaluating borrowers who were current or not yet 60 days delinquent on mortgage payments for whether they were in danger of “imminent default,” and tracking HAMP complaints. Finally, we identified additional challenges that HAMP faced going forward, including converting trial modifications to permanent status, addressing the growing issue of negative equity, limiting redefaults among borrowers who receive modifications, and ensuring program stability and effective program management. In June 2010, we issued a report that expanded on our March testimony and discussed Treasury’s actions to address the challenges that we had outlined in the March hearing. We reported that, while Treasury had taken some steps to address these challenges, it urgently needed to finalize and implement the various components of HAMP and ensure the transparency and accountability of these efforts. For example, Treasury announced several potentially substantial new HAMP-funded efforts in March 2010, but did not say how many borrowers these programs were intended to reach. In particular, Treasury announced a principal reduction program that could help borrowers with substantial negative equity, but made the program voluntary for servicers.
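The HAMP figures above imply some simple ratios that help frame the program's limited progress. The snapshot arithmetic below uses the counts reported as of the end of May 2010; it is a point-in-time view rather than a final conversion rate, since many trial modifications were still in process.

# Snapshot arithmetic for the HAMP counts cited above (end of May
# 2010). A point-in-time view, not a final conversion rate.

goal_low, goal_high = 3_000_000, 4_000_000
trials_started = 1_200_000
permanent = 347_000

print(round(permanent / trials_started, 3))  # 0.289 of trials permanent
print(round(permanent / goal_low, 3))        # 0.116 of the 3 million goal
print(round(permanent / goal_high, 3))       # 0.087 of the 4 million goal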
We noted that Treasury needed to ensure that future public reporting on the principal reduction program provided transparency and addressed the potential question of whether borrowers were being treated fairly. In addition, we reported that as Treasury continues with its first-lien mortgage loan modification program and implements other HAMP-funded programs, including the second-lien modification and foreclosure alternatives, it will need to adhere to standards for effective program management and establish sufficient program planning and implementation capacity. Our June 2010 report contained eight recommendations to Treasury, including that it expeditiously establish specific criteria for imminent default, specify which HAMP complaints servicers should track, finalize and issue remedies for servicer noncompliance with HAMP requirements, and implement a prudent design for remaining HAMP-funded programs. However, Treasury has yet to fully implement several of the recommendations we made in July 2009 to improve HAMP’s effectiveness, transparency, and accountability. For example, we recommended that Treasury consider methods of monitoring borrowers who receive HAMP mortgage loan modifications and continue to have high total household debt (more than 55 percent of their income) to determine whether they obtain the required HUD-approved housing counseling. While Treasury has told us that monitoring borrower compliance with the counseling requirement would be too burdensome, we continue to believe that it is important that Treasury determine whether borrowers are actually receiving counseling and whether the counseling requirement is having its intended effect of limiting redefaults. In addition, we recommended that Treasury place a high priority on fully staffing the Homeownership Preservation Office—the office within Treasury responsible for overseeing HAMP implementation—and noted that having enough staff with appropriate skills was essential to governing HAMP effectively. However, Treasury has since reduced the number of staff in this office without formally assessing staffing needs. We believe that having sufficient staff is critical to Treasury’s ability to design and implement HAMP-funded programs quickly and effectively. We will continue to monitor Treasury’s implementation and management of HAMP-funded programs as part of our ongoing oversight of TARP to ensure that these programs are appropriately designed and operating as intended. Small Business Initiatives. TARP also includes programs that have a small business emphasis or component. Treasury has announced two new initiatives aimed at small business lending. The Community Development Capital Initiative (CDCI) will provide capital to Community Development Financial Institutions (CDFIs). CDCI is open to banks, thrifts, and credit unions that have been certified by Treasury’s CDFI Fund as targeting more than 60 percent of their small business lending and other economic development activities to underserved communities. The second initiative, the Small Business and Community Lending Initiative, refers to Treasury’s SBA 7(a) securities purchase program, which makes direct purchases of securitized loan pools guaranteed under SBA’s 7(a) small business loan guarantee program. Finally, the Term Asset-Backed Securities Loan Facility (TALF), which is winding down, accepted asset-backed securities (ABS) as collateral for loans to restore liquidity in securitization markets, including securities consisting of SBA-guaranteed loan pools.
We are currently reviewing these efforts, and our objectives are to assess the data that are available on small business lending and to assess the status of Treasury’s actions in meeting its goals for these programs. Term Asset-Backed Securities Loan Facility (TALF). TARP was also intended to address problems in the securitization markets. TALF was designed to restore the securitization markets and improve access to credit for consumers and businesses. It is administered by the Board of Governors of the Federal Reserve System (Federal Reserve) and the Federal Reserve Bank of New York (FRBNY), and Treasury committed $20 billion of TARP funds for credit protection for TALF assets. The program stopped accepting ABS and legacy commercial mortgage-backed securities (CMBS) as collateral for new loans in March 2010 and new-issue CMBS in June 2010. FRBNY issued about $71 billion in TALF loans, with most of them secured by credit card ABS, legacy CMBS, and auto loan ABS. Our analysis in our February 2010 report suggested that the securitization markets improved for the more frequently traded TALF-eligible sectors after the program’s first activity in March 2009. However, we did not find clear evidence that consumer credit rates changed significantly after TALF started. FRBNY officials said that it is possible that without TALF, interest rates on loans to consumers and small businesses could have been much higher. We reported in February 2010 that TALF contained a number of features to help reduce the risk of loss to TARP funds. Analyses by Treasury and FRBNY project minimal, if any, use of TARP funds for TALF-related losses, and Treasury currently anticipates a profit. We found that CMBS could pose a higher risk of loss than ABS, given the ongoing uncertainty in the commercial real estate market. For this reason, we recommended that Treasury give greater attention to risks in commercial real estate and CMBS markets. In response, Treasury developed internal tracking reports to assess these trends. We also found that at the outset of TALF, Treasury had not fully documented the rationale for final decisions that were made on managing risks associated with TALF—including decisions involving the Federal Reserve. We found that Treasury’s analysis of TALF-related risks sometimes differed from FRBNY’s and that Treasury lacked clear documentation on how it resolved discrepancies or made final decisions with the Federal Reserve and FRBNY. Documenting such rationales increases transparency and strengthens internal controls for decision making. Since the report, Treasury has created a process document that details how it assesses changes to TALF program terms proposed by the Federal Reserve, including specifying levels of management review and approval. In addition, Treasury has a formal process for assessing outside analyses it may request regarding risks to TARP. Finally, while Treasury bears the first-loss risk from assets that TALF borrowers might surrender in conjunction with unpaid loans, it has not developed measures to analyze and publicly report on the potential purchase, management, and sale of such assets. Without such a plan, Treasury may not fully and publicly disclose how such surrendered assets are managed and financed, undermining Treasury’s efforts to be fully transparent about TARP activities. We recommended that Treasury review the data it might collect and publicly report in the event that any collateral is surrendered to TALF LLC and Treasury lends to it.
To date, Treasury has not provided evidence that it has conducted such a review or established such a plan, though officials stated that they would hire an asset manager to assist in managing surrendered assets in order to protect taxpayer interests and noted that Treasury was committed to transparency regarding such assets. In anticipation of the upcoming decisions on the future of TARP, the need to unwind the extraordinary federal support across the board, and the fragile state of the economy, we made recommendations to Treasury in October 2009. Specifically, we suggested that any decision to extend TARP be made in coordination with relevant agencies and that Treasury use quantitative analysis whenever possible to support its reasons for doing so. We noted that without a robust analytic framework, Treasury could face challenges in effectively carrying out the next stages of its programs. Subsequently, on December 9, 2009, the Secretary of the Treasury notified Congress that he was extending the authority for TARP provided under the act until October 3, 2010. The extension involved winding down some programs while extending others, transforming the program into one focused primarily on preserving homeownership and improving financial conditions for small banks and businesses. As such, according to Treasury, new commitments through October 3, 2010, will be limited to programs under the Making Home Affordable Program (MHA), including HAMP, and to small business lending programs. The Dodd-Frank Wall Street Reform and Consumer Protection Act, passed by both the House and Senate and expected to be signed by the President this week, would (1) reduce Treasury’s authority to purchase or insure troubled assets to $475 billion and (2) prohibit Treasury, under the act, from incurring any additional obligations for a program or initiative unless the program or initiative had already been initiated prior to June 25, 2010. In reviewing the analytical process underpinning this decision to extend TARP, we reported that Treasury used a deliberative process that included sufficient interagency coordination and consultation and considered a number of qualitative and quantitative factors. However, we noted that the extent of coordination could be enhanced and formalized, specifically with the FDIC, for any upcoming decisions that would benefit from interagency collaboration. Although the economy is still fragile, a key priority will be to develop, coordinate, and communicate exit strategies to unwind the remaining programs and investments resulting from the extraordinary crisis-driven interventions. Because TARP will be unwinding concurrently with other important interventions by federal regulators, decisions about the sequencing of the exits from various federal programs will require bringing a larger body of regulators to the table to plan and sequence the continued unwinding of federal support. We also noted that Treasury could strengthen its analytical framework by identifying clear objectives for small business programs and explaining how relevant indicators motivated TARP program decisions. Finally, we recommended (1) formalizing coordination with FDIC for future TARP decisions and (2) improving the transparency and analytical basis for TARP program decisions. Though TARP will soon expire, Treasury will still need to work with other agencies to effectively conduct a coordinated exit from TARP and other government financial assistance.
Many market observers have said that, taken together, the concerted actions by Treasury and others helped avert a more severe financial crisis, although some critics believe that the markets would have recovered without government support. Particular programs have been reported to have had the desired effects, especially if stabilizing the financial system and restoring confidence was considered to be the principal goal of the intervention. In our October 2009 and February 2010 reports, we noted that some of the anticipated effects of TARP on credit markets and the economy had materialized and that some securitization markets had experienced a tentative recovery. During our review of the decision to extend TARP, Treasury noted that some programs that it believed had accomplished their goals would be terminated. For example, as noted earlier, Treasury ended CPP and CAP largely because of banks’ renewed ability to access capital markets. It also noted improvements in securitization markets and stabilization of certain legacy asset prices as motivating the closing of TALF and the Public Private Investment Program (PPIP). Indicators we have been monitoring suggest credit markets have been able to sustain their recovery despite the winding down of key programs initiated by the Federal Reserve, Treasury, FDIC, and others. As shown in table 2, interbank, mortgage, corporate debt, and securitization markets continue to perform better than their pre-TARP lows. The cost of credit and perceptions of risk (as measured by premiums over Treasury securities) have fallen in interbank, mortgage, and corporate debt markets, and the volume of credit, as measured by new mortgage loans and asset-backed securities, has increased since the first TARP program, CPP. (Table 2, not reproduced here, reports these indicators as basis point changes since October 13, 2008, in rates such as the 3-month London interbank offered rate, an average of interest rates offered on dollar-denominated loans.) Unfortunately, rising foreclosures continue to highlight the challenges facing the U.S. economy. By any measure, foreclosure and delinquency statistics for residential housing remain well above their historical averages despite programs such as HAMP. However, a slow recovery does not necessarily mean that TARP is ineffective, because in the absence of TARP it is possible that foreclosure and delinquency rates would be higher. Moreover, full recovery will likely take some time given the buildup of imbalances in the real estate, fiscal, and household sectors over several years. Experience with past financial crises, coupled with analyses of the specifics of the current situation, has led the Congressional Budget Office to predict a modest recovery that will not be robust enough to appreciably improve weak labor markets through 2011. Weaknesses in labor markets will likely weigh on residential housing markets. Given that any new TARP activity will be limited to home ownership preservation and small business lending programs, we will continue to monitor indicators such as foreclosures and delinquencies as potential measures of the efficacy of these programs. Isolating the impact of TARP from general market forces and other foreclosure initiatives will be a challenge. This challenge will be compounded in the area of small business lending because Treasury has yet to set explicit objectives for its small business lending programs and because a lack of comprehensive data on new lending makes assessing credit conditions for small business particularly difficult.
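The credit indicators discussed above are conventionally expressed two ways: as spreads over Treasury securities, a gauge of perceived risk, and as basis point changes from a reference date, here October 13, 2008. The sketch below shows only that arithmetic; the rate values are hypothetical placeholders, not figures from table 2.

# How the credit indicators above are typically computed: risk
# premiums as spreads over Treasuries and changes in basis points.
# All rate values here are hypothetical placeholders.

def spread_bp(market_rate_pct, treasury_rate_pct):
    """Premium over Treasuries, in basis points (1 bp = 0.01 percent)."""
    return (market_rate_pct - treasury_rate_pct) * 100

def change_bp(rate_now_pct, rate_then_pct):
    """Change in a rate between two dates, in basis points."""
    return (rate_now_pct - rate_then_pct) * 100

# Hypothetical 3-month interbank rate: 4.64 percent on the reference
# date versus 0.53 percent later.
print(round(change_bp(0.53, 4.64), 1))  # -411.0, a large decline
print(round(spread_bp(0.53, 0.16), 1))  # 37.0 over a 0.16 percent T-bill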
In recommending that Treasury improve the transparency and analytical basis for TARP program decisions, we specifically noted the need to set quantitative program objectives for its small business lending programs and identify any additional data needed to make program decisions. Mr. Chairman and Members of the Committee, I appreciate the opportunity to discuss these critically important issues and would be happy to answer any questions that you may have. Thank you. For further information on this testimony, please contact Richard J. Hillman at (202) 512-8678 or hillmanr@gao.gov, Orice Williams Brown at (202) 512-8678 or williamso@gao.gov, or Thomas McCool at (202) 512-2642 or mccoolt@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Troubled Asset Relief Program: Treasury’s Framework for Deciding to Extend TARP Was Sufficient, but Could be Strengthened for Future Decisions. GAO-10-531. Washington, D.C.: June 30, 2010. Troubled Asset Relief Program: Further Actions Needed to Fully and Equitably Implement Foreclosure Mitigation Program. GAO-10-634. Washington, D.C.: June 24, 2010. Troubled Asset Relief Program: Update of Government Assistance Provided to AIG. GAO-10-475. Washington, D.C.: April 27, 2010. Troubled Asset Relief Program: Automaker Pension Funding and Multiple Federal Roles Pose Challenges for the Future. GAO-10-492. Washington, D.C.: April 6, 2010. Troubled Asset Relief Program: Home Affordable Modification Program Continues to Face Implementation Challenges. GAO-10-556T. Washington, D.C.: March 25, 2010. Troubled Asset Relief Program: Treasury Needs to Strengthen Its Decision-Making Process on the Term Asset-Backed Securities Loan Facility. GAO-10-25. Washington, D.C.: February 5, 2010. Troubled Asset Relief Program: The U.S. Government Role as Shareholder in AIG, Citigroup, Chrysler, and General Motors and Preliminary Views on its Investment Management Activities. GAO-10-325T. Washington, D.C.: December 16, 2009. Financial Audit: Office of Financial Stability (Troubled Asset Relief Program) Fiscal Year 2009 Financial Statements. GAO-10-301. Washington, D.C.: December 9, 2009. Troubled Asset Relief Program: Continued Stewardship Needed as Treasury Develops Strategies for Monitoring and Divesting Financial Interests in Chrysler and GM. GAO-10-151. Washington, D.C.: November 2, 2009. Troubled Asset Relief Program: One Year Later, Actions Are Needed to Address Remaining Transparency and Accountability Challenges. GAO-10-16. Washington, D.C.: October 8, 2009. Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-1048T. Washington, D.C.: September 24, 2009. Troubled Asset Relief Program: Status of Government Assistance Provided to AIG. GAO-09-975. Washington, D.C.: September 21, 2009. Troubled Asset Relief Program: Treasury Actions Needed to Make the Home Affordable Modification Program More Transparent and Accountable. GAO-09-837. Washington, D.C.: July 23, 2009. Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-920T. Washington, D.C.: July 22, 2009. Troubled Asset Relief Program: Status of Participants' Dividend Payments and Repurchases of Preferred Stock and Warrants. GAO-09-889T. Washington, D.C.: July 9, 2009. Troubled Asset Relief Program: June 2009 Status of Efforts to Address Transparency and Accountability Issues. GAO-09-658. Washington, D.C.: June 17, 2009.
Auto Industry: Summary of Government Efforts and Automakers’ Restructuring to Date. GAO-09-553. Washington, D.C.: April 23, 2009. Troubled Asset Relief Program: March 2009 Status of Efforts to Address Transparency and Accountability Issues. GAO-09-504. Washington, D.C.: March 31, 2009. Federal Financial Assistance: Preliminary Observations on Assistance Provided to AIG. GAO-09-490T. Washington, D.C.: March 18, 2009. Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-417T. Washington, D.C.: February 24, 2009. Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-359T. Washington, D.C.: February 5, 2009. Troubled Asset Relief Program: Status of Efforts to Address Transparency and Accountability Issues. GAO-09-296. Washington, D.C.: January 30, 2009. Troubled Asset Relief Program: Additional Actions Needed to Better Ensure Integrity, Accountability, and Transparency. GAO-09-266T. Washington, D.C.: December 10, 2008. Auto Industry: A Framework for Considering Federal Financial Assistance. GAO-09-242T. Washington, D.C.: December 4, 2008. Troubled Asset Relief Program: Status of Efforts to Address Defaults and Foreclosures on Home Mortgages. GAO-09-231T. Washington, D.C.: December 4, 2008. Troubled Asset Relief Program: Additional Actions Needed to Better Ensure Integrity, Accountability, and Transparency. GAO-09-161. Washington, D.C.: December 2, 2008.
This testimony discusses our work on the Troubled Asset Relief Program (TARP), which Congress established on October 3, 2008, in response to the financial crisis that threatened the stability of the U.S. financial system and the solvency of many financial institutions. Under the original TARP legislation, the Department of the Treasury (Treasury) had the authority to purchase or insure $700 billion in troubled assets held by financial institutions. As we have seen, since TARP's inception, Treasury has chosen to use those funds for a variety of activities, including injecting capital into key financial institutions, implementing programs to address problems in the securitization markets, providing assistance to the automobile industry and American International Group, Inc. (AIG), and working to help homeowners struggling to keep their homes. Today, some of these programs have been discontinued and others are winding down, but others—such as homeownership preservation programs—may continue for some time. Treasury has also seen some participating institutions repay their TARP funds as they recover their financial health. The prospect for repayment from some other institutions, both large and small, remains unclear. The Emergency Economic Stabilization Act (the act) that authorized TARP required GAO to report at least every 60 days on findings from our oversight of actions taken under the programs. We have been monitoring TARP programs since their inception, and our reports have highlighted challenges facing many of these programs. To date, we have issued over 25 reports and testimonies related to TARP and made over 50 recommendations to improve the transparency and accountability of its operations. This statement today draws primarily on seven reports we have issued since October 2009. Specifically, this statement focuses on (1) the nature and purpose of activities that have been initiated under TARP and ongoing challenges, (2) the process for making decisions related to unwinding TARP programs, and (3) indicators of credit conditions in markets targeted by TARP programs. To do our work, we reviewed our prior reports and other documents provided by Treasury's Office of Financial Stability (OFS) and conducted interviews with Treasury and OFS officials. In addition, we have updated the program's receipts and disbursements through June 30, 2010, and indicators of credit markets as of July 1, 2010. We conducted these performance audits between July 2009 and June 2010 and updated information in July 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Treasury has initiated a number of programs under TARP, some of which have ended or are being unwound. Others will continue. Among the programs no longer making commitments are the Capital Purchase Program (CPP) and Targeted Investment Program (TIP), while the Home Affordable Modification Program (HAMP) and new small business lending initiatives are expected to continue for some time. Although Treasury has received significant repayments of the funding it provided to financial institutions, some investments and loans could still result in substantial losses to the government.
We have been monitoring TARP programs since their inception. In particular, Chrysler Group LLC and General Motors Company (GM) have shown some indications of progress toward returning to profitability, such as doing better than they and Treasury had initially projected in terms of revenues, operating earnings, and cash flow. However, the extent to which the federal government will fully recoup its investment in the auto industry is uncertain, and the companies face several challenges in the coming years. Since early 2009, we have also been monitoring the status of federal assistance to AIG and its financial condition using indicators we developed. In April 2010, we reported that our indicators showed that AIG's financial condition has remained relatively stable largely due to the federal assistance provided by the Federal Reserve and Treasury, but the extent to which the federal government will recoup its investment remains uncertain and will depend not only on AIG's financial condition but also on other market factors, such as the performance of the insurance sectors and the credit derivatives markets, that are beyond the control of AIG or the government. Many of our reports have also highlighted the challenges facing TARP programs and made recommendations to enhance transparency and accountability of its programs. We reported that, while Treasury had taken some steps to address these challenges, it urgently needed to finalize and implement the various components of HAMP and ensure the transparency and accountability of these efforts. We will continue to monitor these programs and have ongoing work on several facets of TARP, including those initiatives that have a small business focus. We have also reviewed Treasury's framework for deciding to extend TARP beyond December 31, 2009, and found that the process was sufficient but could be strengthened for similar decisions that will need to be made in the future. Specifically, we found that the extent of coordination could be enhanced and formalized between Treasury and the Federal Deposit Insurance Corporation (FDIC) and recommended that Treasury formalize coordination with FDIC for future decisions. Although the authority for TARP is set to expire soon, Treasury will continue to face decisions in winding down programs, and many of these decisions will require interagency coordination. Because TARP will be unwinding concurrently with other important regulatory interventions, decisions about the sequencing of the exits from the programs will require regulators to work closely together. We have noted in past reports that some of the anticipated effects of TARP on credit markets and the economy had materialized and that some securitization markets had experienced a tentative recovery. Indicators we have been monitoring suggest that credit markets have been able to sustain their recovery despite the winding down of key programs initiated by the Federal Reserve, Treasury, FDIC, and others. However, a slow recovery does not necessarily mean that TARP is ineffective, because, in the absence of TARP, it is possible that foreclosure and delinquency rates would be higher. Finally, because any new TARP activity will be limited to home ownership preservation and small business lending programs, we will also continue to monitor indicators such as foreclosures and delinquencies as potential measures of the programs' success.
The FDA Modernization Act of 1997 (FDAMA) established pediatric exclusivity for sponsors that conducted pediatric studies for drugs. In 1999, FDA implemented the Pediatric Rule, which required that sponsors include the results of pediatric studies when submitting certain new drug or biological product applications. However, in 2002, the Pediatric Rule was declared invalid by a federal court. In 2002, Congress reauthorized FDAMA’s pediatric exclusivity provisions in the Best Pharmaceuticals for Children Act (BPCA), and in 2003, Congress codified much of the Pediatric Rule in the Pediatric Research Equity Act (PREA), requiring that pediatric studies be conducted and that the results of those studies be included in certain new drug or biological product applications. In September 2007, Congress reauthorized both PREA and BPCA as a part of the Food and Drug Administration Amendments Act (FDAAA), and in March 2010, Congress extended pediatric exclusivity and applicable BPCA provisions to biological products as a part of the Patient Protection and Affordable Care Act. PREA and BPCA are both set to expire on October 1, 2012. PREA requires that sponsors submit the results of pediatric studies in certain drug and biological product applications to FDA. Specifically, PREA applies to drug and biological product applications for any of the following: a new active ingredient, a new indication, a new dosage form, a new dosing regimen, or a new route of administration. In addition, PREA requires that pediatric studies be conducted for the indications described in the application—that is, the indications for which the sponsor plans to market the product—but not for any additional indications. The 2007 reauthorization of PREA established the Pediatric Review Committee (PeRC), an internal FDA committee responsible for providing assistance in the review of pediatric study results and increasing the consistency and quality of such reviews across the agency. The PeRC consists of approximately 40 FDA employees with a range of expertise, including pediatrics, biopharmacology, statistics, chemistry, legal issues, pediatric ethics, and others as pertinent to the pediatric product under review. FDA officials explained that the PeRC is divided into separate subcommittees for PREA and BPCA. When a sponsor completes all of the required studies for a drug or biological product, it submits an application to FDA. The application includes these study results and suggested labeling changes based on the pediatric studies’ findings, among other things. If the pediatric studies have not been completed, the application must include a request for a waiver or deferral of the pediatric studies. PREA established certain criteria under which, at the sponsors’ request, some or all of the required pediatric studies may either be deferred until a specified date after approval of the product’s application or waived altogether by FDA. FDA may also grant a deferral or waiver on its own initiative, under specified circumstances. For example, a study required under PREA may be deferred when additional data on the safety and effectiveness of the product in adults is needed before the product can be studied for use in children. If the sponsor requests a deferral, the product’s application must include, among other things, a description of the planned pediatric studies and a time frame for completion. The study may be waived when it is determined to be impossible or highly impracticable, such as when the number of pediatric patients with a disease that may be treated with that product is too small to study.
Sponsors may conduct multiple studies per product, such as separate studies for subsets of pediatric populations like infants, children, and adolescents. FDA may grant waivers or deferrals for only one type of study, such as in one pediatric age group, or FDA may grant waivers or deferrals for all pediatric studies of the product. FDA’s review of an application under PREA is part of the agency’s broader review of the entire application. Once the sponsor submits its application, FDA directs the application to the agency’s appropriate division to review the entire application, including all adult study results, the pediatric study results, and requests for a waiver or deferral. FDA may determine that the application is incomplete and more information is necessary from the sponsor. Generally, when this happens, FDA notifies the sponsor and waits to finish reviewing the application until the information is received. According to FDA officials, toward the end of FDA’s review, the division provides requests for a waiver or a deferral and a summary of the relevant pediatric data to the PeRC for review. The PeRC provides recommendations on whether or not the pediatric portion of the application satisfies PREA requirements and whether to grant or deny a waiver or deferral. FDA then determines whether or not to approve the application. As a part of the review process, FDA is required by PREA to negotiate and reach an agreement with the sponsor on labeling changes based on pediatric studies within 180 days of the application’s submission. If FDA and the sponsor are unable to reach an agreement on labeling changes within 180 days, they are required by PREA to proceed to a formal dispute resolution process. The 2007 reauthorization of PREA provided FDA with authority to make labeling changes on its own initiative when a product has been studied for use in children, including when a study does not determine that the product is safe or effective in pediatric populations. Therefore, FDA can impose a labeling change unilaterally to describe FDA’s determination about the study results in the event that the agency cannot reach agreement with the sponsor. A sponsor can request that a drug or biological product that is required to be studied under PREA be studied under BPCA as well, to allow the sponsor of the product to be eligible to receive pediatric exclusivity. According to FDA officials, the sponsor can make this request through a proposed pediatric study request (PPSR). If FDA agrees, it issues a formal written request to the sponsor that outlines, among other things, the nature of the pediatric studies that the sponsor must conduct in order to qualify for pediatric exclusivity. (See fig. 1.) According to FDA officials, the pediatric studies requested under BPCA would generally also fulfill the PREA requirement; however, even if the sponsor does not complete the studies outlined in the BPCA written request, it is still required to complete any studies required under PREA. FDA officials said that pediatric studies conducted under BPCA are generally more extensive than those required under PREA. For example, the written request could include studies for indications in addition to those described by the sponsor in its application, such as those that are relevant to children. Under BPCA, sponsors receive pediatric exclusivity as an incentive to conduct studies of drug and biological products for use in children. 
The BPCA process formally begins when FDA determines that information related to the use of the product in a pediatric population may produce health benefits and issues a written request for pediatric studies to the sponsor of a product. Written requests may be issued for new, not previously marketed, drug or biological products or for products that are already on the market but still on-patent. FDA may issue a written request on its own initiative or after it has received and agreed to a PPSR from a sponsor to conduct a study under BPCA. The PeRC reviews all written requests and provides recommendations prior to their issuance to sponsors. According to FDA officials, in the written request, FDA may ask for more than one study of a single drug or biological product, such as studies for multiple indications or separate studies for different age groups, such as infants, children, and adolescents. BPCA requires that FDA take into account adequate representation of children of ethnic and racial minorities when developing written requests. (See app. II for information on FDA’s efforts to ensure the inclusion of racial and ethnic minorities in pediatric studies.) The sponsor must respond to FDA within 180 days of receiving the written request, indicating whether the sponsor agrees to the request and, if so, when the pediatric study will be initiated. If the sponsor does not agree to the request, the sponsor must state the reasons for declining the request. When the pediatric studies are complete, the sponsor submits the results to FDA in an application, which must include any suggested labeling changes resulting from the studies’ findings. FDA recommends that the application be submitted 15 months prior to the end of the sponsor’s market exclusivity for the product in order to be considered for pediatric exclusivity. Once the sponsor submits its application, FDA is to review the sponsor’s application in order to (1) determine whether or not to approve the application, (2) negotiate and reach an agreement with the sponsor on pediatric labeling changes, and (3) grant or deny pediatric exclusivity. FDA is to grant pediatric exclusivity if the study meets the conditions outlined in the written request, regardless of the study’s findings. Specifically, in determining whether to grant or deny pediatric exclusivity, BPCA requires that FDA assess whether the studies fairly responded to the written request, were conducted in accordance with commonly accepted scientific principles and protocols, and were properly submitted. During FDA’s review of the application, the PeRC may review a summary of relevant pediatric data from the application and provide recommendations to FDA on whether or not to grant pediatric exclusivity. FDA then determines whether or not to approve the application. In addition, if FDA and the sponsor are unable to reach an agreement on the labeling changes within 180 days, they are required by BPCA to proceed to the same formal dispute resolution process that exists for PREA. The 2007 reauthorization of BPCA provided FDA with authority to make labeling changes on its own initiative when a product has been studied for use in children, including when a study does not determine that the product is safe or effective in pediatric populations. Therefore, FDA can impose a labeling change unilaterally to describe FDA’s determination about the study results in the event that the agency cannot reach agreement with the sponsor.
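Both PREA and BPCA place the same 180-day clock on labeling negotiations, after which the matter must go to formal dispute resolution. A minimal sketch of that deadline logic, using hypothetical dates, is below.

# Sketch of the 180-day labeling-negotiation clock under PREA and
# BPCA. The dates are hypothetical.

from datetime import date, timedelta

NEGOTIATION_WINDOW = timedelta(days=180)

def labeling_deadline(submission):
    """Date by which FDA and the sponsor must agree on labeling."""
    return submission + NEGOTIATION_WINDOW

def dispute_resolution_required(submission, as_of, agreed):
    """True when no agreement was reached within 180 days; at that
    point formal dispute resolution is required, and under the 2007
    reauthorizations FDA may ultimately impose a labeling change."""
    return (not agreed) and as_of > labeling_deadline(submission)

submitted = date(2010, 1, 4)
print(labeling_deadline(submitted))  # 2010-07-03
print(dispute_resolution_required(submitted, date(2010, 8, 1), False))  # True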
BPCA includes provisions for the conduct of pediatric studies even if the sponsor declines the written request. If a sponsor declines a written request by FDA to study an on-patent drug or if a sponsor does not complete studies outlined in an accepted written request, FDA may refer the written request to the Foundation for the National Institutes of Health (FNIH) if it determines that there is a continuing need for information relating to the use of the drug in the pediatric population. (See fig. 2.) If FNIH is not able to fund all studies, BPCA requires that FDA consider whether to require the studies described in the written request under PREA. The process under BPCA for off-patent products differs from the process for on-patent products. To further the study of off-patent products, NIH—in consultation with FDA and experts in pediatric research—is required to develop and publish a list of priority needs in pediatric therapeutics, including products or indications that require study, every 3 years. NIH publishes this list on its Web site and in the Federal Register. NIH may submit a PPSR to FDA for the study under BPCA of an indication of an off-patent product that is used for one of the pediatric therapeutic areas described on the NIH list of priority needs. FDA is then to determine whether to issue a written request in response to NIH’s PPSR to all sponsors of the drug or biological product, including the product’s original sponsor as well as any manufacturers of the generic product. The PeRC reviews all written requests and provides recommendations prior to their issuance to sponsors. If a sponsor were to accept the written request, it would conduct the studies outlined in the request and then submit the study results and any suggested labeling changes to FDA for review. However, according to FDA officials, a sponsor has not accepted a written request to study an off-patent product since the 2007 reauthorization. Off-patent products do not qualify for pediatric exclusivity, so there are few financial incentives to conduct the studies. Under the 2007 reauthorization of BPCA, if the sponsors were to decline or fail to respond to the written request for an off-patent product within 30 days, FDA can refer the written request to NIH to publish a request for proposals to conduct the studies. The sponsors of off-patent products are not required to respond to a written request. If within 30 days of FDA’s issuance of the written request the sponsors do not accept or decline the request, FDA considers the request declined. NIH can then award funds—for example, through grants or contracts—to entities that have the expertise and ability to conduct the studies described in the written request. When these studies are complete, the entity that completed the studies is to submit the study results to NIH and FDA for review. For off-patent studies conducted by a sponsor or funded by NIH, FDA is to negotiate and reach an agreement with the product’s sponsors on appropriate labeling changes resulting from the study findings within 180 days. (See fig. 3.) As is the case with on-patent products studied under PREA and BPCA, if FDA is unable to reach an agreement on the labeling changes for an off-patent product within that time, FDA is required by BPCA to proceed to the formal dispute resolution process. The Pediatric Advisory Committee (PAC) is an FDA advisory committee consisting of 14 voting members, who are appointed by the Commissioner of FDA and are knowledgeable in pediatric research, pediatric subspecialties, statistics, and/or biomedical ethics.
The committee includes a representative from a pediatric health organization and a representative from a relevant patient advocacy organization. The PAC is responsible for reviewing reports of all adverse events reported for drug and biological products during a one-year period after a labeling change is made under PREA or BPCA and may review reports of pediatric adverse events in subsequent years. The committee makes recommendations to FDA on how to respond to the adverse events. PAC recommendations can include suggested labeling changes based on the adverse events, continued heightened monitoring of the product, the production or revision of a medication guide for consumers, or a return to routine monitoring of adverse events. In addition, as required by PREA and BPCA, the PAC is to assist in FDA’s dispute resolution if a proposed labeling change is not agreed upon by FDA and the sponsor within 180 days of submission of the application. If a labeling change enters dispute resolution, FDA is to first request that the sponsor make any labeling changes that FDA has determined to be appropriate. If the sponsor does not agree, FDA is to refer the matter to the PAC. The PAC is then to convene to review the results of the pediatric studies and provide recommendations to FDA on appropriate changes to the product’s labeling, if any. FDA is then to consider the committee’s recommendations and request that the sponsor make any labeling changes recommended by the PAC that FDA has determined to be appropriate. If the sponsor does not make the labeling change, FDA may deem the product misbranded. The Standards for Internal Control in the Federal Government provides the overall framework for establishing guidelines for internal control that help government managers achieve desired objectives. Internal control, which is synonymous with management control, comprises the plans, methods, and procedures used to meet missions, goals, and objectives. Internal control is not one event, but a series of actions and activities that occur throughout an entity’s operations on an ongoing basis. The responsibility of good internal control rests with managers; they set the objectives, put the control mechanisms and activities in place, and monitor and evaluate these mechanisms and activities. Internal control includes a variety of activities such as ensuring effective information sharing throughout the organization and conducting ongoing monitoring of agency activities. At least 130 products—80 products under PREA and 50 under BPCA—have been studied for use in children since the 2007 reauthorization. However, FDA does not know if additional products with pediatric studies are included in applications for which FDA’s reviews under PREA are incomplete. The products studied under PREA and BPCA represent a wide range of therapeutic areas. In addition, few drugs have been studied when sponsors have declined written requests. Since the 2007 reauthorization, at least 80 products have been studied under PREA, but FDA cannot be certain how many additional products may have been studied. FDA does not track and aggregate data about applications submitted under PREA until the PeRC has completed its review of information from the application. This generally occurs late in FDA’s overall review of the application. Therefore, FDA was unable to provide information about some applications that had been submitted to the agency that were subject to PREA.
For example, FDA officials could not provide aggregate data about the total number of applications, whether the applications were complete or incomplete, or whether the applications included pediatric studies or requests for waivers or deferrals. Therefore, FDA could not be certain how many additional applications for which it has not yet completed its review under PREA include pediatric studies or requests for waivers or deferrals. This lack of data about applications subject to PREA during the review process hampers FDA's ability to manage that process, including determining whether FDA is meeting statutory requirements and whether sponsors have complied with PREA's requirements for pediatric studies. FDA officials said that approximately 830 applications submitted to FDA from September 27, 2007, through June 30, 2010, were subject to PREA, but could not provide a precise number. The PeRC has completed its review of information from 449 of these applications, 80 of which contained the results of pediatric studies. Fifty-nine were drugs and 21 were biological products. FDA could not provide information about the remaining 381 of the approximately 830 applications. Standards for internal control in the federal government provide that managers need certain data to determine whether they are meeting their agencies' missions, goals, and objectives. Such data could include whether FDA is meeting PREA requirements and whether sponsors have complied with PREA's requirements for pediatric studies. FDA officials explained that these 381 applications were submitted to FDA, and were under consideration in the relevant FDA division, but had not yet been reviewed by the PeRC, which advises FDA in its review of pediatric studies or requests for waivers or deferrals. FDA officials said that they could not provide any details about these applications without locating each application individually within the agency and reviewing it to determine whether it included pediatric studies or requests for waivers or deferrals, but stated that it is likely that most of the 381 applications are for products that sponsors plan to market for adult indications and, therefore, would include a request for a deferral of the pediatric studies rather than completed pediatric studies. Although FDA officials could not say how many, they said that some of the 381 applications may be incomplete and awaiting further review upon the sponsor's submission of additional materials, and that some of the applications may have been withdrawn by the sponsor. However, some of the applications could include the results of completed pediatric studies. Therefore, the total number of products with studies completed under PREA may be greater than 80. HHS stated in its comments on a draft of this report that an update to the Document Archiving Reporting and Regulatory Tracking System (DARRTS), completed in May 2011, will provide FDA with the capability to include a code indicating whether an application is subject to PREA. However, the HHS comments do not state that this data system update would provide the internal controls necessary to track and aggregate data about applications that are currently under review, which would allow FDA to readily retrieve information to manage this program. In addition, HHS states that FDA does not currently plan to code applications retrospectively until it has ensured that resources are available for such a project.
Therefore, unless FDA does these things, it still will not know the status of the 381 applications, including whether the applications were complete or incomplete, or whether the applications included pediatric studies or requests for waivers or deferrals, until the review of those applications is complete. FDA has granted a full or partial waiver or deferral to more than half of the applications that it has reviewed under PREA. According to FDA officials, of the 449 applications for which FDA has completed its review, FDA granted sponsors 237 waivers and 131 deferrals. FDA officials noted that, generally, most sponsors request deferrals of pediatric studies in the product's application rather than conduct the pediatric studies prior to submitting the product's application. FDA sometimes granted a full or partial waiver and a deferral to a single application; therefore, a single application could be included in both totals. FDA officials could not provide additional information about the remaining 381 applications submitted to FDA during this period but not reviewed by the PeRC. Waivers and deferrals were granted for multiple reasons. The reason most frequently cited for granting a waiver was that the drug or biological product studies were found to be impossible or highly impracticable. Waivers may be granted for this reason because, for example, the number of patients in that age group is too small. Most deferrals were granted because the product was ready to be approved for use in adults before pediatric studies had been completed. (See fig. 4.) FDA officials also could not say how many studies are ongoing under PREA because the agency does not maintain a count of those studies. According to FDA, sponsors inform FDA of their plans for studies currently being conducted under PREA, but FDA does not aggregate data for these products until the sponsor completes the studies and the results are submitted to FDA for review. Fifty products have been studied under BPCA from the 2007 reauthorization through June 30, 2010, and FDA has reviewed the applications for all 50; none were biological products. As noted earlier, sponsors submit studies to FDA as part of an application. According to FDA officials, FDA granted pediatric exclusivity to the sponsors of 44 of the 50 drugs. Sponsors of five of the six drugs that did not receive exclusivity submitted only partial responses to the written request. FDA officials explained that FDA reviews study results as they are submitted, but does not make a pediatric exclusivity determination until it receives a full response to the written request. Therefore, although FDA completed its review of the applications, the pediatric exclusivity determination is pending the completion of the remainder of the studies FDA requested. FDA officials stated that FDA denied pediatric exclusivity for one of the products prior to the 2007 reauthorization because the studies completed by the sponsor did not meet the conditions of the written request. FDA officials also told us that two additional drugs were studied between September 27, 2007, and June 30, 2010, but those studies were still undergoing FDA review. Since the 2007 reauthorization, according to FDA officials, FDA has issued 37 written requests for on-patent drug and biological products to sponsors under BPCA, 25 of which originated from a PPSR submitted to FDA by the sponsor since the 2007 reauthorization of BPCA. Sponsors agreed to 35 of the written requests. (See fig. 5.)
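Returning to the PREA waiver and deferral figures reported above: because a single application can receive both a waiver and a deferral, the 237 waivers and 131 deferrals cannot simply be summed to count distinct applications. The short Python sketch below, using only the figures reported above, shows why the statement that more than half of the 449 reviewed applications received a waiver or deferral holds regardless of how much the two groups overlap; the variable names are ours, for illustration only.

    reviewed = 449
    waivers, deferrals = 237, 131

    # Distinct applications with a waiver or a deferral lie between two bounds:
    min_distinct = max(waivers, deferrals)             # every deferral also got a waiver: 237
    max_distinct = min(reviewed, waivers + deferrals)  # no application got both: 368

    print(min_distinct / reviewed)  # ~0.53 -- more than half even with complete overlap
    print(max_distinct / reviewed)  # ~0.82 -- upper bound with no overlap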
FDA officials stated that sponsors completed studies for 2 of the 35 agreed-to written requests; studies for the remaining 33 are ongoing. The two other written requests were declined because the sponsors stated they would be unable to finish the studies by the completion date outlined in the written request. FDA officials stated that FDA is in the process of determining whether there is a continuing need for the studies described in the two declined written requests. If so, FDA will refer these studies to FNIH pending the availability of sufficient funding at FNIH. We previously reported that about 19 percent of on-patent written requests were declined from 2002 through 2005. Since the 2007 reauthorization, about 5 percent of written requests have been declined. Drug and biological products were studied under PREA and BPCA for their use in the treatment of a wide range of diseases in children, including those that are common or life threatening. FDA categorized the products studied under BPCA into 16 broad categories of disease, including endocrinology, infectious diseases, and oncology; at our request, FDA also categorized the products studied under PREA. Some of the products studied were for the treatment of diseases that are common, including those for the treatment of asthma and allergies, while other products studied treat more life-threatening diseases such as cancer or human immunodeficiency virus (HIV) infection. Additionally, some products studied were preventive vaccines. The largest numbers of products were studied for the treatment of neurological diseases and viral infectious diseases, with 23 products studied in each therapeutic area since the 2007 reauthorization. (See table 1.) This number includes both ongoing and completed studies that have been reviewed by FDA. Since the 2007 reauthorization, none of the on-patent products for which written requests were declined or not completed by sponsors have been funded for study by FNIH. A provision under BPCA allows FDA to refer declined written requests for on-patent products to FNIH pending the availability of sufficient funding. However, according to FNIH representatives, FNIH does not have sufficient funding because it is no longer raising funds for the study of on-patent drugs under BPCA. Since the 2007 reauthorization, FNIH has partially funded the study of two on-patent drugs for which written requests were declined by sponsors or not completed, but NIH initiated and also partially funded those studies prior to the 2007 reauthorization. FDA has not referred any on-patent drugs to FNIH since the 2007 reauthorization of BPCA. Since the 2007 reauthorization of BPCA, FDA has referred to NIH for funding written requests for the study of two off-patent drugs that sponsors declined or did not respond to. As of June 30, 2010, NIH initiated funding for the study of one of these two products, but NIH has not submitted any study results for this product to FDA. NIH has also funded 12 studies that are not product specific since the 2007 reauthorization of BPCA. Prior to the reauthorization of BPCA, FDA referred to NIH for funding 15 written requests for the study of off-patent drugs that were declined, or not responded to, by sponsors. Of these 15 drugs, NIH funded the study of 10.
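The off-patent referral path traced in the preceding paragraphs (decline or nonresponse within 30 days, referral to NIH, NIH-funded studies, results submitted to NIH and FDA) reduces to a small amount of decision logic. The following Python sketch is our illustrative summary of that path; the function and field names (for example, route_written_request and sponsor_response) are hypothetical and do not correspond to any actual FDA or NIH system.

    from dataclasses import dataclass

    @dataclass
    class WrittenRequest:
        product: str
        on_patent: bool
        sponsor_response: str       # "accepted", "declined", or "no_response"
        days_since_issuance: int

    def route_written_request(req: WrittenRequest) -> str:
        """Illustrative routing of a BPCA written request, per the process described above."""
        # For off-patent products, silence for 30 days counts as a decline.
        if req.sponsor_response == "no_response" and req.days_since_issuance >= 30:
            req.sponsor_response = "declined"
        if req.sponsor_response == "accepted":
            return "sponsor conducts the studies and submits results to FDA"
        if req.on_patent:
            return ("refer to FNIH if FDA finds a continuing need; if FNIH cannot fund "
                    "all studies, FDA considers requiring the studies under PREA")
        return ("refer to NIH, which may publish a request for proposals and fund the "
                "studies; results go to NIH and FDA, and labeling changes are negotiated "
                "within 180 days")

    print(route_written_request(WrittenRequest("drug X", False, "no_response", 45)))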
As of June 30, 2010, NIH had submitted to FDA study results for 2 of the 10 off-patent drugs it funded; however, NIH has not yet completely satisfied the requirements of any written request for the study of an off-patent drug under BPCA. NIH does not receive appropriations specifically to fund studies for products under BPCA. NIH officials said that NIH institutes and centers spend a total of $25 million annually on BPCA activities, which are coordinated by the Eunice Kennedy Shriver National Institute of Child Health and Human Development. NIH officials have said that when FDA refers a written request for the study of a product under BPCA to NIH, NIH must determine whether it is feasible to initiate funding for the product's studies. This determination depends on the availability of funding and the feasibility of conducting the necessary pediatric studies. NIH officials stated that funding a clinical trial with approximately 200 patients costs an average of almost $10 million over 5 years. In addition, of the $25 million NIH spends annually on BPCA activities, $4.5 million goes to the contract for NIH's BPCA data coordinating center. All of the drug and biological products with pediatric studies completed and applications reviewed since the 2007 reauthorization had labeling changes that included important pediatric information. FDA's goals for the time it takes to review applications often differ from the requirement in PREA for reaching agreement on labeling changes with the sponsor. All of the 130 drug and biological products with studies completed and applications reviewed by FDA since the 2007 reauthorization had labeling changes. As a point of comparison, in the 9 years prior to the 2007 reauthorization, 256 products had pediatric study-related labeling changes agreed upon by FDA and the product's sponsor. (See table 2.) In addition, we previously reported that not all products studied under BPCA had labeling changes. According to FDA officials, instances in which there were no labeling changes for products studied prior to the 2007 reauthorization were generally due to study results that did not establish that the products were safe and/or effective in children. The 2007 reauthorizations of PREA and BPCA provided FDA with authority to make labeling changes on its own initiative when a product has been studied in children, including when a study does not determine that the product is safe or effective in pediatric populations. The labeling changes for drug and biological products studied under PREA and BPCA reflected important pediatric information. FDA categorizes labeling changes into one or more of nine categories, and each drug or biological product can have more than one category of labeling change. These categories illustrate the important pediatric information provided in labeling changes, ranging from providing new or enhanced safety information to inserting a boxed warning for pediatric populations. Since the 2007 reauthorization, the most commonly implemented labeling change expanded the pediatric age groups for which a product was indicated. There were 99 instances of this type of labeling change. (See table 3.) For example, a labeling change for a drug treating gastroesophageal reflux disease extended the approved indication from adults only to pediatric patients 5 years of age and older. In addition, 28 labeling changes indicated that, though a study was conducted, safety and effectiveness had not been established in pediatric populations.
For example, pediatric studies on a drug meant to treat osteogenesis imperfecta, a genetic disorder commonly known as brittle bone disease, did not show a reduction in the risk of bone fracture in children. Therefore, the drug's labeling was changed to describe the study conducted and indicate that safety and effectiveness were not established in pediatric populations. Since the 2007 reauthorization, the PAC reviewed the adverse events reported for 74 drug and biological products and recommended additional labeling changes for 17 of those 74 products. (See fig. 6.) As of June 30, 2010, FDA reported that it had approved 7 of the 17 PAC-recommended labeling changes. Of the remaining 10 PAC-recommended labeling changes, FDA was still considering whether to approve 5 and had decided not to approve the other 5. According to FDA, these five PAC-recommended labeling changes were not approved because, after further review of the adverse events, FDA determined that labeling changes were not necessary. Reasons underlying these determinations include an insufficient link between the reported adverse events and the product, and the presence of confounding factors, such as other preexisting conditions that may have contributed to the adverse event. FDA's performance goal for the time it takes to review most PREA applications often differs from PREA's requirement for the time within which FDA is to reach agreement with the sponsor on labeling changes. According to FDA officials, the agency cannot adequately consider and agree upon a labeling change until it completes its review of an application. FDA is required by both PREA and BPCA to negotiate and reach agreement with the sponsor on labeling changes based on pediatric study results within 180 days of submission of the application. If FDA is unable to reach agreement with the sponsor, it is required to enter the labeling change dispute resolution process. FDA's review of suggested labeling changes is part of a broader review—FDA's review to determine whether or not to approve the application—for which it has specific performance goals that include time periods within which it seeks to review applications. Under these performance goals, applications are classified as either priority or standard, depending on the characteristics of the application, and FDA has committed to completing its review of 90 percent of priority applications within 180 days of submission and 90 percent of standard applications within 300 days of submission. BPCA requires that applications submitted under BPCA that propose a labeling change receive priority status, and all BPCA applications propose labeling changes. Therefore, all BPCA applications have been subject to 180-day review. However, according to FDA officials, only a subset of applications subject to PREA requirements—those that provide major advances in therapy or new therapies—receive priority status. All other applications submitted under PREA are to be reviewed within the standard 300 days of submission. For priority applications, FDA's goal to complete its review of the application within 180 days is consistent with the labeling change requirements of PREA and BPCA, since the two review periods—the application review goal and the labeling change review period—are both 180 days. However, for PREA applications subject to standard review, which includes most PREA applications, the goal and the required review period are different.
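The mismatch just described comes down to comparing day counts. As a minimal sketch (our framing, not an FDA rule set), the following Python fragment flags the one case where the application review goal exceeds the 180-day labeling agreement requirement.

    LABELING_AGREEMENT_DAYS = 180  # PREA and BPCA requirement

    def review_goal_days(statute: str, priority: bool) -> int:
        # All BPCA applications receive priority status; PREA applications are
        # priority only if they offer major advances in therapy or new therapies.
        return 180 if statute == "BPCA" or priority else 300

    for statute, priority in [("BPCA", True), ("PREA", True), ("PREA", False)]:
        goal = review_goal_days(statute, priority)
        label = "conflicts with" if goal > LABELING_AGREEMENT_DAYS else "is consistent with"
        print(f"{statute} {'priority' if priority else 'standard'} review goal of "
              f"{goal} days {label} the 180-day labeling requirement")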
FDA’s goal to complete its review of the application within 300 days differs from PREA’s requirement to reach agreement on labeling changes within 180 days. FDA officials acknowledged that the agency has generally not agreed upon labeling changes within the required 180 days for PREA applications subject to standard review. However, as noted previously, FDA could not account for 381 applications submitted to the agency under PREA, making it difficult for FDA to determine whether it is meeting either the requirements of PREA or the agency’s goals for these applications. FDA has never initiated the labeling change dispute resolution process. According to FDA officials, the agency has been able to reach agreement with sponsors on labeling changes without needing to initiate this process. Stakeholders whom we interviewed described several challenges to conducting pediatric studies. One challenge stakeholders, including sponsors, identified was confusion about how to comply with PREA and BPCA due to a lack of current guidance from FDA. FDA officials acknowledged that the most recent PREA guidance is draft guidance from 2005 and that the most recent BPCA guidance was revised in 1999. FDA has not provided guidance for changes to the laws from the 2007 reauthorization for PREA or BPCA. FDA officials stated that they plan to publish updated guidance on PREA and BPCA. However, they have no timeline for when they plan to do so. FDA explained that officials can discuss study timelines and questions or concerns sponsors may have regarding their study submissions throughout the process. Stakeholders said another challenge is that reauthorizations of PREA and BPCA have led to uncertainty given the time required to conduct studies. They said that since PREA and BPCA are subject to reauthorization every 5 years, some of the statutory requirements for studies could change while studies are under way or as they are being planned; therefore, there is uncertainty as to the requirements that will apply when they conduct studies. Two sponsors stated this uncertainty makes it difficult to know what will be involved in developing products for use in children over the long term, which makes it difficult to plan studies. For the 50 drugs for which FDA has completed its review since the 2007 reauthorization of BPCA, the average amount of time from when FDA issued a written request through when it completed its review of a drug’s study results was 6 years. Based on this experience, PREA and BPCA would be reauthorized during the course of a drug or biological product study, possibly changing the requirements with which the sponsors must comply. For example, the 2007 BPCA reauthorization added the requirement that sponsors submit applications at least 9 months before the end of the product’s market exclusivity. Another challenge identified by stakeholders is complying simultaneously with the U.S. laws, PREA and BPCA, and the European Union’s (EU) Paediatric Regulation. (See app. III for a description of the Paediatric Regulation.) Stakeholders stated that it is common for a sponsor to seek approval of a drug or biological product in both the EU and the United States simultaneously, making it necessary for the study to comply with PREA or BPCA and the Paediatric Regulation if the sponsor wants to market the drug in the United States and in the EU. 
For example, in the EU, the sponsor submits a plan for the study of a product in pediatric populations that must be approved by the European Medicines Agency before studies are conducted. Stakeholders stated that, in the United States, sponsors do not have formal contact with FDA regarding their pediatric study design for studies submitted under PREA until they submit completed study results to FDA. Therefore, sponsors cannot be certain that studies done to comply with the Paediatric Regulation will meet FDA requirements. Finally, stakeholders told us that the lack of economic incentives presents a challenge to sponsors' willingness to conduct pediatric studies voluntarily, as under BPCA. Stakeholders, including industry representatives, told us that sponsors are reluctant to conduct studies for drug and biological products that are nearing the end of their market exclusivity or are off-patent because there is no economic benefit associated with conducting these studies. Once a drug or biological product is off-patent, the sponsor cannot receive pediatric exclusivity for conducting pediatric studies. Stakeholders told us that these drug and biological products are among the least likely to be studied in pediatric populations. Given the lack of economic incentive, a provision in BPCA gives NIH the responsibility of awarding funds to entities that have the expertise and ability to conduct studies of off-patent drug and biological products. However, stakeholders reported that NIH's ability to conduct these studies is limited due to a lack of resources devoted to this type of research. At least 130 drug and biological products have been studied in pediatric populations under PREA and BPCA in a variety of therapeutic areas since the laws' 2007 reauthorization, resulting in important labeling changes. While this illustrates the laws' success in facilitating pediatric studies, we found that FDA did not have procedures in place to track and aggregate data about applications subject to PREA until the PeRC completed its review of the pediatric information included in the applications. Even though an application subject to PREA cannot be considered complete unless it contains pediatric study results or a request for a waiver or deferral, FDA has not been tracking whether these are included until information from the application is reviewed by the PeRC. According to FDA officials, the PeRC generally reviews information about pediatric studies submitted as part of the application near the end of FDA's application review process. Because of the timing of this review, FDA staff managing the review process cannot be certain how many of the applications submitted to the agency are subject to PREA, how many of those applications include pediatric studies, or how many include requests for waivers or deferrals, until FDA has almost completed its review of the entire application. FDA's review of applications can last 300 days or more in some cases, depending on the specific attributes of the application. FDA lacks an important internal control that would allow it to manage its review process to ensure that the agency and sponsors are meeting the law's requirements and that FDA is meeting its own mission, goals, and objectives during the period of its review of the application.
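To make concrete the kind of internal control at issue, the following Python sketch shows one minimal way aggregate PREA data could be maintained during review rather than only after the PeRC finishes. The record fields (subject_to_prea, includes_studies, and so on) are hypothetical illustrations, not the schema of DARRTS or any actual FDA system.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class Application:
        app_id: str
        subject_to_prea: bool
        includes_studies: Optional[bool]             # None = not yet determined
        requests_waiver_or_deferral: Optional[bool]  # None = not yet determined
        perc_review_complete: bool

    def prea_pipeline_summary(apps: List[Application]) -> dict:
        """Aggregate counts a manager could check at any point in the review."""
        prea = [a for a in apps if a.subject_to_prea]
        return {
            "subject_to_prea": len(prea),
            "perc_review_complete": sum(a.perc_review_complete for a in prea),
            "with_pediatric_studies": sum(a.includes_studies is True for a in prea),
            "with_waiver_or_deferral_request": sum(a.requests_waiver_or_deferral is True for a in prea),
            "status_not_yet_determined": sum(a.includes_studies is None for a in prea),
        }

    apps = [Application("A-1", True, None, None, False),
            Application("A-2", True, True, False, True)]
    print(prea_pipeline_summary(apps))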
Because several of the requirements of PREA and internal FDA goals focus on the amount of time FDA takes to conduct a review or make a decision, and because some products studied under PREA may already be on the market for adult use, it is imperative that FDA have this information available to it throughout the review process. FDA's inability to track how long it has had an application or whether or not an application includes pediatric study results until after the PeRC has completed its review could delay the dissemination of important pediatric study results. We recommend that the Commissioner of FDA move expeditiously to track applications upon their submission and throughout the review process and to maintain aggregate data, including the total number of applications that are subject to PREA and whether those applications include pediatric studies. We provided a draft of this report to the Secretary of HHS for comment. In its comments, HHS noted that PREA and BPCA have been very successful in generating important pediatric labeling of drugs and biological products. HHS also agreed that better tracking of pediatric labeling and other information is needed and expressed the hope that future improvements in its databases will allow the agency to easily identify all pediatric studies contained in all applications. HHS acknowledged that such improvements could permit health care providers, the public, and other stakeholders to conduct more interactive and thorough searches for pediatric studies, indications, and other information relevant to pediatric patients. In its comments, HHS disagreed with our finding that FDA does not have a system to track data about applications under PREA. The comments note that the FDA Center for Biologics Evaluation and Research has a specific code in its Regulatory Management System for Biologics Licensing Application that allows it to track PREA-filed applications for biological products. HHS describes the FDA Center for Drug Evaluation and Research's process for tracking applications using DARRTS and suggests that DARRTS allows FDA to track the status of any application at any given time. However, our recommendation is not based on FDA's ability to determine the status of individual applications, but rather on its lack of aggregate data on applications that are subject to PREA during its review of the applications, data that would allow it to better manage its review process. We clarified our discussion of our findings in this area and the wording of our recommendation. As discussed in this report, FDA was unable to determine how many of the applications that had been filed with the agency since PREA's 2007 reauthorization were subject to PREA. We had initially requested this information in an effort to provide context to some of the other information that we reported about FDA's implementation of PREA. FDA was able to report that approximately 830 applications were subject to PREA, but was unable to provide a precise number. Since this was considerably more than the 449 applications that had been reviewed by the PeRC, we sought additional information about the status of these applications. In response to our request, FDA officials explained that the agency did not maintain this information and that determining the status of these applications would require that they engage in a labor-intensive manual process that would require an extensive investment of FDA resources and would take months to complete.
We believe that FDA’s lack of aggregate data about an important program designed to enhance the safety of drug and biological products for use in children is inconsistent with sound internal controls because it does not provide FDA officials with the information they need to effectively manage the program to ensure that the review process is being implemented in accordance with statutory and other requirements until the process is almost complete. In its comments, HHS states that in May 2011, FDA made an improvement to DARRTS that was not in place during the time of our review. HHS states that the improvement will allow FDA to better track future applications that are subject to PREA. However, the comments do not state whether the improvement will allow FDA to determine during its review process whether applications include studies or requests for waivers or deferrals. While it remains unclear what data will be readily available to FDA officials as they manage this program, FDA’s efforts to improve its tracking of applications are consistent with the goal of our recommendation and should enable it to better track future applications. HHS’s comments state that FDA hopes to include enhanced information about applications in DARRTS retrospectively, but notes that the agency will have to ensure that there are available resources for such a project. Therefore, DARRTS will not include this improved data for applications that are currently undergoing review. HHS states that FDA maintains data about completed studies under PREA on its Web site. However, this data is compiled and placed on FDA’s Web site after FDA’s review of the applications is complete. Our finding and recommendation address the lack of data that FDA has available about PREA applications during the review process, which can last 300 days or more. We incorporated changes to the report to address HHS’s comments about FDA’s ability to track applications and incorporated technical comments as appropriate. HHS’s comments are reprinted in appendix IV. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or crossem@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. The Food and Drug Administration (FDA) Amendments Act of 2007 required that we describe the efforts made by FDA and the National Institutes of Health (NIH) to encourage that studies be conducted in children 4 weeks old or less, also known as neonates. This appendix describes the efforts of FDA and NIH to encourage studies in neonates and their efforts to ensure that those studies are safe. We also describe the number of products with completed and ongoing studies in neonates since the 2007 reauthorization of the Pediatric Research Equity Act (PREA) and the Best Pharmaceuticals for Children Act (BPCA). In addition, we describe the challenges to increasing the inclusion of neonates in pediatric drug studies identified by physicians. To describe the efforts of FDA and NIH to encourage studies in neonates, we interviewed FDA and NIH officials and examined FDA and NIH data to summarize the number of pediatric drug studies being conducted in neonates under PREA and BPCA. To assess the reliability of the data FDA and NIH provided, we interviewed agency officials. 
FDA and NIH officials described how they maintained data on pediatric studies conducted under PREA and BPCA and on the resulting labeling changes. We found the data reliable for our purposes. We also reviewed literature on studies conducted in neonates and barriers to these studies. We interviewed stakeholders, including representatives from three trade groups: the Pharmaceutical Research and Manufacturers of America, the Biotechnology Industry Organization, and the Generic Pharmaceutical Association. We also interviewed health advocacy organizations, including the American Academy of Pediatrics, the National Organization for Rare Disorders, the Elizabeth Glaser Pediatric AIDS Foundation, the Tufts Center for the Study of Drug Development, the Institute for Pediatric Innovation, and the Pediatric Pharmacy Advocacy Group. To describe the challenges to increasing the inclusion of neonates in pediatric drug studies identified by physicians, we convened two panel discussions; we were assisted in convening one of the panels by the American Academy of Pediatrics and the other by a director of neonatology at a large research hospital. The panelists in both instances were physicians who conducted pediatric drug studies in neonates. We also interviewed FDA and NIH officials. FDA's efforts to encourage the inclusion of neonates in pediatric drug studies, and to ensure that those studies are safe and effective, have been focused on including neonates in its written requests. However, in some instances FDA has requested neonates' inclusion but not required it. Since the 2007 reauthorization of BPCA, FDA has issued four written requests to drug sponsors that have mentioned neonates specifically. FDA required the inclusion of neonates in the written request for the study of one of the four drugs. FDA's written requests for the three other drugs asked for the inclusion of neonates in the study; however, the sponsors of these products had the option of not including neonates in the studies. The sponsors will inform FDA as to whether they included neonates in the studies when they submit completed study results to FDA for review. Sponsors have submitted completed studies to FDA that have included neonates for nine products—eight drugs and one biological product—since the 2007 reauthorization; FDA has reviewed all study results, and labeling changes have been made reflecting neonate information for all of the products. Seven of these studies were submitted under BPCA; two were submitted under PREA. NIH has funded studies under BPCA for five drugs that have included neonates. These studies were initiated before the 2007 reauthorization, but are ongoing. Additionally, NIH has conducted several activities under BPCA to ensure the safety and effectiveness of drugs in neonates, including neonates that are premature. These activities include the 2009 co-funding of a large-scale study of the diagnosis and treatment of hypotension in premature infants, funding of a study to determine outcome measures for chronic lung disease in premature infants, and the development of a small-volume sampling technique for neonates with congenital heart disease. FDA officials explained that a limited number of the studies conducted under PREA have included neonates because PREA requires only that pediatric studies be conducted for the indication described in the drug application, which typically applies to adults and older pediatric populations rather than to neonates.
Additionally, PREA provides sponsors with the option to request that required pediatric studies be waived by FDA when there is a valid reason. For some applications, FDA has agreed to waive studies after it has determined that including neonates in a drug study may be impossible or highly impracticable due to safety or ethical concerns. FDA and NIH officials explained that they face challenges in increasing the inclusion of neonates in pediatric studies under BPCA. BPCA authorizes FDA to provide an incentive of an additional 6 months of market exclusivity, known as pediatric exclusivity, to product sponsors that conduct pediatric studies requested by FDA. FDA officials explained that they have been granting pediatric exclusivity for the study of products in children older than 1 month; because it may be difficult for sponsors to receive additional pediatric exclusivity, it is difficult to have manufacturers go back and do the study in neonates. FDA officials told us that the neonate population has diseases that are very different from those of other pediatric populations and that there are limited tools that can be used to study these diseases. FDA and NIH officials told us that there are also ethical issues that arise when working with this population that create a barrier. Based on our review of the literature, we found there is an ethical issue concerning whether neonates are a vulnerable population that should not be enrolled in trials where there may be increased risk to their health. The physicians we spoke with as part of our two panels explained that they encounter numerous challenges to conducting studies in neonates. One challenge the panelists described is obtaining informed consent from the parents, which is required for the neonate to be enrolled in a study. For example, one panelist stated that because the mother may be medicated from her delivery, it may be difficult to obtain consent from her. One panelist stated that he encounters families for which English is their second language, and he may need them to review and understand a complex 10- to 12-page study outline that is written in English. The panelist explained that while his hospital provides doctors who speak another language and may communicate in that language with families for which English is a second language, they may encounter another challenge if the family is not able to read in their native language. The panel explained that there are also scientific challenges to conducting studies in neonates. One scientific challenge is that the amount of blood in neonates is extremely limited. However, blood must be drawn to determine proper dosing of the products being tested, requiring doctors to do needle pricks to obtain blood from the neonate. These pricks are in addition to those that must be done to monitor the health of the neonate, and there may not be enough blood both to test for proper dosing and to monitor the neonate's health. The panel went on to explain that the outcomes of the study must be observed in the neonate for 3 to 5 years after the study. This level of monitoring is costly to the sponsor and can be an economic disincentive to conducting studies in neonates. The panel also explained that neonates are heterogeneous—there can be a significant difference between a neonate born at 23 weeks and one born at 40 weeks—and any study designed to include them must account for this, making it difficult to generalize the study results.
Panelists said that another challenge to increasing the inclusion of neonates in studies involves FDA, stating that FDA sometimes seems to be creating barriers rather than working to include neonates in studies. For example, they said that FDA has required that a product be proven safe and effective for adults before it can be studied in neonates; however, the panelists stated that because neonates often have illnesses that are specific to their age and condition, this requirement does not make sense. Furthermore, one panelist stated that she believed that FDA did not have enough neonatologists on staff to assist in preparing written requests. She also stated that it is important that study designs that include neonates be reviewed by neonatologists and not general pediatricians because neonatologists understand the issues that must be confronted in the neonatal intensive care unit. FDA's Pediatric Review Committee, which reviews written requests and determines whether waivers and deferrals should be granted, has about 40 members. However, FDA officials we interviewed said that there is only one neonatologist on the Committee. Additionally, the FDA officials stated that there are three neonatologists in the two FDA divisions that review pediatric studies. FDA officials said that they do not have the resources to hire additional neonatologists. The Food and Drug Administration Amendments Act of 2007 (FDAAA) requires that FDA consider the adequate representation of children of ethnic and racial minorities when issuing written requests to sponsors to conduct pediatric studies for a product under the Best Pharmaceuticals for Children Act (BPCA). It is important to include minorities in pediatric studies because proteins, metabolizing enzymes, and genetic traits can differ among races and ethnicities. We previously reported that these differences may result in a product having adverse or unexpected side effects for users depending on their race or ethnicity. To examine how FDA considered the representation of ethnic and racial minority participants in product studies conducted under BPCA, we reviewed the 37 written requests that FDA issued to sponsors from the 2007 reauthorization of BPCA on September 27, 2007, through June 30, 2010. FDA issued guidance in 2005 on the collection of race and ethnicity data in clinical trials, recommending that sponsors use a standardized approach developed by the Office of Management and Budget to report the race and ethnicity of study participants. FDA's 2005 guidance recommends, rather than requires, that sponsors use the specified categories because participants' racial and ethnic data may not be able to be collected in some instances and because the specified categories may not be sufficient or appropriate for some studies. For example, when studies are conducted outside of the United States, the recommended categories may not adequately describe the racial and ethnic groups in foreign countries. FDA has issued 37 written requests to sponsors for the study of on-patent products under BPCA since the 2007 reauthorization. In these 37 written requests, FDA asked that sponsors include information on the representation of ethnic and racial minorities for all participants using the standardized categories specified in agency guidance when responding to written requests. In all but 2 of the 37 written requests, FDA also requested that if the sponsor chose to use other categories, the sponsor obtain FDA's agreement on the use of the alternate categories.
The European Union’s Paediatric Regulation for the development of drug and biological products in pediatric populations was implemented in January of 2007 in order to facilitate the development of, and improve the availability of information on, products for use in children. The European Union’s Paediatric Regulation is similar to laws on pediatric studies in the United States, some form of which has been in existence since 1997. To describe the European Union’s Paediatric Regulation for drugs and biological products, we examined European Medicines Agency literature, the Paediatric Regulation, United States laws, and additional sources regarding United States and European Union pediatric laws and regulations. We also interviewed FDA officials. The Paediatric Regulation requires sponsors to submit a plan for the study of a product in pediatric populations, known as a paediatric investigation plan (PIP), early in the development of a new product. PIPs are required to include the sponsor’s proposed timing and methods for conducting pediatric studies in all age groups. Sponsors must submit PIPs to the Paediatric Committee, which was created by the Paediatric Regulation. Sponsors submit to the Paediatric Committee through the European Medicines Agency. The Paediatric Committee reviews the PIP and determines whether to agree or refuse the study plan. The PIP is a binding agreement between the sponsor and the European Medicines Agency, but can be modified as necessary. The Paediatric Regulation allows for the agency to either defer pediatric studies until the product has been studied in adults or waive the studies altogether in certain circumstances. The Paediatric Committee is responsible for granting or denying deferrals and waivers. When studies are deferred, the sponsor must still submit a PIP that includes details on the pediatric studies that will be conducted and when those studies will begin, but when studies are waived, the requirement to submit a PIP is also waived. Once a new product is ready to be marketed, the sponsor submits a marketing authorization application to the European Medicines Agency that must include, among other things, the results of pediatric studies conducted in accordance with the PIP or proof that a waiver or deferral of the pediatric studies was granted. If the sponsor has conducted studies in compliance with the PIP, it is entitled to a six-month extension of the product’s market exclusivity. Additional information on the Paediatric Regulation can be found on the European Medicines Agency website. The European Union and the United States collaborate by exchanging information in order to ensure that pediatric studies are conducted in a scientifically rigorous and ethical manner and that pediatric patients are not exposed to duplicative studies. Stakeholders stated that it is common for a sponsor to seek approval of a drug or biological product in both the EU and the United States, making it necessary for a sponsor to comply with both the EU and United States’ pediatric study processes if it wants to market the drug in both locations. In addition, the European Medicines Agency and the FDA communicate and collaborate to share information such as the status of current studies, written requests, PIPs, waivers and deferrals, study results, safety concerns, and other topics. According to FDA’s Web site, from August 2007 to March 2009, the European Medicines Agency and the FDA discussed 144 products. 
The communication and information sharing between the European Medicines Agency and FDA take place through monthly teleconferences and a secure electronic system. In addition to the contact named above, key contributors to this report were Tom Conahan, Assistant Director; Rachel E. Batkins; Romonda McKinney Bumpus; Kathleen Diamond; Cathleen Hamann; Lisa Motley; Kathryn Richter; and Jessica C. Smith.
In 2007, Congress reauthorized two laws, the Pediatric Research Equity Act (PREA) and the Best Pharmaceuticals for Children Act (BPCA). PREA requires that sponsors conduct pediatric studies for certain products unless the Department of Health and Human Services' (HHS) Food and Drug Administration (FDA) grants a waiver or deferral. Sponsors submit studies to FDA in applications for review. BPCA is voluntary for sponsors. The FDA Amendments Act of 2007 required that GAO describe the effect of these laws since the 2007 reauthorization. GAO (1) examined how many and what types of products have been studied; (2) described the number and type of labeling changes and FDA's review periods; and (3) described challenges identified by stakeholders to conducting studies. GAO examined data on the studies from the 2007 reauthorization through June 2010, reviewed statutory requirements, and interviewed stakeholders and agency officials. At least 130 products--80 products under PREA and 50 under BPCA--have been studied for use in children since the 2007 reauthorization. However, FDA cannot be certain how many additional products may have been studied because FDA does not track and aggregate data about applications submitted under PREA that would allow it to manage the review process. FDA was unable to provide information about some applications that had been submitted to the agency that were subject to PREA. Recent improvements to FDA's data system might assist the agency in tracking future applications. Under PREA, FDA has granted most of the study waivers and deferrals requested by sponsors since the 2007 reauthorization. Under BPCA, FDA granted pediatric exclusivity--an additional 6 months of market exclusivity, which generally delays marketing of generic forms of the product--to the sponsors of 44 of the 50 drugs in exchange for conducting pediatric studies. Because BPCA is voluntary, sponsors may decline FDA's request for pediatric studies. Although BPCA includes provisions to encourage the study of drugs when sponsors have declined FDA's request, few drugs have been studied under these provisions. Since the 2007 reauthorization, all of the 130 products with pediatric studies completed and applications reviewed under PREA and BPCA had labeling changes that included important pediatric information. The most commonly implemented labeling change expanded the pediatric age groups for which a product was indicated. The next most common type of labeling change indicated that safety and effectiveness had not been established in pediatric populations and provided a description of the study conducted. Additional labeling changes were recommended for products as a result of FDA's monitoring of adverse events associated with products after they had been approved for marketing. FDA officials said they need to complete their review of the application, including all studies, before they can reach agreement with the sponsor on labeling changes. Stakeholders, including sponsors, pediatricians, and health advocacy organizations, described challenges faced by sponsors that could limit the success of PREA and BPCA. Those challenges included confusion about how to comply with PREA and BPCA due to a lack of guidance from FDA for changes to the laws from the 2007 reauthorization of PREA or BPCA. FDA officials explained that they mitigate this lack of guidance by discussing questions or concerns that sponsors have regarding their pediatric studies with sponsors throughout the process. 
An additional challenge sponsors described was a lack of economic incentives to study products with no remaining market exclusivity. GAO recommends that the Commissioner of FDA track applications during its review process and maintain aggregate data on applications subject to PREA. HHS agreed that better tracking of information is needed but disagreed with GAO's finding that it does not track applications. While FDA is able to identify the status of individual applications during its review, it has not maintained data that would allow it to better manage its review process.
The definition of commercial acquisition has evolved over the last decade to mean the purchase of items customarily used by and sold (or offered) to the general public, including items with minor modifications of a type not customarily available in the commercial marketplace made to meet federal government requirements, or services of a type offered and sold competitively in substantial quantities in the commercial marketplace. The idea of increasing the government's use of commercial acquisition is not new. Figure 1 identifies key legislation and federal-level commissions that emphasized the use of and expected benefits of commercial acquisition over the last several decades. Among the milestones shown in figure 1: a 1984 act required promotion of the use of commercial products whenever practicable; a 1986 act required DOD to acquire nondevelopmental items (commercial items) to the maximum extent practicable; a commission recommended that DOD expand the use of commercial products and commercial-style competition, and a panel recommendation called for facilitating government access to commercial technologies; the 1994 act expanded the commercial item definition to include nondevelopmental items, items not yet on the market, and "of a type" items and stand-alone services, exempted commercial item procurement from requirements to submit certified cost or pricing data to the government under certain conditions, and provided a preference for the acquisition of commercial items and streamlined mechanisms for their procurement; a 1996 act exempted commercial item acquisitions from requirements to submit certified cost or pricing data and to comply with cost accounting standards; and a 2003 act allowed different types of contracts to be treated as commercial acquisitions under certain circumstances. The National Defense Authorization Act for Fiscal Year 1987 required DOD to submit a report to Congress on its progress toward meeting the requirement to acquire commercial items to the maximum extent practicable. DOD's subsequent report to Congress in response to the act's requirement identified several impediments to the use of commercial acquisition, including a requirement that contractors provide cost or pricing data to the government. The identification of providing the government cost or pricing data as an impediment stood in contrast to the requirements of the Truth in Negotiations Act of 1962. This act generally requires contractors to submit cost or pricing data to the government before the award of a negotiated contract and certify that the data are accurate, complete, and current as a way to provide information parity between the contractor and the government. Because a primary maxim in contracting is that competition drives down prices, one of the purposes of the legislation was to provide the government with all the facts on the cost or pricing data the contractor used to prepare a proposal, particularly when there is no competition. In that way, the government believed it would have the information necessary to protect itself from paying excessive prices. In the late 1980s and early 1990s, however, concerns about impediments that might prevent commercial companies from doing business with the government continued. The concern about requiring cost or pricing data in commercial acquisition was a factor in passing several laws in the 1990s designed to streamline acquisition in general, and commercial acquisition specifically, by more broadly exempting commercial acquisitions from the cost or pricing data requirement (see fig. 1).
Although commercial acquisition regulations now preclude the government from obtaining cost or pricing data from contractors in commercial acquisitions, the government is permitted to obtain pricing information from sources other than the offering contractor. If this information proves inadequate, the government can require the offering contractor to provide additional information, known as information other than cost or pricing data, although the government must, to the maximum extent practicable, limit the scope of the request to include only information in a form regularly maintained by the offering contractor. In early 2001, OSD reemphasized to the military departments and defense agencies that commercial acquisition should be used to the maximum extent possible to effectively provide the technological advantages needed to win future conflicts. OSD concluded that the military departments and agencies must uniformly look first to the commercial marketplace before developing new systems, upgrading legacy systems, or procuring spare parts and support services. To help ensure the increased use of commercial acquisition, OSD established, and the Air Force implemented, two commercial acquisition goals to be achieved by the end of fiscal year 2005. These were to double the dollar value of commercial acquisition contract actions awarded in 1999 (for the Air Force this meant going from about $3 billion to about $6 billion) and to strive to increase the number of commercial contract actions awarded to 50 percent of all Air Force contract actions. In setting these goals, OSD expected that the increased use of commercial acquisition would provide DOD with greater access to commercial markets (products and service types) with increased competition, better prices, and new market entrants and/or technologies. Additional expected benefits of commercial acquisition are listed in appendix II. As its overall spending has increased, the Air Force has increased spending using commercial acquisition, from $4.8 billion in fiscal year 2001 to over $8 billion in fiscal year 2005 (see fig. 2). The Air Force also has had some success in achieving commercial acquisition goals; for example, it has doubled the amount spent using commercial acquisition since fiscal year 1999 (see fig. 3). However, it has not achieved the goal of making 50 percent of all contract actions commercial (see fig. 4). Nonetheless, the Air Force did not establish measures, nor did it collect information, to determine whether the benefits expected from commercial acquisition were being achieved. As a result, it is unclear if or how the Air Force has benefited from increased use of commercial acquisition. The Air Force has used commercial acquisition to buy a broad range of goods and services, including major systems. For example, the Air Force used commercial acquisition to buy the Joint Primary Aircraft Training System and a range of goods and services such as radio and communication equipment, aircraft components, and repair services. However, our analysis indicates that at least one of the expected benefits, attracting new market entrants, has not materialized. The majority of Air Force commercial contracts in fiscal years 2003 and 2004 were awarded to traditional defense contractors. The Air Force was able to achieve its goal of doubling spending using commercial acquisition by the end of fiscal year 2003 and has exceeded that goal through fiscal year 2005 (see fig. 3).
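Checking the two goals against the figures above is simple arithmetic. The following Python sketch uses the approximate dollar amounts reported in this section; the action-share value passed in is a placeholder, since the report presents that trend only in figure 4.

    def goals_met(commercial_dollars_billions: float, commercial_action_share: float) -> dict:
        """Evaluate the two OSD/Air Force commercial acquisition goals."""
        baseline_1999 = 3.0  # approximate FY1999 Air Force commercial dollars, in billions
        return {
            "dollar_goal_met": commercial_dollars_billions >= 2 * baseline_1999,
            "action_share_goal_met": commercial_action_share >= 0.50,
        }

    # FY2005: over $8 billion in commercial spending (dollar goal met); figure 4
    # shows the share of commercial contract actions remained below 50 percent
    # (goal not met). The 0.45 below is a placeholder, not a reported figure.
    print(goals_met(8.0, 0.45))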
However, the Air Force did not increase commercial contract actions awarded to 50 percent of all awards (see fig. 4). These goals expired at the end of fiscal year 2005 and had not been extended or renewed at the time this report was published. A senior procurement analyst in the Office of the Under Secretary of Defense, Defense Procurement and Acquisition Policy, noted that he believed the goals had essentially been met and that the current law stating that nondevelopmental items (commercial items) are to be used to the maximum extent practicable is sufficient. OSD has indicated that the increased use of commercial acquisition should bring about the benefits of greater access to commercial markets, including increased competition, getting better prices, and access to new market entrants (contractors) and/or technologies. Although the Air Force has increased the use of commercial acquisition, neither OSD nor the Air Force has attempted to measure whether the benefits expected from this increased use are being achieved. The Air Force has stated that the appropriateness of the application of the FAR commercial item definition determines its use of the authority, not whether any benefits would be gained. A 2005 study sponsored by the Air Force and conducted by the RAND Corporation, a nonprofit research organization, looked at Air Force commercial acquisition and found that the data needed to determine whether the expected benefits of commercial acquisition were being realized were not available. The report concluded that this lack of data has made it difficult to measure whether this type of acquisition provides the benefits claimed or what challenges exist. With respect to anticipated cost and schedule savings, RAND reported that DOD provided no direction for tracking these expected benefits, and as a result, such data are not collected by either DOD or contractors. RAND also reported that DOD does not develop estimates of the benefits expected from using commercial acquisition versus other types of acquisitions prior to commencing contract award activities. RAND did not comment on the cost of quantifying commercial acquisition benefits. While the Air Force has used commercial acquisition to buy a broad range of goods and services, including major systems, it continues to do business mainly with traditional contractors. By increasing the use of commercial acquisition, OSD hoped the Air Force would be able to draw nontraditional contractors into defense contracting and gain greater access to new commercially developed technologies. Nontraditional contractors were expected to offer more efficient business practices and new technologies to meet government requirements. OSD commercial acquisition guidance emphasizes the need to incorporate commercial items into defense systems because the commercial sector often drives critical technologies. Even with this increased emphasis on commercial acquisition, the Air Force has primarily continued to award its commercial contracts to traditional defense contractors. To determine the extent to which the Air Force attracted nontraditional contractors using commercial acquisition, we reviewed acquisition data on the 98 contractors who received large (over $5 million) commercial contracts in fiscal years 2003 and 2004. We found that 87 of the 98 contractors, or 89 percent, were included on DOD's Top 100 or Air Force Top 50 contractor lists or had previously received contracts with DOD since fiscal year 1996.
Only 11 contractors had neither previously received a DOD contract nor appeared on either list (see fig. 5). Further, 7 of the 11 contractors that had not previously received large-dollar contracts from DOD performed more routine services like transportation, housekeeping, or architect and engineering services. A list of the traditional and nontraditional contractors is included as appendix III. In a 2005 commercial acquisition study, RAND concluded that there is very little evidence that the use of commercial acquisition has encouraged greater numbers of civilian (non-DOD) commercial contractors to compete for DOD contracts for major military-unique items. In general, we found that commercial acquisition was used to buy a variety of goods and services. These include, but are not limited to, aircraft engines and structural components, telecommunication services, maintenance and repair of equipment, program management/support services, and housekeeping services. We also found three major Air Force acquisition programs for which commercial actions constituted at least 75 percent of contract dollars obligated. The three major acquisition programs are the latest version of the Air Force C-130 cargo aircraft; the Joint Primary Aircraft Training System, including a new trainer aircraft, the ground-based training system, and a training management system; and the National Airspace System to modernize DOD air traffic control facilities in parallel with the Federal Aviation Administration (FAA) to ensure safe operation of aircraft in accordance with statutes and DOD/FAA agreements, according to an Air Force official. Our work, that of DOD’s Inspector General, and that of others have shown that government contracting officials face challenges using commercial acquisition. For example, improperly classifying an acquisition as a commercial acquisition leaves the Air Force vulnerable to accepting prices that may not be the best value for the department because under commercial acquisition regulations, the government is precluded from requesting cost or pricing information. Our review of Air Force contract files and DOD Inspector General reports showed that Air Force officials disagreed about the designation of some acquisitions as commercial. Furthermore, the director of Defense Procurement and Acquisition Policy recently testified before the Federal Acquisition Advisory Panel that he is concerned about some items and services being identified as commercial that are not sold in an existing marketplace because there are no assurances that the price is reasonable. The Air Force’s use of commercial acquisition has been accompanied by an increased amount of dollars being awarded sole-source. Similar to misclassifying acquisitions as commercial, the lack of market-based competition may result in the Air Force’s acceptance of prices that may not be the best value for the department. OSD cites the general advantages of competition and in its policy urges contracting officials to avoid sole-source situations because sometimes contractors may attempt to exploit the lack of competitive markets and demand unreasonable prices. While OSD acknowledges some sole-source situations may be unavoidable, we found increasing sole-source spending on Air Force commercial contracts over the last 6 years. Also, of the 20 new commercial awards for products over $5 million in fiscal year 2004, half were awarded sole-source, with traditional contractors receiving most of those sole-source awards.
Misclassification of items as commercial can leave the Air Force vulnerable to accepting prices that are not the best value for the department. Our review of Air Force contract files included two cases where there were internal Air Force disagreements regarding determinations of commerciality. The items in question were a C-130E and a C-130H aircraft. During our review, some Air Force officials also expressed concern, especially in sole-source situations, about their ability to determine whether the prices being charged are reasonable. A major difference between Federal Acquisition Regulation (FAR) Part 15, “Contracting by Negotiation,” and Part 12, “Acquisition of Commercial Items,” is that under Part 12 the government is prohibited from obtaining cost or pricing data. Under FAR Part 15, the government is generally required to obtain cost or pricing data (unless certain exceptions apply) from contractors to help determine whether it is getting a good price. DOD’s Inspector General has recently issued reports asserting that three Air Force acquisitions—the C-130J cargo aircraft, the KC-767A tanker aircraft, and F-16 simulator services—were inappropriately designated as commercial, concluding that they should not have been planned or purchased as commercial acquisitions because they were unique to the military. For example, the Inspector General reported in March 2006 that the Air Force had improperly used commercial acquisition to buy F-16 simulator services because contracting officials misinterpreted the definition of commercial services. As a result, the Air Force placed itself at a disadvantage, restricting its ability to determine whether the price charged was reasonable. By using commercial acquisition, the Air Force was precluded from requesting certified cost or pricing data for a service in which the department is the sole customer. On the basis of the Inspector General’s report, the Air Force agreed and has begun to change its contracting approach from a commercial acquisition to a noncommercial acquisition. Other recent efforts to improve the government’s use of commercial acquisition include a high-level panel’s consideration of changes to clarify the definition of commercial acquisition, as well as Air Force officials’ pursuit of similar regulatory changes. The Federal Acquisition Advisory Panel is examining, among other things, commercial acquisition practices. The panel is also reviewing preliminary recommendations to modify the commercial item definition found in federal regulation. The panel noted, in a briefing on its Web page, that in the private sector, competition in efficient markets is a principle relied on to a great extent to assure price reasonableness. The panel cites three government commercial acquisition practices related to the commercial item definition that depart from private-sector practices: first, commercial acquisition procedures are used for sole-source contracts; second, items are acquired commercially even when the government is the predominant or only buyer; and third, the “commercial item” definition is broad enough to admit items for which an efficient market does not exist to ensure price reasonableness. DOD’s Defense Procurement and Acquisition Policy Director recently addressed the Acquisition Advisory Panel and identified concerns that some acquisitions are being designated commercial that are not commercial.
The Director expressed his view that a commercial item is one for which a marketplace exists, meaning the item has been sold to commercial companies (not just DOD). The Director stated that if someone is selling “to us (the government) and only to us, that’s not a commercial price.” In addition, the Director testified that DOD intends to create a tool, a decision matrix, that will enable contracting officials to identify the right contracting mechanism after completing their market research. The purpose is to have DOD and the military services use commercial acquisition effectively and correctly, in a consistent way. OSD guidance specifically states that commercial acquisition was not intended to allow military-unique items to be purchased commercially. When an item is designated as commercial, the Air Force should be able to determine whether the price is reasonable on the basis of prices in the commercial market. If the Air Force designates an item as commercial when it is not readily available in the commercial market, this limits its ability to assess the reasonableness of the contractor’s price because it might, especially in sole-source situations, have less information on prices to make its decision. Restrictions on the use of commercial acquisition to procure military-unique major weapon systems were recently established in the Fiscal Year 2006 DOD Authorization Act. The act requires that to use commercial acquisition procedures for major weapon systems, the Secretary of Defense must now (1) determine that the procurement meets the definition of “commercial item,” (2) determine that national security objectives necessitate the purchase of the system as a commercial item, and (3) give Congress at least 30 days’ notice before purchasing a major acquisition program using commercial acquisition. To implement this requirement, an interim Defense Federal Acquisition Regulation Supplement (DFARS) rule is pending publication. The Air Force intends to implement the DFARS rule by requiring that requests for Secretary of Defense approval to purchase major weapon systems as commercial items include a description of the benefits associated with increased competition, better prices, and new market entrants and/or technologies. When we discussed the purchase of major weapon systems using commercial acquisition with top DOD officials, they informed us there are plans to transition both the C-130J and the Joint Primary Aircraft Training System (JPATS) contracts, as well as a future contract for F-16 fighter aircraft simulator services, from commercial to noncommercial contracts. Further, a top DOD acquisition official said that in the future DOD will more carefully scrutinize the use of commercial acquisition, especially on major acquisition programs. In addition, Air Force contracting officials have submitted proposals as cases to the Defense Acquisition Regulation Council and the Civilian Agency Acquisition Council seeking clarification of the definitions of “commercial item” and “cost or pricing data” related to commercial acquisition. While one case was closed, it highlights continued efforts to appropriately classify items as commercial.
For example, the Air Force proposed a change to the DFARS, which was subsequently referred by the Defense Acquisition Regulation Council as a case for the Federal Acquisition Regulation, to tighten the commercial item definition. The definition found in federal regulation states in part: “commercial item means any item, other than real property, that is of a type customarily used by the general public.” In an attachment to the 2001 memo instituting the commercial acquisition goals, OSD cautioned that the phrase “of a type” is not intended to allow the use of commercial acquisition to acquire sole-source, military-unique items that are not closely related to items already in the marketplace. A second FAR case attempts to address confusion about what qualifies as cost or pricing data in relation to commercial acquisition. The case, if made final, will clarify that the government can ask contractors for cost or pricing data, just not certified cost or pricing data. OSD emphasis on increasing the use of commercial acquisition includes guidance on limiting use of commercial acquisition for sole-source procurements. This guidance advises contracting officials to avoid sole-source commercial acquisitions, in part because sometimes contractors may attempt to exploit the lack of competition and demand unreasonable prices. When such situations are unavoidable, OSD advocates use of other price analysis tools outlined in federal regulation to mitigate risk. The FAR provides that adequate price competition on contracts is generally sufficient to determine price reasonableness. Adequate price competition exists when (1) the government receives at least two offers, submitted by responsible offerors competing independently, that satisfy the government’s requirement; (2) there was a reasonable expectation of competition; or (3) a proposed price is clearly reasonable based on price analysis. In the event price competition is not sufficient, the government can seek additional information, beginning with government sources and other sources apart from the offeror and, last, from the offeror itself if necessary. There are circumstances in which an acquisition, including one for commercial items, can be awarded without competition. These include instances in which (1) there is only one responsible source and there are no other supplies or services that will satisfy agency requirements, such as when a contractor has exclusive data rights and copyrights; (2) the government has an unusual and compelling urgent need for a product or service; or (3) the acquisition is required by statute or international agreement. Such awards for other than full and open competition must be justified and approved in writing. Despite guidance directing the Air Force to avoid sole-source situations, from fiscal years 2000 through 2005, sole-source spending on Air Force commercial acquisition contracts more than doubled. Specifically, sole-source dollars as a percentage of total commercial acquisition dollars for awards over $5 million increased from 12 percent in fiscal year 2000 to 26 percent in fiscal year 2005. This recent trend appears inconsistent with OSD guidance to avoid sole-source commercial acquisition situations. Our review found that of the Air Force’s 20 fiscal year 2004 commercial product acquisition awards over $5 million, 10 were made on a sole-source basis. Altogether, fiscal year 2004 obligations on the 20 contracts totaled $329 million.
Obligations on the 10 sole-source awards totaled $172 million, or 52 percent (additional observations from our review of the 20 contracts are found in app. IV). Furthermore, at least one of the expected benefits of commercial acquisition—attracting new market entrants—has not materialized through the Air Force’s use of sole-source commercial acquisitions for products in fiscal year 2004. Specifically, traditional defense contractors were used on 8 of the 10 fiscal year 2004 sole-source product awards. By establishing goals that measure only use and not the benefits expected, the Air Force is unable to determine whether it has benefited from increased use of commercial acquisition. The benefits to the government of commercial acquisition have not been demonstrated. Little evidence has been collected on the claimed benefits, such as cost savings, better pricing, increased access to commercial vendors, and greater numbers of commercial firms competing for Air Force contracts. Not only is it unclear whether commercial acquisition is bringing benefits to the Air Force, but the Air Force may also be increasing risk without knowing whether the added risk is balanced by progress toward benefits that could yield considerable savings. While recognizing that the Air Force may need to make some sole-source purchases using commercial acquisition, the trend of increasing sole-source spending appears inconsistent with OSD guidance to limit situations where contractors may attempt to exploit the lack of competitive markets and demand unreasonable prices. When sole-source situations are necessary, contracting officials should be able to identify the benefits of using commercial acquisition for individual procurements that would otherwise be unattainable. To help ensure that the Air Force is able to measure the benefits expected from commercial acquisition, we recommend that the Secretary of the Air Force collect information that would allow the Air Force to evaluate the extent of cost savings, increased access to commercial markets, and greater access to nontraditional contractors. For example, the Air Force could measure the number of nontraditional contractors it reaches using commercial acquisition. To help improve commercial acquisition and reduce the potential for risk by limiting situations where commercial acquisition contracts are being awarded sole-source, we also recommend that the Secretary of the Air Force strive to limit the acquisition of commercial products and services in sole-source environments in concert with OSD guidance. However, in the cases where it is necessary to award sole-source, the Secretary should collect the information necessary to evaluate the benefit(s) of awarding a commercial versus a noncommercial contract. DOD provided written comments on a draft of this report. DOD agreed in principle with the recommendations and described the actions it will take to address them. The comments are discussed below and are reprinted in appendix VII. DOD partially agreed with our recommendation to measure the benefits expected from commercial acquisition by collecting information to evaluate the extent of cost savings, increased access to commercial markets, and greater access to nontraditional contractors. DOD stated that it agrees in principle that it would be worthwhile to know whether the expected benefits from commercial acquisition are materializing and that it will examine ways to collect information on the number of nontraditional contractors it is reaching through commercial acquisition.
However, DOD noted that collecting information on the expected benefits would be expensive. We believe DOD is taking the first step necessary to evaluate whether it has benefited from the increased use of commercial acquisition. We encourage such efforts and would expect that if DOD collects information on the nontraditional contractors it reaches using commercial acquisition and is still unable to evaluate whether significant benefits exist, it will recognize the need to collect additional information. DOD’s comments included an attachment reflecting the Air Force’s views on our draft report. We incorporated those views where appropriate. We will send copies of this report to the Secretary of Defense, the Secretary of the Air Force, appropriate congressional committees, and other interested parties. We will also make copies available to others on request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or by e-mail at schinasik@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of the report. Other key contributors to this report were David E. Cooper, Director, Penny Berrier Augustine, Assistant Director, Lily Chin, Keith Hudson, Julia Kennon, Andrew Redd, Don Springman, Marie Ahearn, and Robert Swierczek. To conduct our work, we reviewed federal acquisition and commercial acquisition regulations, as well as Office of the Secretary of Defense (OSD), Air Force, and Air Force Materiel Command (AFMC) guidance pertaining to commercial acquisition. We also reviewed OSD and Air Force commercial acquisition goals since 2001, as well as expected benefits and risks associated with commercial acquisition. We held discussions with representatives of OSD and the Air Force on various aspects of commercial acquisition, including goals, progress toward achieving goals, benefits expected, and associated risks. In addition, we met with Department of Defense (DOD) Inspector General officials to discuss audit report findings related to commercial acquisition. To understand the more recent commerciality determinations, we reviewed all 20 large (over $5 million) Air Force commercial contracts awarded for products in fiscal year 2004. We reviewed the contract files associated with these contracts at locations of AFMC, including (1) Wright-Patterson Air Force Base, Ohio; (2) Tinker Air Force Base, Oklahoma; (3) Robins Air Force Base, Georgia; and (4) Hanscom Air Force Base, Massachusetts. We also reviewed a commercial contract (including two major modifications) for a major acquisition program called the Joint Primary Aircraft Training System. We held discussions with contracting officers and procurement management officials associated with the selected contracts. To examine the extent to which Air Force commercial contracts were awarded to new market entrants, we used data from DOD’s procurement database (DD 350) for contract actions from fiscal year 1996 through fiscal year 2004, which was the last full year of data available at the time we performed our analysis. Query results were limited to contract actions greater than $5 million, as the Federal Acquisition Regulation (FAR) allowed actions below that threshold to employ simplified acquisition procedures.
To identify Air Force new market entrants, we took the contractors with contract actions in fiscal years 2003 and 2004 and determined whether they had received any previous DOD military department contracts from fiscal year 1996 through fiscal year 2002. We considered contractors who had not received contracts during this period new to DOD. We also examined Federal Supply/Service Class codes to determine the nature of work performed by Air Force contractors. To determine the extent to which the Air Force competed its commercial contracts, we reviewed data the Air Force provided summarizing its sole-source commercial acquisitions from fiscal year 2000 through fiscal year 2005. We defined “sole-source” as those actions either not competed or not available for competition, according to DOD classification codes. Again, the data were for acquisitions over $5 million. For our analysis of the use of commercial acquisition in Air Force major acquisition programs, we included the Major Defense Acquisition Programs listed on OSD’s Selected Acquisition Report summary tables for fiscal years 2001 through 2005, except programs designated RDT&E (Research, Development, Test, and Evaluation). We also included joint programs from GAO’s 2006 Assessment of Selected Major Weapon Programs for which the Air Force was mentioned as the lead buyer. We queried the DD 350 database to determine commercial and total contract obligations on these major acquisition programs for fiscal years 2004 through 2005. We conducted our review from July 2005 to September 2006 in accordance with generally accepted government auditing standards. The government expected to benefit from the use of commercial acquisition instead of noncommercial acquisition. Several of the expected benefits include the government being able to rely on the contractor’s quality assurance processes and warranties in lieu of government inspections, to decrease the amount of time it normally takes to award a contract, to employ a streamlined contract clause structure, and to use simplified acquisition procedures on high-dollar-amount contracts in certain circumstances. There are also several advantages to contractors of using commercial acquisition when doing business with the government. Generally, contractors are not required to submit cost or pricing data to the government; are not required to adhere to cost accounting standards on firm fixed-price contracts; are not required to disclose more technical data to the government than they would customarily disclose to the public; are able to propose more than one product that will meet the government’s need; and are able to submit existing product literature in lieu of unique technical proposals. To examine the extent to which Air Force commercial contracts were awarded to nontraditional contractors or new market entrants, we used data from DOD’s procurement database (DD 350) for contract actions from fiscal year 1996 through fiscal year 2004—the last full year of available data at the time of analysis. Query results were limited to Air Force contract actions greater than $5 million. We identified 98 contractors who received commercial Air Force awards in either fiscal year 2003 or fiscal year 2004. Forty-six of those 98 contractors also received large-dollar commercial awards in prior years back through fiscal year 2000 or were included on DOD Top 100 or Air Force Top 50 contractor lists. We considered them traditional contractors.
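Taken together with the second-stage check described in the paragraph that follows, this screen reduces to a simple set-membership test. The sketch below is illustrative only; the set names are hypothetical stand-ins for the DD 350 query results and the published contractor lists, not an actual database schema.

    # Illustrative sketch of the two-stage contractor screen described in this
    # appendix; the inputs are hypothetical stand-ins for DD 350 query results.
    from typing import Set, Tuple

    def classify_contractors(
            fy0304_awardees: Set[str],       # commercial awards over $5 million, FY2003-04
            prior_large_awardees: Set[str],  # large-dollar commercial awards, FY2000-02
            top_lists: Set[str],             # DOD Top 100 and Air Force Top 50 lists
            prior_any_awards: Set[str],      # any military department award above $25,000, FY1996-2002
    ) -> Tuple[Set[str], Set[str]]:
        """Split FY2003-04 commercial awardees into traditional and new-to-DOD."""
        traditional = fy0304_awardees & (prior_large_awardees | top_lists | prior_any_awards)
        new_to_dod = fy0304_awardees - traditional
        return traditional, new_to_dod

Applied to the 98 awardees, a screen of this form reproduces the split reported in this appendix: 87 traditional contractors and 11 contractors new to DOD.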
For the remaining 52 contractors who did not receive large-dollar awards during that period (and who were not on DOD Top 100 or Air Force Top 50 contractor lists), we used DOD’s DD 350 procurement database to determine whether they had performed any contracts above $25,000 for the Army, Navy, or Air Force military departments from fiscal year 1996 through fiscal year 2002. Of the 52 contractors, 41 had received military department awards during this period and were therefore considered traditional contractors. We considered the 11 contractors who did not perform military department contracts during this period to be new to DOD. Table 1 lists the 87 total traditional contractors and the 11 new contractors according to our analysis. We reviewed 20 larger Air Force commercial contracts awarded in fiscal year 2004. We reviewed the contract files associated with these contracts at locations of the Air Force Materiel Command, including (1) Wright-Patterson Air Force Base, Ohio; (2) Tinker Air Force Base, Oklahoma; (3) Robins Air Force Base, Georgia; and (4) Hanscom Air Force Base, Massachusetts. We held discussions with contracting officers and procurement management officials associated with most of the selected contracts. In three instances, parts for the C-5 military transport aircraft were procured under a system in which contractors produced a prototype or unique first article because these replacement parts did not already exist. These first articles were then subject to successful testing before the contractor was given approval to produce the remaining articles. As part of each contract, the government paid for the manufacturers to construct the unique first article and the various machine tooling they needed to produce the articles. In two other cases, there were internal Air Force disagreements regarding determinations of commerciality. The items in question were C-130E and C-130H aircraft procured by foreign governments from a sole-source contractor, with the U.S. government (via the Air Force) acting as an intermediary. Overall, 9.5 percent ($2.6 billion) of all Air Force contract dollars for major acquisitions were obligated under commercial acquisition from fiscal year 2004 through fiscal year 2005. We considered programs listed on OSD’s Selected Acquisition Report summary tables from fiscal year 2001 through fiscal year 2005 (except research and development programs) to be major acquisition programs. We also included joint programs from GAO’s 2006 Defense Acquisitions: Assessments of Selected Major Weapon Programs (GAO-06-391) for which the Air Force was listed as the lead buyer. We found three major acquisitions with Air Force involvement for which commercial actions constituted at least 75 percent of contract dollars obligated, and these acquisitions are shaded in table 3. Excluding these three acquisitions, commercial expenditures for the remaining 25 major acquisition programs with Air Force involvement constituted less than 1 percent of total program dollars spent.
1972–Commission on Government Procurement—See Report of the Commission on Government Procurement, Vol. 3, Pt. D, “Acquisition of Commercial Products” (Dec. 1972).
1984–Competition in Contracting Act of 1984—Pub. L. No. 98-369, Div. B, Title VII.
1986–President’s Blue Ribbon Commission on Defense Management (Packard Commission)—A Quest for Excellence: Final Report to the President by the President’s Blue Ribbon Commission on Defense Management (June 1986), 60-64.
1986–National Defense Authorization Act for Fiscal Year 1987—Pub. L. No. 99-661, Div. A, Title IV, Sec. 907(a) (1986).
1993–Advisory Panel on Streamlining and Codifying Acquisition Laws (Sec. 800 Panel)—Established pursuant to Section 800 of the National Defense Authorization Act for Fiscal Year 1991, Pub. L. No. 101-510 (1990); Streamlining Defense Acquisition Laws: Report of the Acquisition Law Advisory Panel to the U.S. Congress, Intro. I-9 (1993).
1994–Federal Acquisition Streamlining Act—Pub. L. No. 103-355, Section 1202 and Title VIII (1994).
1996–Clinger-Cohen Act of 1996—Pub. L. No. 104-106, Div. D (1996), formerly the Federal Acquisition Reform Act of 1996 and renamed in the Treasury, Postal Service and General Government Appropriations Act, 1997, contained in the Omnibus Consolidated Appropriations Act, 1997, Pub. L. No. 104-208, Section 808 (1996).
2003–Services Acquisition Reform Act of 2003—Pub. L. No. 108-136, Title XIV, Sections 1431 and 1432 (2003).
The Department of Defense (DOD) has been urged by commissions, legislation, and a panel to make increased use of commercial acquisition to achieve certain benefits. To help ensure the increased use of commercial acquisition, the Office of the Secretary of Defense (OSD) established and the Air Force implemented two commercial acquisition goals to be achieved by the end of fiscal year 2005. In setting these goals, OSD expected that the increased use of commercial acquisition would provide DOD with greater access to commercial markets (products and service types) with increased competition, better prices, and new market entrants and/or technologies. The committee asked GAO to identify (1) the extent to which the Air Force has increased its use of commercial acquisition to obtain expected benefits and (2) the risks that are associated with this use. From 2001 to 2005, the Air Force increased spending using commercial acquisition from $4.8 billion to over $8 billion in an effort to provide greater access to commercial markets to increase competition, obtain better prices, and attract new market entrants (nontraditional contractors) and/or technologies. Even though the Air Force has significantly increased this spending, it has not measured the extent to which this increased use resulted in the benefits that were expected. For example, our analysis shows that for at least one of the expected benefits, attracting new market entrants, the expected benefit has not materialized. For the most part, traditional defense contractors received these contracts. Government contracting officials face risks in using commercial acquisition. For example, improperly classifying an acquisition as a commercial acquisition can leave the Air Force vulnerable to accepting prices that may not be the best value for the department. A high-ranking DOD acquisition official testified that he is concerned about items and services being identified as commercial that are not sold in an existing marketplace because under these circumstances, the government lacks assurances that the price is reasonable. At times, Air Force officials have disagreed about the classification of some acquisitions as commercial. The Air Force's use of commercial acquisition has also been accompanied by an increased amount of dollars being awarded for sole-source contracts. Despite DOD policy to avoid sole-source commercial acquisitions because of increased risk, sole-source commercial acquisition dollars awarded by the Air Force have more than doubled from 2000 to 2005. Further, of the 20 larger Air Force commercial product awards in 2004, half were awarded as sole-source.
Advances in information technology and the explosion in computer interconnectivity have had far-reaching effects, including the transformation from a paper-based to an electronic business environment and the capability for rapid communication through e-mail. Although these developments have led to improvements in speed and productivity, they also pose challenges, including the need to manage those e-mail messages that may be federal records. Under the Federal Records Act, NARA is given general oversight responsibilities for records management as well as general responsibilities for archiving. This includes the preservation in the National Archives of the United States of permanent records documenting the activities of the government. NARA thus oversees agency management of temporary and permanent records used in everyday operations and ultimately takes control of permanent agency records judged to be of historic value. (Of the total number of federal records, less than 3 percent are designated permanent.) In particular, NARA is responsible for issuing records management guidance; working with agencies to implement effective controls over the creation, maintenance, and use of records in the conduct of agency business; providing oversight of agencies’ records management programs; approving the disposition (destruction or preservation) of records; and providing storage facilities for agency records. The act also gives NARA the responsibility for conducting inspections or surveys of agency records and records management programs. The act requires each federal agency to make and preserve records that (1) document the organization, functions, policies, decisions, procedures, and essential transactions of the agency and (2) provide the information necessary to protect the legal and financial rights of the government and of persons directly affected by the agency’s activities. These records, which include e-mail records, must be effectively managed. As used in the act, “records” includes all books, papers, maps, photographs, machine readable materials, or other documentary materials, regardless of physical form or characteristics, made or received by an agency of the United States Government under Federal law or in connection with the transaction of public business and preserved or appropriate for preservation by that agency or its legitimate successor as evidence of the organization, functions, policies, decisions, procedures, operations, or other activities of the Government or because of the informational value of data in them. Library and museum material made or acquired and preserved solely for reference or exhibition purposes, extra copies of documents preserved only for convenience of reference, and stocks of publications and of processed documents are not included. As the definition shows, although government documentary materials (including e-mails) may be “records” in this sense, many are not. For example, not all e-mails document government “organization, functions, policies, decisions, procedures, operations, or other activities” or contain data of informational value.
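As a rough illustration of how the statutory tests above can be applied, the checklist below encodes them directly. This is a simplification offered for clarity, not an official determination aid; the attribute names are invented, and in practice record status calls for human judgment, a point developed later in this chapter.

    # Rough illustration of the statutory "record" tests described above.
    # Attribute names are invented; real determinations require human judgment.
    from dataclasses import dataclass

    @dataclass
    class DocumentaryMaterial:
        made_or_received_for_public_business: bool  # under federal law or for public business
        evidences_agency_activity: bool  # organization, functions, policies, decisions, etc.
        has_informational_value: bool    # contains data of informational value
        reference_or_exhibit_only: bool  # library/museum material kept solely for reference
        convenience_copy: bool           # extra copy kept only for convenience of reference
        publication_stock: bool          # stocks of publications or processed documents

    def is_federal_record(m: DocumentaryMaterial) -> bool:
        if m.reference_or_exhibit_only or m.convenience_copy or m.publication_stock:
            return False  # the definition's explicit exclusions
        return m.made_or_received_for_public_business and (
            m.evidences_agency_activity or m.has_informational_value)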
According to NARA, the activities of an agency records management program include, briefly, the following: identifying records and sources of records; developing a file plan for organizing records, including identifying the classes of records that the agency produces; developing records schedules—that is, proposing for each type of content where and how long records need to be retained and their final disposition (destruction or preservation) based on time, event, or a combination of time and event; and providing records management guidance to agency staff, including agency-specific recordkeeping practices that establish what records need to be created in order to conduct agency business. Developing records schedules is a cornerstone of the records management process. Scheduling involves not individual documents or file folders, but rather broad categories of records. Traditionally, these were record series: that is, “records arranged according to a filing system or kept together because they relate to a particular subject or function, result from the same activity, document a specific kind of transaction, take a particular physical form, or have some other relationship arising out of their creation, receipt, or use, such as restrictions on access and use.” More recently, NARA introduced flexible scheduling, which allows so-called “big bucket” or large aggregation schedules for temporary and permanent records. Under this approach, the schedule applies not necessarily to records series, but to all records relating to a work process, group of work processes, or a broad program area to which the same retention time would be applied. To develop records schedules, agencies identify and inventory records, and NARA’s appraisal archivists work with agencies to appraise their value (which includes informational, evidential, and historical value), determine whether they are temporary or permanent, and determine how long the temporary records should be kept. NARA then approves the necessary records schedules. No record may be destroyed unless it has been scheduled, and for temporary records the schedule is of critical importance because it provides the authority to dispose of the record after a specified time period. Records schedules may be of two kinds: an agency-specific schedule or a general records schedule, which covers records common to several or all agencies. According to NARA, general records schedules cover about a third of all federal records. For the other two-thirds, NARA and the agencies must agree upon specific records schedules. Once a schedule has been approved, the agency is to issue it as a management directive, train employees in its use, apply its provisions to temporary and permanent records, and ensure proper implementation. The Federal Records Act covers documentary material regardless of physical form or media, but until the advent of computers, records management and archiving had been largely focused on handling paper documents. As information is increasingly created and stored electronically, records management has had to take into account the creation of records in a variety of electronic formats, including e-mail messages. NARA has promulgated regulations at 36 C.F.R. Part 1234 that provide guidance to agencies about the management of electronic records. This guidance is supplemented by the issuance of periodic NARA bulletins and other forms of guidance to agencies.
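To make the scheduling concepts above concrete (temporary versus permanent disposition, and retention clocks driven by time, an event, or both), the sketch below models a single schedule item. The structure and field names are invented for illustration and do not reflect any actual NARA schema.

    # Illustrative model of a records schedule item, as described above.
    # The structure and field names are invented, not an actual NARA schema.
    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    def add_years(d: date, years: int) -> date:
        try:
            return d.replace(year=d.year + years)
        except ValueError:  # February 29 in a non-leap target year
            return d.replace(year=d.year + years, day=28)

    @dataclass
    class ScheduleItem:
        category: str        # record series or "big bucket" program area
        disposition: str     # "temporary" (destroy) or "permanent" (preserve)
        retention_years: Optional[int] = None  # time-based component, if any
        trigger_event: Optional[str] = None    # event-based component, e.g., "case closed"

        def earliest_disposal(self, clock_start: date) -> Optional[date]:
            """Earliest disposal date for a temporary record; None if permanent.
            clock_start is the record cutoff date or the trigger event date."""
            if self.disposition == "permanent" or self.retention_years is None:
                return None
            return add_years(clock_start, self.retention_years)

Under a “big bucket” schedule, the same structure would simply apply to a broader category covering a whole work process or program area.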
To ensure that the management of agency electronic records is consistent with the Federal Records Act, NARA requires each agency to maintain an inventory of all agency information systems that identifies basic facts about each system and the information it contains, and it requires that agencies schedule the electronic records in their systems. Like other records, electronic records must be scheduled either under agency-specific schedules or pursuant to a general records schedule. According to the regulation, agencies are required to establish policies and procedures that provide for appropriate retention and disposition of electronic records. In addition to including general provisions on electronic records, agency procedures must specifically address e-mail records: that is, the creation, maintenance and use, and disposition of federal records created by individuals using electronic mail systems. The regulation defines such a record as “a document created or received on an electronic mail system including brief notes, more formal or substantive narrative documents, and any attachments, such as word processing and other electronic documents, which may be transmitted with the message.” The regulation requires e-mail records to be managed as are other potential federal records with regard to adequacy of documentation, recordkeeping requirements, agency records management responsibilities, and records disposition. This entails, in particular, ensuring that staff are aware that e-mails are potential records and training them in identifying which e-mails are records. Specific requirements for e-mail records include, for example, that for each e-mail record, agencies must preserve transmission data, including names of sender and addressees and message date, because these provide context that may be needed for the message to be understood. Further, except for a limited category of “transitory” e-mail records, agencies are not permitted to store the recordkeeping copy of e-mail records in the e-mail system, unless that system has all the features of a recordkeeping system; table 1 lists these required features. If agency e-mail systems do not have the required recordkeeping features, either agencies must copy e-mail records to a separate electronic recordkeeping system, or they must print e-mail messages (including associated transmission information that is needed for purposes of context) and file the copies in traditional paper recordkeeping files. NARA’s guidance allows agencies to use either paper or electronic recordkeeping systems for record copies of e-mail messages, depending on the agencies’ business needs. Each of the required features listed in table 1 is important because it helps ensure that e-mail records remain both accessible and usable during their useful lives. For example, it is essential to be able to classify records according to their business purpose so that they can be retrieved in case of mission need. Further, if records cannot be retrieved easily and quickly, or they are not retained in a usable format, they do not serve the mission or historical purpose that led to their being preserved. In many cases, e-mail systems do not have the features in the table. If e-mail records are retained in such systems and not in recordkeeping systems, they may be harder to find and use, as well as being at increased risk of loss from inadvertent or automatic deletion.
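The core of what copying an e-mail record into a separate recordkeeping system involves (preserving the message together with its transmission data under a records category) can be sketched briefly. The example below uses only Python's standard library; the list-based store is a stand-in for a real recordkeeping system, and the category label is assumed to come from the agency's file plan.

    # Minimal sketch of copying an e-mail record, with its transmission data,
    # out of the mail system into a recordkeeping store. The list-based store
    # is a stand-in for a real recordkeeping system.
    import email
    import json
    from email import policy

    def capture_record(raw_message: bytes, category: str, store: list) -> None:
        """File one e-mail record under a records category, keeping context data."""
        msg = email.message_from_bytes(raw_message, policy=policy.default)
        body = msg.get_body(preferencelist=("plain",))
        record = {
            "category": category,               # ties the record to its schedule/file plan
            "sender": str(msg.get("From", "")),  # required transmission data
            "addressees": str(msg.get("To", "")),
            "date": str(msg.get("Date", "")),
            "subject": str(msg.get("Subject", "")),
            "body": body.get_content() if body is not None else "",
            "attachments": [p.get_filename() for p in msg.iter_attachments()],
        }
        store.append(json.dumps(record))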
Agencies must also have procedures that specifically address the destruction of e-mail records. In particular, e-mail records may not be deleted or otherwise disposed of without prior authority from NARA. (Recall that not all e-mail is record material. Agencies may destroy nonrecord e-mail.) Agencies can dispose of e-mail records in three situations: First, agencies are authorized to dispose of e-mail records with very short-term (transitory) value that are stored in e-mail systems at the end of their retention periods (as mentioned earlier). Second, for other records in e-mail systems, NARA authorizes agencies to delete the version in the e-mail system after the record has been preserved in a recordkeeping system along with all appropriate transmission data. Finally, agencies are authorized to dispose of e-mail records in the recordkeeping system in accordance with the appropriate records schedule. If the records in the recordkeeping system are not scheduled, the agency must schedule them before they can be disposed of. Because of its nature, e-mail can present particular challenges to records management. First, the information contained in e-mail records is not uniform. This is in contrast to many information systems, particularly those in computer centers engaged in large-scale data processing, which contain structured data that generally can be categorized into a relatively limited set of logical groupings. The information in e-mail systems, on the other hand, is not structured in this way: it may concern any subject or function and document various types of transactions. As a result, in many cases, decisions on which e-mail messages are records must be made individually. The kinds of considerations that may go into determining the record status of an e-mail message are illustrated in figure 1. As shown by the decision tree in the figure (developed at Sandia National Laboratories), agency staff have to be aware of the defining features of a record in order to make these decisions. Second, the transmission data associated with an e-mail record—including information about the senders and receivers of messages, the date the message was sent, and any attachments to the messages—provide context that may be crucial to understanding the message. Thus, as NARA’s e-mail regulations and guidance reflect, transmission data must be retained, and attachments are defined as part of the e-mail record. Third, a given message may be part of an exchange of messages between two or more people within or outside an agency, or even of a string (sometimes branching) of many messages sent and received on a given topic. In such cases, agency staff need to decide which message or messages should be considered records and who is responsible for storing them in a recordkeeping system. Finally, the large number of federal e-mail users and high volume of e-mails increase the management challenge. According to NARA, the use of e-mail results in more records being created than in the past, as it often replaces phone conversations and face-to-face meetings that might not have been otherwise recorded. E-mail may also replace other types of written communications, such as letters and memorandums. Whether agencies use paper-based or electronic recordkeeping systems, individual users generally make decisions (based on considerations such as those in the figure) on what messages they judge to be records. In paper-based systems, users then print and file e-mail records—with appropriate transmission data—in the appropriate file structure (generally corresponding to record series or schedule).
In electronic systems, the particular steps to file the record would vary depending on the type of system and its degree of integration with the agency’s other information systems. Although details vary, an electronic recordkeeping system, like a paper-based system, requires that a filing structure has been established by which records can be associated with the appropriate series. The advantages of using a paper-based system for record copies of e-mails are that this approach takes advantage of the recordkeeping system already in place for the agency’s paper files and requires little or no technological investment. The disadvantages are that a paper-based approach depends on manual processes and requires electronic material to be converted to paper, potentially losing some features of the electronic original; these processes may be especially burdensome if the volume of e-mail records is large. The advantage of using an electronic recordkeeping system, besides avoiding the need to manage paper, is that it can be designed to capture certain required data (such as e-mail transmission data) automatically. Electronic recordkeeping systems also make searches for records on particular topics much more efficient. In addition, electronic systems that are integrated with other applications may have features that make it easier for the user to identify records and that potentially could provide automatic or partially automatic classification functions. However, as with other information technology investments, acquiring an electronic recordkeeping system requires careful planning and analysis of agency requirements and business processes; in addition, electronic recordkeeping raises the issue of maintaining electronic information in an accessible form throughout its useful life. Finally, like paper-based systems, electronic recordkeeping systems must be used properly by employees to be effective. These challenges have been recognized by NARA and the records management community in numerous studies and articles. A 2001 survey of federal recordkeeping practices conducted by a contractor—SRA International—for NARA concluded, among other things, that managing e-mail was a major records management problem and that the quality of recordkeeping varied considerably across agencies. The authors also commented on features of agency missions that lead to strong recordkeeping practices: “When agencies have a strong business need for good recordkeeping, such as the threat of litigation or an agency mission that revolves around maintaining ‘case’ files, then recordkeeping practices tend to be relatively strong with regard to the records involved.” In addition, the study concluded that for many federal employees, the concept of a “record” and what should be scheduled and preserved was not clear. A 2005 survey of federal agencies’ policy and practices for electronic records management, funded in part by NARA, concluded that procedures for managing e-mail were underdeveloped. The study found that most of the surveyed offices had not developed electronic recordkeeping systems, but were instead maintaining recordkeeping copies of e-mail and other electronic documents in paper format. However, all of the offices also maintained electronic records (frequently electronic duplicates of paper records).
According to the study team, agencies did not establish electronic recordkeeping systems partly because of a lack of support and resources, and the complexity of implementing such systems increased with the size of the agency. As a result, organizations were maintaining unsynchronized parallel paper and electronic systems, resulting in extra work, confusion regarding which is the recordkeeping copy, and retention of many records beyond their disposition date. The study team also concluded that disposition of electronic records was too cumbersome and uncertain. According to the report, employees delete electronic records, such as e-mails, one at a time, a cumbersome process that may result in retention of too many records for too long or premature disposition that is inconsistent with approved retention schedules. (This is in contrast to records disposition in a recordkeeping system, in which categories of temporary records may be disposed of at the end of their retention periods.) The report also discussed NARA’s role in promoting agencies’ adoption of electronic recordkeeping systems. Commenting on these points, NARA expressed the view that for agencies that maintain paper as the record copy, the early destruction of electronic copies was not a significant problem because such copies generally have very short-term retentions, and no information is lost. It considered that the overly long retention of electronic copies did raise concerns regarding legal discovery and compliance with requests under the Freedom of Information Act or the Privacy Act. In these circumstances, agencies are required to search for all information, not just information in recordkeeping systems; thus, maintaining large volumes of nonrecord material increases this burden. Most recently, in 2007, a NARA study team examined the experiences of five federal agencies (including NARA itself) with electronic records management applications, with a particular emphasis on how these organizations used these applications to manage e-mail. The purpose of the study was to gather information on the strategies that organizations are using that may be useful to others. Among the major conclusions from the survey was that implementing an electronic records management application requires considerable effort in planning, testing, and implementation, and that although the functionality of the software product itself is important, other factors are also crucial, such as agency culture and the quality of the records management system in place. With regard to e-mail in particular, the survey concluded that for some agencies, the volume of e-mail messages created and received may be too overwhelming to be managed at the desktop by thousands of employees across many sites using a records management application alone, and that e-mail messages can constitute the most voluminous type of record that is filed into these applications. Finally, the study recommended further examination of the technologies being used to manage e-mail and of what federal agencies are doing with their record e-mail messages. NARA is planning to perform such a study in 2008. According to NARA, the study will take a close look at how selected agencies are implementing electronic recordkeeping for their program records, including those e-mail messages that need to be retained and managed as federal records. The study will look at electronic recordkeeping projects that have a records management application in place as well as other solutions that provide recordkeeping functionality.
In both cases, NARA plans to explore how e-mail messages in particular are identified and managed as records. According to NARA officials, they have begun planning for the study and identifying agencies to be included; they expect to have the report completed by the end of September 2008. Such a study could provide useful information to help NARA develop additional guidance to agencies looking for electronic solutions for records management of e-mail and other electronic records. As the earlier studies suggest, implementing such solutions is not a simple or easy process. Although NARA has referred to the decision to move to electronic recordkeeping as inevitable, it emphasizes that the timing of the decision depends on an agency’s specific mission and circumstances. For the last several years, NARA’s records management program has increasingly reflected the importance of electronic records and recordkeeping. For example, NARA has undertaken a redesign of its records management activities, including (among other things) the following three activities, which are significant for management of electronic records, including e-mail:
NARA established flexible scheduling (the so-called “big bucket” approach described earlier), under which agencies can schedule records at any level of aggregation that meets their business needs. By simplifying disposition instructions, “big bucket” schedules have advantages for electronic records management; filing e-mail records under a “big bucket” system, for example, is simplified because users can be presented with fewer filing categories.
NARA developed e-mail regulations that eliminated the previous requirement to file transitory e-mail dealing with routine matters in a formal agency recordkeeping system. According to NARA, this change would allow agencies to focus their resources on managing e-mail that is important for long-term documentation of agency business. The change was reflected in a revision to General Records Schedule 23 that explicitly included very short-term temporary e-mail messages. The final rule became effective on March 23, 2006.
NARA developed regulations and guidance to make retention schedules media neutral. According to NARA, its objective was to eliminate routine rescheduling work so that agencies and NARA could focus their resources on high records management priorities. Under its revised regulations, in effect as of December 2007, new records schedules would be media neutral unless otherwise specified. At the same time, NARA revised General Records Schedule 20 (which provides disposition authorities for electronic records) to expand agencies’ authority to apply previously approved schedules to electronic records and to dispose of hard copy records that have been converted to an electronic format, among other things.
In July 1999, we reported that NARA and federal agencies were facing the substantial challenge of managing and preserving electronic records in an era of rapidly changing technology. In that report, we stated that in addition to handling the burgeoning volume of electronic records, NARA and the agencies would have to address several hardware and software issues to ensure that electronic records were properly created, maintained, secured, and retrievable in the future. We also noted that NARA did not have governmentwide data on the records management capabilities and programs of all federal agencies.
As a result, we recommended that NARA conduct a governmentwide survey of agencies’ electronic records management programs and use the information as input to its efforts to reengineer its business processes. NARA subsequently undertook efforts to assess governmentwide records management practices and study the redesign of its business processes. As mentioned earlier, in 2001 NARA completed an assessment of governmentwide records management practices, as we had recommended. NARA’s assessment of the federal recordkeeping environment concluded that although agencies were creating and maintaining records appropriately, most electronic records remained unscheduled, and records of historical value were not being identified and provided to NARA for archiving. In 2002, we reported that factors contributing to the problems of managing and preserving electronic records included records management guidance that was inadequate in the current technological environment, the low priority often given to records management programs, and the lack of technology tools to manage electronic records. In addition, NARA did not perform systematic inspections of agency records management, so that it did not have comprehensive information on implementation issues and areas where guidance needed strengthening. Although NARA had plans to improve its guidance and address technology issues, these did not address the low priority generally given to records management programs nor the inspection issue. With regard to inspections, we noted that in 2000, NARA had replaced agency evaluations (inspections) with a new approach—targeted assistance—because it considered that its previous approach to evaluations had been flawed: it reached only a few agencies, it was often perceived negatively, and it resulted in a list of records management problems that agencies then had to resolve on their own. Under targeted assistance, NARA entered into partnerships with federal agencies to provide them with guidance, assistance, or training in any area of records management. Despite the possible benefits of such assistance to the targeted agencies, however, we concluded that it was not a substitute for systematic inspections. Only agencies requesting assistance were evaluated, and the scope and focus of the assistance were determined not by NARA but by the requesting agency. Thus, it did not provide systematic and comprehensive information for assessing progress over time. To address the low priority generally given to records management programs, we recommended that NARA develop a strategy for raising agency senior management awareness of and commitment to records management. To address the inspection issue, we recommended that NARA develop a strategy for conducting systematic inspections of agency records management programs to (1) periodically assess agency progress in improving records management programs and (2) evaluate the efficacy of NARA’s governmentwide guidance. In response to our recommendations, NARA devised a strategy for raising awareness among senior agency management of the importance of good federal records management, as well as a comprehensive approach to improving agency records management that included inspections and identification of risks and priorities. NARA also took steps to improve federal records management programs by updating its guidance to reflect new types of electronic records. 
In 2003, we testified that the plan for improving agency records management did not include provisions for using inspections to evaluate the efficacy of its governmentwide guidance, and an implementation plan for the approach had not yet been established. NARA later addressed these shortcomings by developing an implementation plan that included using agency inspections to evaluate the efficacy of its guidance, with such inspections to be undertaken based on a risk-based model, government studies, or media reports. Such an approach, if appropriately implemented, had the potential to help avoid the weaknesses in records management programs that led to the scheduling and disposition problems that we and NARA had described in earlier work. To fulfill its responsibility under the Federal Records Act for oversight of agency records management programs, NARA planned to conduct activities including inspections, studies, and reporting. However, despite NARA's plans, in recent years its oversight activities have been primarily limited to performing studies. Although it has performed or sponsored six records management studies since 2003, it has not conducted any inspections since 2000. In addition, although NARA's reporting to the Congress and OMB has generally described progress in improving records management at individual agencies and provided an overview of some of its major records management activities, it has not consistently provided evaluations of responses by federal agencies to its recommendations, as required, or details on records management problems or recommended practices that were discovered as a result of inspections, studies, or targeted assistance projects. Without a consistent oversight program that provides it with a governmentwide perspective, NARA has limited assurance that agencies are appropriately managing the records in their custody, thus increasing the risk that important records will be lost. Oversight is a key activity in governance: it addresses whether organizations are carrying out their responsibilities and serves to detect shortcomings. Our reports emphasize the importance of effective oversight of government operations by individual agency management, by agencies having governmentwide oversight responsibilities, and by the Congress. Various functions and activities may be part of oversight, including monitoring, evaluating, and reporting on the performance of organizations and their management and holding them accountable for results. The Federal Records Act gave NARA responsibility for oversight of agency records management programs, making it responsible for, among other functions, conducting inspections or surveys of agencies' records and records management programs and practices; conducting records management studies; and reporting the results of these activities to the Congress and OMB. In particular, the reports are to include evaluations of responses by agencies to any recommendations resulting from inspections or studies that NARA conducts and, to the extent practicable, estimates of costs to the government if agencies do not implement such recommendations. According to NARA, it planned to carry out its oversight responsibilities using inspections, studies, and reporting.
Specifically, in 2003, NARA stated that it would (1) perform inspections of agency records and records management; (2) conduct studies that focus on cross-government issues, analyze and identify best practices, and use the results to develop governmentwide recommendations and guidance; and (3) report to the Congress and OMB on problems and recommended practices discovered as part of inspections, studies, and targeted assistance projects. Although inspections were included in NARA's oversight plans in 2003, NARA has not conducted any since 2000. NARA laid out a strategy for performing inspections and studies in 2003 as part of its records management redesign efforts. According to this strategy, NARA anticipated undertaking inspections only under what it termed exceptional circumstances: that is, if (1) agencies have high-level records management problems that put at risk federal records that protect rights, assure accountability, or document the national experience, and (2) agencies refuse targeted assistance from NARA and fail to mitigate or otherwise effectively deal with such risks. In other words, NARA considered inspections its tool of last resort, to be used when the risk to records was deemed high and other tools (such as targeted assistance and training) failed to mitigate that risk. Under this strategy, NARA planned to determine when to undertake inspections based on its risk-based resource allocation model (or when it learned through other means of a clear and egregious records management problem in an agency or line of business). Using this model, developed in 2003, NARA's Resource Allocation Project performed a governmentwide assessment in 2004 of high-priority federal records and records programs. After reviewing program areas and work processes of the government (as opposed to organizational units), the project identified the business processes, subfunctions, and agency activities that were likely to generate the majority of high-priority records. Based on input and assessments from NARA staff with expertise in the subfunctions and associated agencies, the project then rated the subfunctions according to three criteria for establishing resource priorities: the risk to records (based on such factors as whether the subfunctions or associated agencies had experienced major scheduling issues or known problems, such as allegations of unauthorized destruction of records), the level of significance of the records to rights and accountability, and the likelihood that the subfunction would generate permanent records (and, if so, their volume and significance). According to the final report on the project, this assessment showed that the risks to records were being addressed and managed by the Archives' own records management activities and those of the agencies. As a result, the Resource Allocation Project did not lead to the identification of records management risks that met the new inspection criteria. Instead, NARA applied its resources to other activities that it considered more effective and less resource-intensive than the inspections it undertook in the past. These include regular contacts between appraisal archivists and agencies, updated guidance information, and training. However, the Resource Allocation Project was primarily based on NARA's in-house information sources and expertise.
Although this information and expertise may be considerable, and collecting and assessing them is potentially valuable, they are not a substitute for examinations of agency programs, surveys of practices, agency self-assessments, or other external sources of information. Further, although the final report on the 2004 project included important lessons learned for improving future assessments, NARA did not set up a process for continuing the effort and applying the lessons learned to updating the assessment or validating its results. Officials had also stated that targeted assistance was a tool that NARA would use in preference to inspections to solve urgent records management problems and that the results of the Resource Allocation Project were also to be used in determining where to use this tool. However, NARA's use of targeted assistance has declined significantly over the past 5 years. (NARA reported that in 2002, 77 projects were opened and 76 completed; in contrast, 4 were opened and none completed in 2007.) Officials ascribed the reduced emphasis on targeted assistance projects to various factors, including competing demands (such as work on the development of its advanced electronic records archive and on helping agencies to schedule electronic records), the difficulty of getting agencies to devote resources to the projects, and the removal of numerical targets for targeted assistance projects, which occurred when NARA revised performance metrics to emphasize results rather than quantity. According to NARA, it also works with agencies to address critical records management issues outside formal targeted assistance arrangements. In addition, it identifies and investigates allegations of unauthorized destruction of federal records. Thus, neither inspections nor targeted assistance has made a significant contribution to NARA's oversight of agency records management. Without a more comprehensive method of evaluating agency records management programs, NARA lacks assurance that agencies are effectively managing records throughout their life cycle. NARA has performed records management studies in accordance with its 2003 plan. According to the plan, it was to conduct records management studies to focus on cross-government issues, to identify and analyze best practices, and to develop governmentwide recommendations and guidance. In addition, NARA planned to undertake records management studies when it believed that an agency or agencies in a specific line of business were using records management practices that could benefit the rest of that line of business or the federal government as a whole. Since developing its 2003 plan, NARA has conducted or sponsored six records management studies (see table 2). Most of these studies were focused on records management issues with wide application. For example, two were related to helping NARA improve its guidance on particular types of records—health and safety records, and research and development (R&D) records. Another two were limited in scope to components of a single agency, but they addressed issues with potentially broad application and included conclusions regarding factors that needed to be considered in the appraisal of given types of records.
Under the Federal Records Act, NARA is responsible for reporting the results of its records management activities to the Congress and OMB, including evaluations of responses by agencies to any recommendations resulting from its inspections or studies and (where practicable) estimates of costs if its recommendations are not implemented. Further, NARA's plan for carrying out its oversight responsibilities states that it will report to the Congress and OMB on problems and recommended practices discovered as part of inspections, studies, and targeted assistance projects. According to NARA, it fulfills its statutory reporting requirement through annual Performance and Accountability Reports, which include sections on "Federal Records Management Evaluations." However, although NARA has issued reports on its records management studies, the Federal Records Management Evaluations sections of the Performance and Accountability Reports have not included the studies' results or evaluations of responses by agencies to its recommendations. Instead, the reports have generally provided an overview of NARA's major records management activities and described noteworthy records management progress at individual agencies. For example, the report for fiscal year 2007 provided statistics on the appraisal and scheduling of electronic records systems and listed agencies that had scheduled electronic records or transferred permanent electronic records to NARA during the fiscal year. Elsewhere in the reports, NARA mentioned four of the six records management studies as part of its reporting on records management goals. However, it included few details on the results of these studies regarding the records management problems or recommended practices that they uncovered. For example, in the fiscal year 2005 Performance and Accountability Report, NARA reported that it had completed a January 2005 study on Air Force Headquarters offices (see table 2), but it did not discuss the results, and later reports did not discuss actions taken in response to its recommendations. Similarly, the fiscal year 2007 Performance and Accountability Report did not describe any actions that the Department of Energy had taken in response to an August 2006 study. Also, in 2007, NARA stopped reporting on its targeted assistance projects. In prior years, its Performance and Accountability Reports generally provided statistics on targeted assistance projects and described their general goals, although the reports did not generally discuss problems or recommended practices resulting from them. In the fiscal year 2007 report, NARA stated that the strategies described in its Strategic Directions, including targeted assistance, had become part of its standard business practices and would no longer be highlighted individually. However, as mentioned earlier, the number of targeted assistance projects had declined significantly by that time. The Director and senior officials from NARA's Modern Records Program agreed that the annual reports did not specify the problems and recommended practices discovered as part of inspections, studies, and targeted assistance projects. According to these officials, the annual Performance and Accountability Reports have been focused on positive news, and NARA has struggled to develop an objective way to report negative news about agencies' records management; they attributed this difficulty to the agency's conservatism.
NARA's limited use of oversight tools and incomplete reporting on the specific results of its oversight activities can be attributed to an organizational preference for using persuasion and cooperation when working with agencies. This preferred approach is consistent with NARA's reasons (as we noted in 2003) for replacing agency evaluations (inspections) with targeted assistance: among these reasons was that inspections were perceived negatively by agencies. NARA officials have said that they prefer to use "carrots, rather than sticks." NARA officials added that full-scale inspections were resource intensive and took several years to complete, and that agencies took years to address NARA's recommendations. Although, as described earlier, NARA regularly works with agencies on scheduling and disposition of records (activities related to the end of the records life cycle), officials agreed that these activities provide limited insight into records management at earlier stages—that is, creation, maintenance, and use. The officials also agreed that their work with agencies on scheduling records does not fulfill the Archivist's responsibility under the Federal Records Act to conduct inspections or surveys of agency records and records management programs and practices. Further, by giving the Archivist the responsibility to report to the Congress and OMB on records management issues, the Federal Records Act provides NARA with a tool for holding agencies accountable, a key aspect of oversight. However, NARA has been reluctant to use this tool, limiting its ability to determine whether federal agencies are carrying out their records management responsibilities. Without more specific and comprehensive information about how agencies are managing their records and without the means to hold agencies accountable for shortcomings, NARA's ability to identify and address common records management problems is impaired. As a result, there is reduced assurance that records are adequately managed and that important records are not being lost. The four agencies reviewed—the Department of Homeland Security (DHS); the Environmental Protection Agency (EPA); the Federal Trade Commission (FTC); and the Department of Housing and Urban Development (HUD)—generally preserved e-mail records through paper-based processes, although one agency—EPA—is in the process of deploying an electronic content management system that is to be used for managing e-mail messages that are agency records; two others have long-term plans to develop electronic recordkeeping. Three of the four agencies also used electronic systems to manage documents, correspondence, and so on, but these systems generally did not have recordkeeping features. Each of the business units that we reviewed (one at each agency) maintained, to fulfill its mission, "case" files that were used for recordkeeping. The practice at the units was to include e-mail printouts in the case files if they contained information necessary to document the case—that is, record material. These printouts included transmission data and distribution lists, as required. DHS: DHS primarily uses "print and file" recordkeeping for all records. None of the department's e-mail systems is a recordkeeping system; accordingly, they may be used to store only transitory e-mail records. Officials from the Office of the DHS Chief Information Officer (CIO) told us that DHS e-mail systems house transitory e-mails and retain them for at least 90 days.
In addition, according to the CIO office, although employees can currently access Web-based and Internet-accessible private e-mail systems, the department is taking steps to restrict or remove this access. Although its current recordkeeping is generally paper-based, DHS has begun planning for an enterprisewide Electronic Records Management System. According to the business case submitted by DHS to OMB to justify the proposed investment, the proposed system is to allow electronic storage and retrieval of records by authorized staff throughout DHS and permit the elimination of paper file copies. According to the department's senior records officer, DHS's current records schedules are now media neutral. DHS's records management handbook also provides instructions for both electronic and paper e-mail recordkeeping. In addition, DHS CIO officials told us that the department has implemented several electronic knowledge and document management systems, at least two of which have recordkeeping features but are not used for e-mail recordkeeping. E-mail records were maintained in paper at the DHS business unit reviewed, the Washington Regional Office of Detention and Removal Operations under Immigration and Customs Enforcement (ICE). The primary responsibility of the Office of Detention and Removal Operations is to identify, apprehend, and remove illegal aliens from the United States. To fulfill its mission, the business unit maintained paper-based case files, and these files were used for recordkeeping. To store deportation case information, the unit uses the so-called "alien files" or "A-files." These files are created by DHS's Citizenship and Immigration Services for certain noncitizens, such as immigrants, to serve as the one central file for all of the noncitizen's immigration-related applications and related documents that pertain to that person's activities. The A-files are managed by Citizenship and Immigration Services and shared among DHS components as necessary. Because A-files are paper-based, they require physical transfer from one location to another. To track these files, DHS uses the National File Tracking System, an automated file-tracking system developed to enable all DHS staff at numerous DHS locations around the country to locate, request, receive, and transfer A-files. Each A-file has a National File Tracking System number. According to business unit officials, e-mails would not usually be found in the A-files because the primary use of e-mail was to share information within the business unit, and so it would rarely rise to the level of a record. The A-files mainly contain other kinds of information, including forms from agency information systems, investigation results, charging documents, conviction documents, photos, fingerprints, and memos. A deportation officer provided 10 active open case files for inspection (each officer is usually responsible for 40 to 60 active open immigration cases). The 10 case files contained a total of 18 e-mail records, which included transmittal data and distribution lists. EPA: EPA's current recordkeeping is largely print and file, but the agency is undergoing a transition to electronic recordkeeping, beginning with e-mail records. According to EPA officials, the commitment to establish its Enterprise Content Management System (ECMS), which has recordkeeping features, was a result of an agency decision to develop a long-term solution to manage hurricane records electronically in the wake of Hurricanes Katrina and Rita.
According to a memorandum sent to all EPA employees, the goal was to ensure that these records be placed in a recordkeeping system that met both EPA and NARA requirements, while allowing easy access to the records when needed. At the same time, the agency ordered that the automatic delete function in the agency's e-mail system be deactivated so that no hurricane records could be deleted accidentally. According to agency officials, the e-mail capability of ECMS was available in fiscal year 2007, and the agency expects that by the end of fiscal year 2009, 50 percent of EPA staff and contractors will be using the system. The ECMS repository is an electronic recordkeeping system that uses commercial software that complies with a standard endorsed by NARA. According to officials, as part of its preparations for the transition, EPA recently updated its record schedules so that its treatment of records would be media neutral; this is to facilitate uploading records into ECMS. It has also developed materials, such as a brochure and a user guide, to support its transition. The agency's e-mail systems are not currently used as recordkeeping systems and will not be under ECMS. Accordingly, they can be used to store only transitory e-mail records. Officials also told us that employees could access Web-based e-mail systems for limited personal use, but that they were not permitted to use these for official business. E-mail records were maintained in paper at the EPA business unit reviewed, the Assessment and Remediation Division of the Office of Superfund Remediation and Technology Innovation (part of EPA's Office of Solid Waste and Emergency Response). Among other things, this division processes claims related to Superfund cleanup settlements. Officials from the Office of Superfund Remediation and Technology Innovation told us that recordkeeping for this office was print and file, but that employees were also directed to include all records (including e-mail records) in the office's electronic Superfund Document Management System. This was not a recordkeeping system, but the plan was to integrate it with ECMS for long-term stewardship of Superfund files. According to these officials, they expect to be able to capture Superfund e-mail records in ECMS by fall 2008. Officials of the Assessment and Remediation Division stated that few e-mail messages would be considered records, because most official business regarding claims was conducted through correspondence on letterhead with an original signature. Although copies of these might be sent as e-mail attachments, these officials said, they would not be the official recordkeeping copy. However, division officials stated that e-mail records were more likely to be included in case files regarding "mixed funding" claims related to Superfund cleanup settlements, because these involved communication between regional offices and parties involved in the claims. (Mixed funding refers to the government assuming some proportion of cleanup expenses, with other parties assuming the rest.) According to officials, mixed funding documentation could include e-mail records documenting information to justify claims and facilitate payment. Officials provided a mixed funding case file for inspection, in which they had identified 10 e-mail records. All these records included transmission data and distribution lists, as required. FTC: FTC recordkeeping for e-mail and other records is print and file.
The commission's e-mail system is not a recordkeeping system, and the commission has not implemented the option allowed by NARA's guidance to use the e-mail system for storing transitory e-mail records. The agency has no current plans to institute electronic recordkeeping. According to FTC officials, the commission's processes are largely paper based. The commission's records management guidance states that few e-mails are expected to rise to the level of a record. For example, agency officials explained that official decisions of the commission are generally reached jointly by the commissioners and recorded in documents such as memorandums, letters, and meeting minutes. According to officials, FTC uses a case management system to track work products (such as depositions, filings, and briefs), but this is not a document management or recordkeeping system. According to officials, about 80 percent of all FTC files are case files. The records manager said that the records schedules for FTC programs currently include instructions for e-mail disposition, but that the office is in the process of conducting a records inventory and reassessing records scheduling, with "big bucket" media-neutral scheduling as the next step. According to this official, this approach will provide flexibility in the event that FTC adopts electronic business processes in the future. According to FTC officials, the commission is currently assessing its needs for electronic document management tools, including an electronic recordkeeping system. The CIO told us that agency staff cannot directly access external Web-based e-mail through the agency's Web browsers, and agency employees have been instructed not to use such systems for official FTC business. However, this official said that agency employees may use the commission's remote application delivery environment to obtain limited access to external Web-based e-mail as a convenience. The business unit reviewed at FTC was the Division of Marketing Practices within the Consumer Protection Bureau, which responds to problems of consumer fraud in the marketplace, such as deceptive marketing schemes that use false and misleading information. The division enforces federal consumer protection laws by, among other things, developing rules to protect consumers and filing actions in federal district court for immediate and permanent orders to stop scams and get compensation for scam victims. The business unit follows the FTC's print and file approach to recordkeeping, saving e-mails and other communications if they are related to a case. At this unit, cases are investigations of Internet fraud and marketing practices, each of which is assigned to a lead attorney. Officials provided one closed case file for inspection, consisting of four boxes of records. The case file provided contained about 65 e-mails, all of which included transmittal data and distribution lists. HUD: HUD currently uses a print and file approach to e-mail recordkeeping. The department's e-mail system is not a recordkeeping system, and according to officials, they have not implemented the option allowed by NARA's guidance to use the e-mail system for storing transitory e-mail records. However, as part of an overall modernization plan, HUD is undertaking an enterprise office system modernization project for its records and document management.
According to the business case submitted by HUD to OMB to justify the modernization investment, the HUD Electronic Record System (HERS) will replace eight legacy systems and support the full life cycle of document management activities and correspondence management, including the creation and processing of records, record disposition, and retrieval of historical archived information. HUD plans to implement HERS by the fourth quarter of 2010. In the first phase of the plan, HUD is implementing modernized systems for tracking correspondence and Freedom of Information Act requests. Although the correspondence system is used for tracking e-mail correspondence, it is not a recordkeeping system for e-mail. The business unit reviewed at HUD was the Office of Healthy Homes and Lead Hazard Control. Among other things, this office manages grants related to lead hazard control and conducts investigations to determine compliance with HUD's Lead Disclosure Rule. HUD records management officials stated that each program area has a file plan, and that the Office of Healthy Homes and Lead Hazard Control has its own records schedule. According to officials from the office, most of their business is transacted via certified mail, so that relatively few e-mail messages would be record material. Two units provided active open files for inspection: nine grant files from six Government Technical Representatives in the Program Management and Assurance Division, and four lead hazard investigation case files from one inspector in the Compliance Assistance and Enforcement Division. The nine grant files included 120 e-mail messages, and the four investigation files included 5 e-mail messages, all in the same case file. All 125 of the e-mail records included transmittal data and distribution lists, as required. At three of the four agencies reviewed, the policies in place generally addressed the requirements for e-mail records management that we identified, but each was missing one of the nine requirements. At the fourth agency (HUD), the policies in place did not cover three of eight applicable requirements. According to NARA's regulations on records management, agencies are required to establish policies and procedures that provide for appropriate retention and disposition of electronic records. In addition to including general provisions on electronic records, agency procedures must address specific requirements for e-mail records. The regulations provide minimum requirements, which allow agencies flexibility to establish processes for managing e-mail records that are appropriate to their business, size, and resources. According to the regulations, certain aspects of e-mail must be addressed in the instructions that agencies provide staff on identifying and preserving electronic mail messages, such as the need to preserve transmission data. Agencies are also required to address the use of external e-mail systems that are not controlled by the agency (such as private e-mail accounts on commercial systems such as Gmail, Hotmail, .Mac, etc.). Where agency staff have access to external systems, agencies must ensure that federal records sent or received on such systems are preserved in the appropriate recordkeeping system and that reasonable steps are taken to capture available transmission and receipt data needed by the agency for recordkeeping purposes. One of the four agencies (HUD) had its systems configured so that staff could not access external e-mail applications; thus, this requirement was not applicable for HUD.
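To illustrate what preserving transmission data can involve in practice, the following minimal sketch shows how the sender, recipient, date, and distribution-list headers of a message might be captured before the message is filed. The sketch uses Python's standard e-mail parsing library; the helper name and sample message are illustrative assumptions, not any agency's actual filing procedure.

```python
# Minimal sketch: capturing the transmission data that must accompany
# a filed e-mail record (sender, recipients, date). The helper name and
# sample message are illustrative, not an agency's actual procedure.
from email import message_from_string
from email.policy import default

def capture_transmission_data(raw_message: str) -> dict:
    """Extract header fields to be preserved alongside an e-mail record."""
    msg = message_from_string(raw_message, policy=default)
    return {
        # To and Cc may name distribution lists; list membership must
        # also be preserved so that recipients can be identified later.
        "from": str(msg.get("From", "")),
        "to": [str(v) for v in msg.get_all("To", [])],
        "cc": [str(v) for v in msg.get_all("Cc", [])],
        "date": str(msg.get("Date", "")),
        "subject": str(msg.get("Subject", "")),
    }

if __name__ == "__main__":
    sample = (
        "From: analyst@example.gov\n"
        "To: program-staff@example.gov\n"
        "Date: Mon, 07 Apr 2008 10:15:00 -0400\n"
        "Subject: Grant file update\n"
        "\n"
        "Message body.\n"
    )
    print(capture_transmission_data(sample))
```

Whether this capture happens electronically or by printing the full header block, the point is the same: the transmission data travel with the record into the recordkeeping system.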
In summary, we extracted nine key requirements from the regulation. Agency records management policy and guidance with regard to e-mail must address these requirements, which are shown in table 3. The policies and guidance at three of the four agencies (DHS, FTC, and EPA) each omitted one applicable requirement. At DHS, the policies and guidance did not state that draft documents circulated on e-mail systems are potential federal records. Department officials told us that they recognized that their policies did not specifically address the need to assess the records status of draft documents, and said they planned to address the omission during an ongoing effort to revise the policies. At EPA and FTC, the e-mail management policy did not instruct staff on the management and preservation of e-mail messages sent or received from nongovernmental e-mail systems. According to officials at both agencies, such instructions were not included because agency employees were instructed not to use such accounts for agency business. However, whenever access to such external systems is available at an agency, the agency should provide these instructions. If agency records management policies and guidance are not complete, agency e-mail records may be at increased risk of loss. If agencies do not state that draft documents circulated on e-mail systems are potential records, agency officials may not preserve such record materials. If agencies do not instruct staff on the management and preservation of e-mail messages sent or received from nongovernmental e-mail systems, officials may create or receive e-mail records in external systems that may not be preserved in recordkeeping systems. In the course of our review at EPA, officials told us that this situation may have arisen: they had discovered that certain e-mail messages for a previous Administrator, possibly including records, had not been saved. Specifically, these officials had found an e-mail message from a former Acting Administrator instructing a private consultant not to use the Administrator's EPA e-mail account to discuss a sensitive government issue (World Trade Center issues) but to use a personal e-mail account. EPA officials reported this incident to NARA on April 11, 2008, in a letter that also described the agency's response to the incident and planned safeguards to avoid such incidents in the future; these safeguards included the release of a policy statement prohibiting the use of non-EPA messaging systems for the conduct of agency business and a review of e-mail account auto-delete settings. NARA replied on April 30 that the safeguards EPA planned appeared appropriate. Finally, HUD's policies and guidance did not include, or did not implement, three of eight applicable e-mail records management requirements. For one requirement, HUD's policy was inconsistent with NARA's regulations, and it was silent on two of the requirements. HUD did not fully implement the requirement to ensure that staff are capable of identifying federal records because its e-mail policy states that only the sender is responsible for reviewing the record status of an e-mail. However, NARA's regulation defines e-mail messages as material either created or received on electronic mail systems.
HUD officials acknowledged that the department's policy omits the recipient's responsibility for determining the record status of e-mail messages and stated that the e-mail policy fell short of fully implementing NARA regulations in this regard because the department's practice is not to use e-mail for business matters in which official records would need to be created. However, this practice does not remove the requirement for agency employees to assess e-mail received for its record status, because the agency cannot know that employees will not receive e-mail with record status; the determination of record status depends on the content of the information, not its medium. In addition, two other requirements were missing from HUD's policy: it did not state, as required, that recordkeeping copies of e-mail should not be stored in e-mail systems or that backup tapes should not be used for recordkeeping purposes. HUD officials stated that they considered these requirements to be met by a reference in their policy to the NARA regulations in which the requirements appear. However, this reference is too general to make clear to staff that e-mail systems and backup tapes are not to be used for recordkeeping. Table 4 summarizes the results for the four agencies. If requirements for e-mail management are not included in agency records management policies and guidance, agency e-mail records may be at increased risk of loss. The loss of records that are important for documenting government functions, activities, decisions, and other important transactions could potentially impair agencies' ability to carry out their missions. E-mail messages that qualified as records were not being appropriately identified and preserved for 8 of the 15 senior officials we reviewed. Senior officials at three agencies did not consistently conform to key requirements in NARA's regulations for e-mail records; only at FTC did the four senior officials fully follow these requirements. The other three agencies showed varying compliance: three officials at DHS, two officials at EPA, and three officials at HUD were not following required e-mail recordkeeping practices. Factors contributing to the inconsistent e-mail recordkeeping practices include inadequate training and oversight. Other factors included the difficulty of managing large volumes of e-mail in paper-based recordkeeping systems and the stated practice at one agency that e-mail would not be used for record material. As described, the four agencies primarily used "print and file" recordkeeping systems, which require agency staff to print out e-mail messages for filing as the official recordkeeping copies in designated filing systems. Each agency's policy also required the preservation of e-mail transmission data, distribution lists, and acknowledgments. DHS: At DHS, our review covered three senior officials because, according to DHS officials, the Secretary of Homeland Security did not use e-mail: these officials told us that the Secretary did not have a DHS e-mail account and that he did not conduct any official communications using external nongovernmental e-mail systems. For the three officials reviewed, e-mail management practices did not fully comply with the requirements. None of these officials' e-mails were reviewed for their status as records or filed in an appropriate recordkeeping system. Instead, the officials were using their e-mail accounts to store all e-mails.
Two of the three officials personally managed their e-mail accounts; the third shared this responsibility with a member of his staff. The staff of one of the officials who managed his own e-mail had access to the official's e-mail account, but the staff reviewed or accessed these e-mails only if instructed to do so by the official. The department said that the third official's office administrator had access to calendar functions only. According to one of these senior officials, storing e-mails on the computer is convenient for searching and retrieving. It was this official's opinion that this approach was safe from a legal standpoint because no e-mails were deleted. Nonetheless, using an e-mail system to retain all e-mails indefinitely increases the difficulty of performing searches based on categories of records; in contrast, such searches are facilitated by a true recordkeeping system. Further, if e-mail records are not stored in an appropriate recordkeeping system (paper or electronic), there is reduced assurance that they are useful and accessible to the agency as needed, or that they will be retained for the appropriate period. EPA: At EPA, the e-mail records of two of the four senior officials were being managed in accordance with key requirements reviewed. For these two senior officials, one of whom was the agency head, e-mail records were stored in paper-based recordkeeping systems. The EPA Administrator had two EPA e-mail accounts, one intended for messages from the public and one for communicating with select senior EPA officials (not intended for use by the public). In the paper-based recordkeeping system, of 25 e-mail records inspected, all included transmission data and distribution lists, as required. For the nonpublic account, staff provided eight e-mail records for inspection, all of which also included transmission data and distribution lists. According to EPA officials, the nonpublic account generated few records because the Administrator receives most of his information from other sources, including face-to-face briefings and meetings. For the second senior official, administrative staff told us that the official reviewed e-mail personally and forwarded records to the staff for printing and filing in a paper-based recordkeeping system that followed the agency's records schedules. We selected 20 e-mails from the official's files for examination. These files were associated with four EPA records schedules. All of the e-mails included transmission data and distribution lists, as required. The e-mail records of two other senior officials were not being managed in compliance with requirements, because e-mail records were not being stored in appropriate recordkeeping systems, but rather in the e-mail system: One of these officials was in the process of migrating e-mail records from the e-mail system to ECMS. This official had been storing e-mail records in e-mail system folders since January 2006, in anticipation of the rollout of the ECMS, and had not been using a paper-based recordkeeping system in the interim. The e-mail system's folders were organized according to the agency's records schedules to facilitate the transfer, which was ongoing. Because this senior official did not store e-mail records in a paper-based recordkeeping system during this transition, the official's e-mail account was being used as a recordkeeping system, which is contrary to regulation.
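The staging approach this official used, in which e-mail folders mirror the agency's records schedules so that messages can later be transferred to the recordkeeping system in bulk, can be sketched as follows. The folder names, schedule identifiers, and manifest format are hypothetical illustrations, not EPA's actual ECMS interface.

```python
# Minimal sketch of schedule-based folder staging: e-mail folders are
# named for records schedule items, so a bulk transfer manifest can be
# generated. Folder names and schedule IDs below are hypothetical.
from dataclasses import dataclass

FOLDER_TO_SCHEDULE = {
    "Program Correspondence": "SCHED-001",   # hypothetical schedule items
    "Superfund Mixed Funding": "SCHED-014",
    "Transitory": None,                      # transitory: not transferred
}

@dataclass
class Message:
    folder: str
    subject: str

def build_transfer_manifest(messages):
    """List each message bound for the recordkeeping system, tagged
    with the schedule item that governs its disposition."""
    manifest = []
    for m in messages:
        schedule = FOLDER_TO_SCHEDULE.get(m.folder)
        if schedule is not None:  # skip transitory and unmapped folders
            manifest.append({"subject": m.subject, "schedule": schedule})
    return manifest

if __name__ == "__main__":
    staged = [
        Message("Superfund Mixed Funding", "Claim documentation"),
        Message("Transitory", "Meeting room change"),
    ]
    print(build_transfer_manifest(staged))
```

The design point of such staging is that the filing decision (which schedule item governs a message) is made once, when the message is foldered, rather than again at transfer time.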
However, when the transition to the electronic recordkeeping system is complete, the new system should provide the opportunity for this official's recordkeeping practices to be brought into compliance with requirements. The second official was also saving all e-mail in the e-mail system. EPA officials stated that most of the senior official's e-mail was sent to an administrative assistant, who was responsible for identifying and maintaining the records received and filing them accordingly. However, the administrative assistant for this official stated that although she had been briefed on maintaining and preserving the senior official's calendar in a recordkeeping system, she had not received guidance or training in how to preserve or categorize the official's e-mail for recordkeeping purposes. In addition, the assistant stated that all e-mails remained stored in the e-mail system, where they could be retrieved if necessary. FTC: The four senior officials at FTC were managing e-mail in compliance with key requirements reviewed. These officials were the Chairman and three Commissioners. According to an FTC official, the Commissioners do not discuss substantive issues in e-mails to one another because of the possibility that such group e-mails could be construed as meetings subject to the Sunshine Act, which must be open to the public. FTC staff told us that the then-Chairman and two Commissioners delegated part or all of the responsibility for e-mail management; the remaining Commissioner personally managed e-mails. E-mails with record status were to be printed and filed in the commission's paper-based recordkeeping systems. The FTC recordkeeping systems contained e-mail records of the four officials; of the 155 e-mail records inspected, all included the required distribution lists and transmission data. HUD: One of the four senior officials at HUD was managing e-mail in compliance with key requirements, but for the other three officials, e-mail records were not stored in appropriate recordkeeping systems. The e-mail records for the agency head were being managed in accordance with key requirements. According to HUD officials, management of e-mails for the agency head was delegated to staff: that is, the agency head's e-mails were forwarded by his administrative assistant to the Office of the Executive Secretariat, where they were reviewed for record status and preserved as necessary in paper files. Staff from the Office of the Executive Secretariat used the department's correspondence tracking system to flag 10 e-mail records, which were then retrieved from the paper-based recordkeeping system for inspection; all of these files included the required distribution lists and transmission data. The practices of the three other senior officials varied, but in each case the officials or their staff stated that the officials retained e-mail messages in the e-mail system. One senior official told us that he read his own e-mail and forwarded messages to staff to determine record status. Another official's staff stated that the staff was responsible for managing e-mail, but that the official would determine what should be printed and filed. The third official's staff stated that the official did not review e-mails for record status but forwarded all program-related e-mails to staff, who would decide which e-mails should be included in the program files as records. None of the three senior officials had received records management training, nor had several of their staff.
HUD provided copies of e-mail messages from one senior official for review, but there was no evidence that the messages were stored in an appropriate recordkeeping system, and HUD officials stated that the provided e-mails were not records. They offered to provide similar nonrecord messages for the two other officials, but we declined to review them because the messages would not have addressed the question of whether the officials were storing e-mail records in appropriate recordkeeping systems. Thus, for these three officials the department did not provide examples of printed e-mail records that had been stored in appropriate recordkeeping files. According to department officials, this situation is explained by HUD's practice of not using e-mail for business matters that would produce records; official business is conducted through paper processes and some electronic processes (such as Web-based systems), but rarely through e-mail. Nonetheless, although e-mail may rarely rise to the level of a record under paper-based processes, it does not follow that no e-mail records are ever created or received, as shown by the e-mail records maintained by the department's Executive Secretariat and the Office of Healthy Homes and Lead Hazard Control. The weakness in HUD's policy regarding responsibility for determining which e-mails are records, combined with the lack of training in e-mail records management, reduces the department's assurance that those e-mail messages that are records are being appropriately identified. Factors contributing to the inconsistent practices at the three agencies include inadequate training and oversight, as well as the difficulties of managing large volumes of e-mail with the tools and resources available, which in most cases do not include electronic recordkeeping systems. The regulations require agencies to develop adequate training to ensure that staff implement agency policies. All four agencies have issued guidance and developed training materials, and all state that they performed records management training. For example, according to DHS officials, all three senior officials and staff had received records management training as new employees. However, DHS and HUD had no documentation to indicate that employees had received such training, and our review of practices found instances in which staff did not understand their recordkeeping responsibilities for e-mail and stated that they had not been informed of them or received training. For example, three senior HUD officials had not received training on records management; staff explained that formal briefings had last taken place at that time. Agencies must also periodically evaluate their records management programs, including monitoring of staff determinations of the record status of materials. However, the three agencies have not fully developed and implemented oversight mechanisms and do not determine the extent to which senior officials or other staff are following applicable requirements for e-mail records. According to DHS, it has initiated oversight and review activities, but these are not yet at the pilot stage because of other demands on records management staff, such as completion of records scheduling. EPA has developed an oversight plan and has pilot-tested a records management survey tool, but it has not yet begun agencywide reviews. It plans to fully deploy this tool when ECMS is fully implemented.
HUD had not initiated oversight and review activities, according to officials, because of its practice of not using e-mail for matters that would necessitate the creation of official records. These officials stated that when the department's modernized system for records and document management is in place, the department's e-mail policies will be updated and appropriate oversight and review activities put in place. Unless agencies train staff adequately in records management and perform periodic evaluations or establish other controls to ensure that staff receive training and are carrying out their responsibilities, agencies have little assurance that e-mail records are appropriately identified, stored, and preserved. Further, keeping large numbers of record and nonrecord messages in e-mail systems potentially increases the time and effort needed to search for information in response to a business need or an outside inquiry, such as a Freedom of Information Act request. The volume of e-mail was also cited as contributing to e-mail records management shortcomings: agency officials and staff referred to the difficulty of managing large volumes of e-mail, suggesting that limited resources contributed to their inability to fully comply with records management and preservation policies. To help ensure that e-mail records are managed appropriately, it is helpful to incorporate recordkeeping into the process by which agency staff create and respond to mission-related e-mail. Because this process is electronic, the most straightforward approach is to perform e-mail recordkeeping electronically. All four agencies, however, still rely either entirely or primarily on paper for their recordkeeping systems, even for "born digital" records like e-mail. Weaknesses in the processes in place at three of the four agencies reviewed raise questions about the appropriateness of paper recordkeeping processes for their e-mail records. Simply devoting more resources to paper records management may be neither efficient nor cost-effective, and the agencies have recognized that this is not a tenable long-term solution. EPA is beginning a transition to electronic recordkeeping, and HUD and DHS have plans focused on future enterprisewide transitions. Managing electronic documents, including e-mail, in electronic recordkeeping systems would potentially provide the efficiencies of automation and avoid the expenditure of resources on duplicative manual processes and storage. It is important to recognize, however, that moving to electronic recordkeeping has proved not to be a simple or easy process and that projects at large agencies have presented the most significant challenges. For projects of all sizes, agencies must balance the potential benefits of electronic recordkeeping against the costs of redesigning business processes and investing in technology. NARA has called the decision to move to electronic recordkeeping inevitable. Nonetheless, like other information technology investments, such a move requires careful planning in the context of the specific agency's circumstances, in addition to well-managed implementation. NARA's limited performance of its oversight responsibilities leaves it with little assurance that agencies are effectively managing records, including e-mail records, throughout their life cycle. NARA has an organizational preference for partnering with and supporting agencies' records management activities, which is appropriate for many of its guidance and assistance responsibilities.
However, this preference has led NARA to avoid performing oversight activities that it judged would be perceived negatively—the full-scale inspections/evaluations that it performed in previous years. Although it has performed studies that provide it with insights into records management issues and it has taken action in response to the findings, it has not developed means to evaluate the state of federal records management programs and practices. As a result, NARA's oversight of federal records management programs, including management of e-mail, has been limited. Further, NARA's limited reporting on problems and solutions identified at individual agencies reduces its own ability to hold agencies accountable for addressing identified problems, as well as reducing the ability of agencies to learn from the experience of others. At the four agencies reviewed, e-mail records management policies were generally compliant with NARA regulations, with some exceptions. If policies do not fully conform to regulatory requirements, the likelihood increases that those requirements will not be met in practice. Senior officials at three of the four agencies stored e-mail records in e-mail systems, rather than in recordkeeping systems, which is not in accordance with NARA's regulations. Factors contributing to this noncompliance generally included insufficient training and oversight regarding recordkeeping practices, as well as the burden of handling large volumes of e-mail. Providing adequate training and oversight is a prerequisite for improvement, but real improvements in e-mail recordkeeping may require replacing the paper-based recordkeeping processes currently in place. Properly implemented, the transition to electronic recordkeeping of e-mail has the potential not only to reduce the burden of e-mail management but also to provide positive benefits in improving the usefulness and accessibility of records. To better ensure that federal records, including those that originated as e-mail messages, are appropriately identified, retained, and archived, we recommend that the Archivist of the United States develop and implement an approach to oversight of agency records management programs that provides adequate assurance that agencies are following NARA guidance, including (1) developing various types of inspections, surveys, and other means to evaluate the state of agency records and records management programs; (2) developing criteria for using these means of assessment that ensure that they are regularly performed; and (3) regularly reporting to the Congress and OMB on the findings, recommendations, and agency responses to its oversight activities, as required by law. In addition, we recommend that the Administrator of the Environmental Protection Agency (1) revise the agency's policies to ensure that they appropriately reflect NARA's requirement on instructing staff on the management and preservation of e-mail messages sent or received from nongovernmental e-mail systems and (2) develop and apply oversight practices, such as reviews and monitoring of records management training and practices, that are adequate to ensure that policies are effective and that staff are adequately trained and are implementing policies appropriately. We further recommend that the Chairman of the Federal Trade Commission revise the commission's policies to ensure that they appropriately reflect NARA's requirement to instruct staff on the management and preservation of e-mail messages sent or received from nongovernmental e-mail systems.
We further recommend that the Secretary of Homeland Security (1) revise the department's policies to ensure that they appropriately reflect NARA's requirement to state that draft documents circulated on e-mail systems are potential federal records and (2) develop and apply oversight practices, such as reviews and monitoring of records management training and practices, that are adequate to ensure that policies are effective and that staff are adequately trained and are implementing policies appropriately. Finally, we recommend that the Secretary of Housing and Urban Development (1) revise the department's policies to ensure that they appropriately reflect NARA's requirements to ensure that staff are capable of identifying federal records and to state that e-mail systems must not be used to store recordkeeping copies of e-mail records (other than those exceptions provided in the regulation) and that e-mail system backup tapes should not be used for recordkeeping purposes, and (2) develop and apply oversight practices, such as reviews and monitoring of records management training and practices, that are adequate to ensure that policies are effective and that staff are adequately trained and are implementing policies appropriately. We provided a draft of this report to NARA, DHS, EPA, FTC, and HUD for review and comment. Three agencies provided written comments (which are reproduced in apps. II to IV), and two provided comments via e-mail. All five agencies indicated that they were implementing or intended to implement our recommendations. Three of the five agencies generally agreed with our findings and recommendations. One agency provided information about its use of outside e-mail accounts, and one agency agreed to implement our recommendations but questioned aspects of our report. In written comments, the Archivist of the United States stated that NARA generally agreed with our draft report and would develop an action plan to implement our recommendation. The Archivist also provided technical comments, and we clarified our report to address each of them (see app. II). In e-mail comments, the Director, Records, Publications, and Mail Management at DHS, stated that the department agreed with our draft report and that it correctly represented the condition at the time of the review. The Director also said that future DHS records management policy documents would be revised to reflect our recommendations. In written comments, the Chief Information Officer of EPA stated that the agency accepted our two recommendations. In addition, she provided additional information on the EPA records management program. Finally, this official provided technical comments, which we addressed as appropriate; our assessment of these comments is contained in appendix III. In e-mail comments, an official from FTC's Office of the General Counsel stated that FTC had instructed staff not to use outside e-mail accounts for official business, but it was nonetheless taking action to implement our recommendation by issuing a notice to staff regarding policies and procedures for e-mail records, which included a statement that work-related e-mails inadvertently sent or received from non-FTC accounts must be handled in accordance with the agency's records preservation policies and procedures. Our draft recognized FTC's instruction not to use outside accounts for official business, but also noted that FTC did not totally prohibit access to such accounts.
Because access to outside accounts was available, FTC was required by NARA regulations to provide staff with guidance on the proper handling of e-mail records sent or received through such accounts. FTC also provided technical comments, which we incorporated as appropriate. In written comments, HUD's Acting Chief Information Officer stated that HUD planned to implement our recommendations, but also stated that our draft was inaccurate in three areas: The Acting CIO questioned the clarity of a figure we included to illustrate a decision process that could be used to decide if an e-mail message is a record. As noted in our draft, the illustration is provided as an example to illustrate the kinds of factors that may be considered when deciding whether an e-mail message is a record. The Acting CIO disagreed with our conclusions regarding HUD's compliance with the requirements we reviewed, stating that the department's records policies comply with all these requirements because they incorporate NARA's regulations by reference. While our draft recognized the reference to NARA regulations in HUD's policy, we concluded that such a reference was not adequate to comply with NARA regulations. As we stated in our draft, the reference in HUD's policy is too general to make clear to HUD staff which practices are prohibited. In addition, HUD did not establish procedures to implement the requirements in question, as the regulations require. The Acting CIO questioned the accuracy of a statement on the number of senior officials whose files were reviewed. Our evidence shows that our statement was accurate, but we revised it to include further clarifying detail. We provide more detailed responses to these points in appendix IV. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its date. At that time, we will send copies of this report to the Archivist of the United States, the Administrator of the Environmental Protection Agency, the Chairman of the Federal Trade Commission, the Secretary of Homeland Security, and the Secretary of Housing and Urban Development. Copies will be made available to others on request. In addition, this report will be available at no charge on our Web site at www.gao.gov. If you have questions about this report, please contact me at (202) 512-6240 or koontzl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V. Our objectives were to assess to what extent the National Archives and Records Administration (NARA) provides oversight of federal records management programs and practices, particularly with regard to e-mail; describe processes followed by selected federal agencies to manage e-mail records; assess to what extent the selected agencies' e-mail records management policies comply with federal requirements; and assess compliance of selected senior officials with key e-mail recordkeeping requirements. To determine the extent to which NARA provides oversight of federal agencies for managing and preserving federal e-mail records, we analyzed applicable laws, regulations, and guidance; reviewed NARA's oversight activities from 2003 to 2007, including its reports to OMB and the Congress on records management activities; reviewed NARA's recent records management reports; and interviewed NARA officials.
To address our other objectives, we judgmentally selected four agencies for review based upon several factors. First, we identified four general government functions from those functions that NARA identified in a 2004 resource allocation study as having records that had a direct and significant impact on the rights, welfare, and/or well-being of American citizens or foreign nationals: homeland security, health, economic development, and environmental management. (NARA classified these functions as high risk for rights/accountability.) Next, using NARA's analysis, we compiled a list of the federal agencies and their components that performed those high-risk functions. We further classified each identified agency according to agency structure (a department with component bureaus or agencies, a department with an office structure, an independent agency, or an independent commission) and size (a large department with over 150,000 employees, a small department with less than 11,000 employees, a small independent agency with less than 1,100 employees, or a large independent agency with over 18,000 employees). We then judgmentally selected four agencies from the high-risk list that presented various combinations of structure and size. These were as follows:

Department of Homeland Security (U.S. Immigration and Customs Enforcement): rated by NARA as high on rights and accountability for records in the Homeland Security: Immigrant and Non-Citizen Services function; a department with component agencies; over 162,000 employees.

Department of Housing and Urban Development (Office of Healthy Homes and Lead Hazard Control): rated by NARA as high on rights and accountability for records in the Health: Illness Prevention function; a department with offices; less than 11,000 employees.

Environmental Protection Agency: rated by NARA as high on rights and accountability for records in the Environmental Management: Environmental Remediation function; an independent agency.

Federal Trade Commission: rated by NARA as high on rights and accountability for records in the Economic Development: Business, Trade, Trust, and Financial Oversight function; an independent commission.

At each of the four selected agencies, we assessed e-mail records management policies of the agency; described processes followed by agencies to manage e-mail records, specifically reviewing e-mail records management practices of a business unit associated with the high-risk function; and assessed compliance of four senior officials with key e-mail recordkeeping requirements. We selected a business unit from each organization that (1) performed the particular line of business we identified in our agency selection process and (2) had permanent records that NARA rated high on risk to accountability and citizen rights. Table 5 identifies the business unit we selected at each agency. We also selected four senior officials at each agency. At DHS, EPA, and HUD, we selected the head of the agency, the head of the office responsible for policy, a randomly selected senior official, and the most senior agency official associated with the business unit we inspected. At FTC, we selected the Chairman and three Commissioners. The selected senior officials are listed in table 6. To describe the agencies' e-mail records management practices, we analyzed documents, interviewed appropriate officials at the agency (including business unit officials and staff), and performed limited inspections of selected e-mail records.
To assess each agency's e-mail records management policies, we reviewed the agency's published policy documents, including formal policies and operational manuals, as well as agency-provided responses to a data collection instrument on e-mail management, and compared their contents to the e-mail-related requirements in NARA's records management regulations. To assess compliance of senior officials with key e-mail recordkeeping requirements, we analyzed documents, used data collection instruments to gather information from the senior officials, their staffs, or other appropriate officials, and inspected selected e-mail records. We asked each agency to provide examples of senior officials' e-mail messages stored as records to corroborate their responses. We then analyzed the information provided by the agencies and assessed it against the e-mail requirements in NARA's regulations on federal records. We did not attempt to assess the extent to which the agencies' staff correctly identified e-mail records or the extent to which the agencies' records appropriately included e-mail. The four data collection instruments we used are briefly described in table 7. We performed our work at agency offices in the Washington, D.C., metropolitan area. We conducted this performance audit from April 2007 to May 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

The following are GAO's comments on EPA's written response to our draft report.

1. We clarified our discussion of this topic.
2. We clarified our discussion of this topic.
3. We removed the reference to the 180-day limit.
4. In our discussion of the exchange between EPA and NARA on the incident involving possible loss of e-mail records, we included information on EPA's plan to promulgate a policy on the use of non-EPA e-mail systems.
5. See comment 4. EPA plans to promulgate a policy prohibiting the use of non-EPA e-mail systems for EPA business.
6. We updated our discussion of this topic to reflect NARA's response.
7. We do not use EPA's terminology because we do not find "primary" and "secondary" to be useful descriptions. However, we revised our discussion to clarify the references.
8. See comment 7.
9. If EPA implements the oversight mechanism we recommend, it will help ensure that e-mail records are properly identified and protected.
10. We updated our discussion to indicate when EPA plans to deploy its survey tool.

The following are GAO's comments on HUD's written response, dated May 28, 2008, to our draft report.

1. As noted in our report, the described decision process is an example of one that could be used to determine whether an e-mail message is a record. We did not state that the process is a requirement that must be followed by any particular agency.
2. See comment 1.
3. See comment 5.
4. See comment 5.
5. Our draft noted that HUD incorporated Parts 1220, 1222, and 1228 of NARA's regulations by reference. However, the policy requirements at issue are contained in Part 1234 of NARA's regulations. In its comments, HUD argues that the Parts it cites incorporate Part 1234 by reference.
We do not agree with HUD that this type of indirect reference is a sufficient or effective way of informing HUD staff of their e-mail recordkeeping responsibilities as well as of prohibited practices. In addition, HUD did not fully implement the applicable e-mail management requirements because it did not establish appropriate procedures to protect e-mail records.
6. See comment 5.
7. The text suggested by HUD is incorrect in that we requested copies of e-mail records from all three selected officials. We revised our report to provide additional detail on this.
8. We agree that enhancing HUD's policies on e-mail records as we recommend could increase their usability by all HUD officials and staff; among other things, this could clarify for HUD staff which practices are prohibited.
9. We agree that not every e-mail is an official record, and we emphasized this point in our report. However, we also emphasized that the content of a communication, not its form, determines its record status.

In addition to the individual named above, Mirko Dolak and James R. Sweetman, Jr. (Assistant Directors); Monica Anatalio; Timothy Case; Barbara Collier; Pamlutricia Greenleaf; Jennifer Franks; Tarunkant N. Mithani; Sushmita Srikanth; and Jennifer Stavros-Turner made key contributions to this report.
Federal agencies are increasingly using electronic mail (e-mail) for essential communication. In doing so, they are potentially creating messages that have the status of federal records, which must be managed and preserved in accordance with the Federal Records Act. Under the act, both the National Archives and Records Administration (NARA) and federal agencies have responsibilities for managing federal records, including e-mail records. In view of the importance that e-mail plays in documenting government activities, GAO was asked, among other things, to review the extent to which NARA provides oversight of federal records management, describe selected agencies' processes for managing e-mail records, and assess these agencies' e-mail policies and key practices. To do so, GAO examined NARA guidance, regulations, and oversight activities, as well as e-mail policies at four agencies (of contrasting sizes and structures) and the practices of selected officials. Although NARA has responsibilities for oversight of agencies' records and records management programs and practices, including conducting inspections or surveys, performing studies, and reporting results to the Congress and the Office of Management and Budget (OMB), in recent years NARA's oversight activities have been primarily limited to performing studies. NARA has conducted no inspections of agency records management programs since 2000, because it uses inspections only to address cases of the highest risk, and no recent cases have met its criteria. In addition, NARA has not consistently reported details on records management problems or recommended practices that were discovered as a result of its studies. Without more comprehensive evaluations of agency records management, NARA has limited assurance that agencies are appropriately managing the records in their custody and that important records are not lost. The four agencies reviewed generally managed e-mail records through paper-based processes, rather than using electronic recordkeeping. A transition to electronic recordkeeping was under way at one of the four agencies, and two had long-term plans to use electronic recordkeeping. (The fourth agency had no current plans to make such a transition.) Each of the business units that GAO reviewed (one at each agency) maintained "case" files to fulfill its mission and used these for recordkeeping. The practice at the units was to include e-mail printouts in the case files if the e-mail contained information necessary to document the case--that is, record material. These printouts included transmission data and distribution lists, as required. All four agencies had e-mail records management policies that addressed, with a few exceptions, the requirements in NARA's regulations. However, the practices of senior officials at those agencies did not always conform to requirements. Of the 15 senior officials whose practices were reviewed, the e-mail records for 7 (including all 4 at one agency) were managed in compliance with requirements. (One additional official was selected for review but did not use e-mail.) The other 8 officials generally kept e-mail messages, record or nonrecord, in e-mail systems that were not recordkeeping systems. (Among other things, recordkeeping systems allow related records to be categorized according to their business purposes.) If e-mail records are not kept in recordkeeping systems, they may be harder to find and use, as well as being at increased risk of loss from inadvertent or automatic deletion. 
Factors contributing to noncompliance included insufficient training and oversight as well as the difficulties of managing large volumes of e-mail. Without periodic evaluations of recordkeeping practices or other controls to ensure that staff are trained and carry out their responsibilities, agencies have little assurance that e-mail records are properly identified, stored, and preserved.
FHA was established in 1934 under the National Housing Act (P.L. 73-479). The primary purpose of FHA's Fund is to insure private lenders against losses on mortgages that finance purchases of one to four housing units. There are two primary sources and three uses of cash for the Fund. The two sources of cash are income from mortgagees' premiums and net proceeds from the sale of foreclosed properties. The three uses of cash are (1) payments associated with claims on foreclosed properties, (2) refunds of premiums on mortgages that are prepaid, and (3) administrative expenses for management of the program. To cover losses, FHA deposits insurance premiums from participating borrowers in the Fund. According to 12 U.S.C. 1711, the Fund must meet or endeavor to meet statutory capital ratio requirements; that is, it must contain sufficient reserves and funding to cover estimated future losses resulting from the payment of claims on defaulted mortgages and administrative costs. A determination of reserves and funding to cover estimated future losses requires the use of an accrual basis of accounting. The accrual concept is particularly important for an entity such as FHA (or any insurance enterprise) because the actual payout or collection of cash may precede or follow the event that gave rise to the cash transaction by a substantial time period. Thus, a favorable cash position, or positive cash flow, at any given point may not reflect the true financial position of the entity. The Fund remained relatively healthy until the 1980s, when losses were substantial primarily because foreclosure rates were high in economically stressed regions, particularly in the Rocky Mountain and Southwest regions. For example, in fiscal year 1988, the Fund lost $1.4 billion. If the Fund were to be exhausted, the U.S. Treasury would have to directly cover lenders' claims and administrative costs. Reforms designed to restore financial stability to the Fund and to correct problems in loan origination and property disposition were initiated by the Congress and HUD. The Omnibus Budget Reconciliation Act of 1990 (P.L. 101-508), enacted in November 1990, contained reforms to FHA's single-family mortgage insurance program designed to place the Fund on a financially sound basis. The legislation, among other things, required FHA borrowers to pay more in insurance premiums over the life of the loans by adding a risk-based annual premium to the one-time, up-front premium. It effectively raised the present value of the insurance premium from 3.8 percent of the loan amount to between 5.5 and 6.8 percent, depending on the amount of the down payment made. It accomplished this change via two actions: lowering the up-front premium from 3.8 to 2.25 percent of the loan amount over a 4-year transitional period and, during the same period, phasing in a new annual premium of 0.5 percent of the loan balance. Those borrowers who make higher down payments pay the annual premium for a shorter period. (A rough present-value sketch of this premium structure appears below.) Other changes made by the legislation in response to the Fund's financial problems included (1) limiting the loan-to-value ratio to a maximum of 97.75 percent of the value of homes appraised at more than $50,000 and (2) effectively suspending the payment of distributive shares (distribution of excess revenues to mortgagors) until the Fund is financially sound. The legislation also required the Secretary of HUD to endeavor to ensure a capital ratio of 2 percent by November 2000 and maintain that ratio or a higher one at all times thereafter.
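The sketch below illustrates the premium mechanics only. It assumes a 9 percent annual discount rate and a level loan balance; both are illustrative assumptions, not figures from this report, so the sketch does not reproduce FHA's actuarial calculation.

    # Rough present value of the post-1990 FHA premium structure, as a
    # fraction of the loan amount. The 9% discount rate, level balance,
    # and premium durations are illustrative assumptions, not figures
    # from this report.

    def premium_present_value(upfront=0.0225, annual=0.005,
                              rate=0.09, years=30):
        pv_annual = sum(annual / (1 + rate) ** t for t in range(1, years + 1))
        return upfront + pv_annual

    # Higher down payments mean fewer years of annual premiums.
    print(f"{premium_present_value(years=7):.3f}")   # ~0.048
    print(f"{premium_present_value(years=30):.3f}")  # ~0.074

The report's 5.5 to 6.8 percent range reflects FHA's actual balance schedules and discounting conventions, which this sketch deliberately simplifies.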
The act defined the capital ratio as the ratio of the Fund’s capital, or economic net worth, to its unamortized insurance-in-force. We and HUD’s Inspector General have been reporting on FHA’s management problems since the early 1980s. We have concluded in previous testimonies and reports that in addition to economic factors, poor program management and waste, fraud, and abuse contributed to the losses sustained by FHA’s Fund. For example, FHA did not have accounting data and internal controls in place to reconcile funds from the sales of government-owned properties with deposits to the U.S. Treasury. As a result, private real estate agents were able to steal millions of dollars by simply retaining the proceeds from the sale of FHA-owned properties rather than transferring the funds to the Treasury. HUD’s efforts to improve the financial stability of the Fund have consisted of initiating several audits of the Fund; modifying the program, primarily to tighten controls and improve monitoring; and developing automated systems. For example, to reduce problems with loan origination, HUD tightened its screening of applicants, took steps to improve how it targets its efforts to monitor lenders, and strengthened appraisal requirements. To reduce problems with property disposition, HUD, among other things, tightened controls over closing agents and area management brokers and took actions to improve property pricing and automated accounting and management systems. Any success achieved by HUD and FHA in reducing FHA’s losses through better management will improve the Fund’s financial health. The Fund had amortized insurance-in-force valued at about $305 billion as of September 30, 1994. To estimate the economic net worth of, and resulting capital ratio for, these loans over their life of up to 30 years, we developed an economic model of FHA’s home loan program. We generated three different economic scenarios, assuming for each a different rate of appreciation in house prices over the next 30 years. The actual economic net worth and capital ratios of the Fund and the validity of our estimates will depend on a number of future economic factors, including the rate of appreciation in house prices over the life of the FHA mortgages of up to 30 years. This factor is significant because, as house prices rise, the borrowers’ equity increases and the probability of defaults and subsequent foreclosures decreases. The house price appreciation, interest, and unemployment rates that we used were based on forecasts from DRI/McGraw-Hill, a private economic forecasting company. Table 1 presents our estimates of the economic net worth and resulting capital ratios for the FHA mortgage loans outstanding as of September 30, 1994, under each of our three economic scenarios. Although future rates of appreciation in house prices are uncertain, to be conservative, we placed greater reliance on our mid-range baseline economic scenario because it assumes slightly lower house price appreciation rates (1 percent annually) than the rates forecasted by DRI/McGraw-Hill. Under this scenario, we estimate that the Fund had an economic net worth of about $6.1 billion and resulting capital ratio of 2.02 percent at the end of fiscal year 1994. This estimate represents an improvement of about $8.8 billion from the lowest level reached by the Fund—a negative $2.7 billion economic net worth estimated by Price Waterhouse at the end of fiscal year 1990. 
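In these terms, the baseline figures just cited imply

    \text{capital ratio} \;=\; \frac{\text{economic net worth}}{\text{insurance-in-force}} \;=\; \frac{\$6.1\ \text{billion}}{\$305\ \text{billion}} \;\approx\; 2.0\ \text{percent},

where the 2.02 percent reported above reflects unrounded amounts.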
Under our low-case economic scenario, which assumes house price appreciation rates 2 percentage points lower than our baseline and a higher unemployment rate, we estimate that the Fund's economic net worth would be $3 billion. Conversely, under our high-case economic scenario, which assumes house price appreciation rates 2 percentage points higher than our baseline, we estimate that the Fund's economic net worth would be $7.4 billion. We estimate that the economic net worth of the Fund increased under our baseline scenario by about $1.2 billion during fiscal year 1994, from $4.9 billion at the end of fiscal year 1993 to $6.1 billion at the end of fiscal year 1994. This increase occurred even though large numbers of FHA borrowers continued to lower their interest rates during fiscal year 1994 by refinancing their mortgages conventionally, which resulted in partial refunds of their insurance premiums. A detailed discussion of factors contributing to the $1.2 billion growth in the Fund's economic net worth during fiscal year 1994 appears in appendix I. We estimate that FHA's Fund, with a capital reserve ratio of 2.02 percent of the amortized insurance-in-force, surpassed the November 2000 capital ratio target of 2 percent during fiscal year 1994. Therefore, the Fund has sufficient capital reserves to meet the capital ratio target. Whether the Fund will be able to maintain the capital ratio will depend on a number of factors that will prevail in the future. These factors include (1) economic conditions, (2) changes to the program that affect the financial condition of the Fund, (3) the performance of FHA's streamlined refinanced loans, and (4) risks associated with the demand for FHA's loans. We did not attempt to project the economic net worth and capital ratio of the Fund to fiscal year 2000 because these factors are likely to change. As shown in table 1, our estimates are sensitive to future economic conditions, particularly house price appreciation rates. The Fund will not perform as well if the economic conditions that prevail over the next 30 years replicate those we assumed in our low-case economic scenario. Our estimate of the Fund's economic net worth for our low-case economic scenario is about $3 billion, or 49 percent, less than that of our baseline scenario. Under economic scenarios having generally favorable economic conditions but lower rates of appreciation in house prices, such as our low-case economic scenario, FHA's Fund would likely experience higher claims. As a result, its economic net worth would decline. FHA's support of single-family mortgages could be altered by changes to the program proposed by the administration and others. The administration's proposals, which are part of its efforts to "reinvent government," would recreate FHA as a wholly owned government corporation. As such, the single-family insurance operations of a new FHA would be, among other things, free to introduce new product lines, enter into risk-sharing arrangements with private and public entities, and operate under more flexible personnel and procurement practices. Other proposals would limit FHA's participation in single-family mortgages to low-income individuals and first-time home buyers only. Specific information on the customers that a new FHA single-family mortgage insurance program would serve, the relationship that a new program would establish with partners in the housing market, and the mix of products that a new program would offer is not yet known.
The extent to which this or some other restructuring alternative is implemented will have to be decided by the Congress through the legislative and appropriation processes. However, no matter what form FHA takes, these changes will likely have an effect on the Fund's economic net worth. The substantial refinancing of FHA's loans that occurred during fiscal years 1992 through 1994 has created a growing class of FHA borrowers whose future behavior is more difficult to predict than the typical FHA borrower's. FHA's streamlined refinanced mortgages accounted for about 40 percent of the loans originated by FHA in fiscal year 1994. About 19 percent of FHA's amortized insurance-in-force at the end of fiscal year 1994 consisted of streamlined refinanced mortgages, for which there is little experience with the tendency for such loans to be foreclosed and/or prepaid. Because FHA did not require appraisals of the properties whose mortgages were streamlined refinanced, the initial loan-to-value ratio of these loans—a key predictor of the probability of foreclosure—is unknown. The impact of these loans on the financial health of the Fund is probably positive, since they represent preexisting FHA business whose risk has been reduced through lower interest rates and lower monthly payments. However, the lack of experience with these loans increases the uncertainty associated with their expected foreclosure rates. This refinancing activity also raises questions about the credit quality of the loans that were not refinanced despite the fall in interest rates. Since, under these circumstances, most borrowers who could refinance would find it to their financial advantage to do so, those borrowers who did not refinance may not have been able to qualify for a new loan. This suggests that future foreclosure rates on these loans, which originated in previous years when interest rates were higher, may be greater than we have forecasted. As additional years of experience with these loans are gained, their effect on the Fund's financial status will become more certain. New developments in the private mortgage insurance market may increase the average risk of future FHA-insured loans. Home buyers' demand for FHA-insured loans depends, in part, on the alternatives available to them. Some private mortgage insurers recently began offering mortgage insurance coverage on conventional mortgages with a 97-percent loan-to-value ratio, which brings their terms closer to FHA's 97.75-percent loan-to-value ratio on loans for properties exceeding $50,000 in appraised value. While potential home buyers may consider many other factors when financing their mortgages, such as the fact that FHA will finance the up-front premium as part of the mortgage loan, this action by private mortgage insurers could reduce the demand for FHA-insured mortgage loans. In particular, by lowering the required down payment, private mortgage insurers may attract some borrowers who might have otherwise insured their mortgages with FHA. If, by selectively offering these low down payment loans, private mortgage insurers are able to attract FHA's lower-risk borrowers, such as borrowers with better-than-average credit histories or payment-to-income ratios, new FHA loans may become more risky on average. If this effect is substantial, the economic net worth of the Fund may be adversely affected, and it may be more difficult for the Fund to maintain a 2-percent capital ratio.
Price Waterhouse has performed annual actuarial reviews of the Fund for FHA since 1990. In its most recent report, dated May 8, 1995, Price Waterhouse reported that the Fund had an economic net worth of about $6.68 billion—compared with our baseline estimate of $6.1 billion—and a resulting capital ratio of 1.99 percent of the unamortized insurance-in-force as of the end of fiscal year 1994—compared with our baseline estimate of 2.02 percent of the amortized insurance-in-force. It also reported that the Fund will meet the fiscal year 2000 capital ratio of 2 percent of the unamortized insurance-in-force with a capital ratio of 3.03 percent and that the economic net worth of the Fund will be about $15.2 billion. These projections are based on forecasted economic assumptions and the assumption that FHA does not change its premium and refund policies. Although our estimate of the Fund's economic net worth is lower than Price Waterhouse's estimate by about 9 percent, in view of the uncertainty associated with any forecast of the performance of the Fund's loans over their life of up to 30 years, these estimates can be considered roughly equivalent. Each of us used somewhat different modeling techniques and assumptions that account for some of the $580 million difference. However, in general, our model and Price Waterhouse's rely on many of the same key factors, such as the rates of appreciation in house prices and changes in mortgage interest rates, as important determinants of mortgage terminations and the economic net worth of the Fund. However, our estimate of the Fund's capital ratio is slightly higher than Price Waterhouse's estimate—2.02 percent compared with 1.99 percent—even though our estimate of economic net worth is lower than Price Waterhouse's. The primary reason is that we used a lower insurance-in-force amount ($305 billion of amortized insurance-in-force) to calculate the capital ratio than Price Waterhouse did ($335 billion of unamortized insurance-in-force); a short computation below illustrates the effect. As discussed previously, the act defined the capital ratio as the ratio of the Fund's economic net worth to its unamortized insurance-in-force. However, the act's definition of unamortized insurance-in-force as the remaining obligation on outstanding mortgages is commonly understood to be the definition of the amortized insurance-in-force. The insurance-in-force amount that we used differs from the amount used by Price Waterhouse primarily because we deducted the loan principal payments made to date to arrive at an amortized insurance-in-force amount of $305 billion. We calculated the capital ratio on the basis of the amortized insurance-in-force rather than the unamortized insurance-in-force that Price Waterhouse used. We used the amortized insurance-in-force for our calculations because FHA-insured mortgages are in fact fully amortized over the 30-year life of the loans. Therefore, the amortized insurance-in-force represents a better measure of the Fund's potential liability. Price Waterhouse used the unamortized insurance-in-force for its calculations to be consistent with its previous reports and because the data on unamortized insurance-in-force are considered more reliable than the data on amortized insurance-in-force. However, Price Waterhouse also reported that its estimate of the capital ratio using the amortized insurance-in-force was 2.16 percent rather than 1.99 percent. FHA's Fund has accumulated the capital reserves needed to meet the legislative capital reserve target of 2 percent.
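The computation referred to above, using the report's rounded figures (in billions of dollars), is as follows; because the inputs are rounded, it reproduces the published ratios only approximately.

    # Same capital ratio formula, different denominators (figures in
    # billions of dollars, rounded as published in the report).
    gao_net_worth, gao_base = 6.1, 305.0   # amortized insurance-in-force
    pw_net_worth, pw_base = 6.68, 335.0    # unamortized insurance-in-force

    print(f"GAO:              {gao_net_worth / gao_base:.2%}")   # ~2.00%
    print(f"Price Waterhouse: {pw_net_worth / pw_base:.2%}")     # ~1.99%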
Clearly, the legislative and other program changes have helped restore the Fund’s financial health and reverse the trend of the late 1980s and early 1990s toward insolvency. However, it should be recognized that fiscal year 1994 was a good year for FHA because actual economic conditions and forecasts of future economic conditions were favorable. Nevertheless, forecasting economic net worth and resulting capital ratios to determine whether FHA will have the funds it needs to cover its losses over the life of the loans it has insured for up to 30 years is uncertain. The performance of FHA’s loans, and therefore economic net worth and capital ratios, will depend on a number of economic and other factors, particularly on the rates of appreciation in house prices and the alternative, if any, that the Congress implements to restructure FHA. We provided a draft of this report to HUD and Price Waterhouse for their review and comment. We met with HUD officials, including the FHA Comptroller; HUD’s Director of the Program Evaluation Division, Office of Evaluation; and an official from HUD’s Office of Policy Development and Research, and obtained their comments. HUD officials generally agreed with the factual material and conclusions presented in the report. The comments by the Director, Program Evaluation Division, focused on (1) the effect of proposed changes to FHA’s program and (2) FHA’s actions to improve the financial health of the Fund. Specifically, the Director commented that the draft report’s discussion of proposed changes to the program implied that they would have an exclusively negative impact on the financial health of the Fund. He believes that many of the proposed changes will have a positive effect on the Fund’s financial health. The report was changed to eliminate this implication. The Director also commented that the draft report attributed progress in achieving the capital reserve target to favorable actual economic conditions and favorable forecasts of future economic conditions. He pointed out that FHA had taken many actions to improve the financial health of the program, such as revising the premium refund schedule, and that this contribution to economic health should be recognized in our report. We agree. As pointed out in our October 1994 report on the economic net worth of the Fund as of September 30, 1993, we estimated that if FHA had not revised its premium refund schedule, the economic net worth of the Fund would have been about $500 million (10 percent) less than our baseline estimate. We added to our report information on additional actions taken by FHA to improve the financial health of the Fund. However, we continue to believe that favorable prevailing and forecasted economic conditions were primarily responsible for this improvement. As noted in our report, under our low-case economic scenario, which assumes house price appreciation rates 2 percentage points lower than our baseline, we estimated that the Fund’s economic net worth would be $3 billion, rather than $6.1 billion. HUD’s Office of Policy Development and Research official commented that the methodology we used is fundamentally sound and provides a welcome second opinion to Price Waterhouse’s actuarial review. This official also provided technical comments on our model’s specification and interpretation of statistical results. 
He commented that if the technical comments cannot be addressed in the report and the Congress asks us to estimate the economic net worth of the Fund in the future, we should consult further with the Office of Policy Development and Research on our cash flow and economic models. We have revised the report to address many of the issues concerning the model’s specification and interpretation that were raised by HUD’s Office of Policy Development and Research. If the Congress asks us to do more work in this area, we will consult further with HUD on our models. We also met with a Price Waterhouse official, who commented that our economic model was solid. Price Waterhouse also provided technical comments, which we incorporated where appropriate. To estimate the economic net worth of FHA’s Fund as of September 30, 1994, and its resulting capital ratios under different economic scenarios, we examined existing studies on the single-family housing programs of both HUD and the Department of Veterans Affairs (VA), academic literature on the modeling of mortgage foreclosures and prepayments, and previous work performed by Price Waterhouse, HUD, VA, us, and others on modeling government mortgage programs. On the basis of this examination, we developed econometric and cash flow models to prepare our estimates. For these models, we used data supplied by FHA and DRI/McGraw-Hill, a private economic forecasting company. Our econometric analysis estimated the historical relationships between the probability of loan foreclosure and prepayment and key explanatory factors, such as the borrower’s equity and the interest rate. To estimate these relationships, we used data on the performance of FHA-insured home mortgage loans—such as data on foreclosure, prepayment, and loss rates—originated from fiscal year 1975 through fiscal year 1994. Also, using our estimates of these relationships and of economic conditions, we developed a baseline forecast of future loan performance to estimate the Fund’s economic net worth and resulting capital ratio. We then developed additional estimates that assumed higher and lower future rates of appreciation in house prices; the scenario with the lower rates of appreciation of house prices also assumed higher unemployment. To estimate the net present value of future cash flows of the Fund, we constructed a cash flow model to measure the primary sources and uses of cash for loans originated from fiscal year 1975 through fiscal year 1994. Our model was constructed to estimate cash flows for each policy year through the life of a mortgage. An important component of the model was the conversion of all income and expense streams—regardless of the period in which they actually occur—into 1994 dollars. In addition to estimating the economic net worth of the Fund as a whole, we also generated approximations of the economic net worth of the loans originated in the 2 most recent fiscal years. To conduct this analysis, it was necessary not only to project future cash flows but also to estimate the level of past cash flows. To test the validity of our model, we examined how well our model predicted the actual rates of FHA’s loan foreclosures and prepayments through fiscal year 1994. We found that our predicted rates closely resembled the actual rates. To compare our estimate of the Fund’s economic net worth with the estimate prepared for FHA by Price Waterhouse, we compared our economic model with the model developed by Price Waterhouse. 
We also discussed with Price Waterhouse officials differences in the models and methods for forecasting the Fund’s economic net worth. A detailed discussion of our models and methodology for forecasting the economic net worth of FHA’s Fund appears in appendix II. We conducted our work from April 1995 through February 1996 in accordance with generally accepted government auditing standards. Unless you announce its contents earlier, we plan no further distribution of this report until 10 days from the date of this letter. At that time, we will send copies to interested congressional committees; the Secretary of HUD; and the Director, Office of Management and Budget. We will make copies available to others on request. Please contact me at (202) 512-7631 if you or your staff have any questions. Major contributors to this report are listed in appendix III. We estimate that during fiscal year 1994, the economic net worth of the Federal Housing Administration’s (FHA) Mutual Mortgage Insurance Fund increased by about $1.2 billion. This increase is attributable to our estimates of positive contributions to economic net worth made by two factors—the inclusion in our estimate of some fiscal year 1993 loans that had been excluded from our fiscal year 1993 estimate and loans insured by FHA in fiscal year 1994. The increase in economic net worth attributable to these two factors was offset to some extent by our estimate of a decrease in the Fund’s economic net worth from loans insured by FHA in fiscal year 1993 and earlier years. Table I.1 summarizes these factors. Data provided by FHA last year and used in our September 30, 1993, economic net worth estimates did not include information on all loan originations and terminations occurring in fiscal year 1993. FHA subsequently updated its records to include the remaining fiscal year 1993 activity. As shown in table I.1, including this loan activity increases our estimate of the Fund’s economic net worth by about $260 million, resulting in a revised estimate of about $5.16 billion as of the end of fiscal year 1993. We also estimate that loans insured by FHA in fiscal year 1994 contributed about $1.4 billion to the economic net worth of the Fund. This represents the third consecutive year in which the Fund’s new loans made a positive contribution to the Fund’s economic net worth. However, this increase in economic net worth was reduced by a $460 million decrease in the economic net worth of loans insured by FHA in fiscal year 1993 and earlier years. As a result, a net increase of $940 million was realized in our baseline estimate during fiscal year 1994, bringing our baseline economic net worth estimate as of September 30, 1994, to $6.1 billion. The $460 million decrease in the Fund’s estimated economic net worth for loans insured by FHA in fiscal year 1993 and earlier years is the result of several factors, some of which involved large increases or decreases in economic net worth. Table I.2 summarizes the factors contributing to changes in the economic net worth of loans made in fiscal year 1993 and earlier years. We estimate that the economic net worth of the Fund’s loans made in fiscal year 1993 and earlier years increased by about $620 million because updated data showed that these loans performed better in fiscal year 1994 than previously forecasted. This occurred, in part, because during fiscal year 1994, house prices increased more rapidly, and the unemployment rate was lower than in previous economic forecasts. 
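For reference, the top-level reconciliation summarized in table I.1 works out exactly with the rounded amounts given above (in billions of dollars):

    # Change in the Fund's estimated economic net worth during FY 1994.
    revised_fy1993 = 4.9 + 0.26   # original estimate plus late-reported FY 1993 activity
    new_fy1994_loans = 1.4        # contribution of loans insured in FY 1994
    older_loans = -0.46           # change for loans insured in FY 1993 and earlier

    print(f"{revised_fy1993 + new_fy1994_loans + older_loans:.1f}")  # 6.1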
Interest earned on investments accounted for an estimated increase of $618 million. Offsetting these increases were several factors that resulted in a decrease in the estimated economic net worth of the Fund's loans made in fiscal year 1993 and earlier years. A $235 million decrease in economic net worth is attributable to our revised forecasts for loan foreclosures and prepayments for these loans during fiscal year 1995 and beyond. These revisions resulted largely from revised assumptions of future economic conditions that, in combination, had a less favorable financial effect on the Fund. A $953 million decrease occurred because our 1994 forecast uses an economic model different from the model we used to derive our fiscal year 1993 estimate. Our revised model uses a different statistical approach and recognizes the higher risks associated with the performance of refinanced and adjustable-rate mortgages rather than treating these mortgages like other FHA mortgages, as we have done in the past. The Fund's economic net worth was also reduced by about $273 million because we updated our calculation of the present value of future cash flows using fiscal year 1994 instead of 1993 as our base, which increases the present value of future cash flows (which are negative) because they are discounted by 1 less year of interest. That is, because we are 1 year closer to paying claims associated with future foreclosures, the present value of these claims against the Fund is larger. The remaining $237 million decrease was attributable to other factors. This appendix describes the econometric and cash flow models that we built and the analysis we conducted to estimate the economic net worth of the Federal Housing Administration's (FHA) Mutual Mortgage Insurance Fund (Fund) as of the end of fiscal year 1994. The goal of the econometric analysis was to forecast mortgage foreclosure and prepayment activity, which affect the flow of cash into and out of the Fund. We forecasted activity for all loans active at the end of fiscal year 1994 for each year from fiscal year 1995 through fiscal year 2024 on the basis of assumptions stated in this appendix. We estimated equations from data covering fiscal years 1975 through 1994 that included all 50 states and the District of Columbia but excluded U.S. territories. Our forecasting models used observations on loan-quarters, that is, information on the characteristics and status of an insured loan during each quarter of its life, to predict conditional foreclosure and prepayment probabilities. More specifically, our model used a continuous time estimation routine, CTM, to jointly predict the probabilities of a loan terminating in a claim or a prepayment at a given time, as a function of interest and unemployment rates, the borrower's equity (computed using a house's price and current and contract interest rates as well as a loan's duration), the loan-to-value (LTV) ratio, the house price, the geographic location of the house, and the length of time that the loan has been active. Cash flows out of the Fund when FHA pays a claim on a foreclosed mortgage and when a prepaid mortgage results in the partial refund of a premium. Cash flows into the Fund when FHA sells the foreclosed property and when borrowers pay the premium for the mortgage insurance. We forecasted the cash flows into and out of the Fund on the basis of our foreclosure and prepayment models and key economic variables provided by DRI/McGraw-Hill, a leading economic forecasting firm.
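The following sketch shows, in stylized form, how these pieces fit together: conditional foreclosure and prepayment probabilities from the econometric model drive expected claim payments, premium refunds, and premium income, which are then discounted into 1994 dollars. Every input below (the probabilities, loss severity, refund and premium rates, and the 7 percent discount rate) is an illustrative assumption, not one of the report's estimates.

    # Stylized link between the econometric and cash flow models.
    # Amounts are fractions of the original insured loan balance;
    # all inputs are illustrative, not the report's estimates.

    DISCOUNT_RATE = 0.07  # assumed rate for converting to 1994 dollars

    surviving = 1.0       # share of insured loans still active
    pv_1994 = 0.0
    for year in range(1995, 2005):
        p_foreclose, p_prepay = 0.008, 0.06  # would come from the hazard model
        claims = surviving * p_foreclose * 0.35   # assumed net loss per claim
        refunds = surviving * p_prepay * 0.01     # assumed premium refund
        premiums = surviving * 0.005              # annual premium income
        net = premiums - claims - refunds
        pv_1994 += net / (1 + DISCOUNT_RATE) ** (year - 1994)
        surviving *= 1.0 - p_foreclose - p_prepay

    print(f"net present value in 1994 dollars: {pv_1994:.4f}")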
We then used the forecasted cash flows, including an estimate of interest that would be earned or foregone, and the Fund's capital resources to estimate the economic net worth of the Fund. We conducted separate estimations for investors' mortgages, fixed-rate mortgages with terms of 25 years or more (hereafter referred to as 30-year mortgages), fixed-rate mortgages with terms of less than 25 years (hereafter referred to as 15-year mortgages), and adjustable-rate mortgages (ARMs). The 30-year fixed-rate mortgages and investor mortgages were further divided into new (purchase money) and refinancing mortgage samples. The data we used, our models, and the results we obtained are discussed in detail in the following sections. For our analysis, we selected from FHA's computerized files a random sample of 1.4 million mortgages insured by FHA from fiscal year 1975 through fiscal year 1994. From FHA's records, we obtained information on the initial characteristics of each loan, such as the year of the loan's origination and the state in which the loan originated; the LTV ratio; the loan's amount; and the contract's interest rate. We categorized the loans as foreclosed, prepaid, or active as of the end of fiscal year 1994. To describe macroeconomic conditions at the national and state levels, we obtained data from the 1995 Economic Report of the President on the implicit price deflator for personal consumption expenditures. The Federal Home Loan Mortgage Corporation's quarterly interest rates for 30-year fixed-rate mortgages were used, along with DRI/McGraw-Hill's state-level data on median house price appreciation and civilian unemployment rates and on interest rates on 1-year and 10-year U.S. Treasury bonds. People buy homes for consumption and investment purposes. Normally, people do not plan to default on loans. However, conditions that lead to defaults occur. Defaults may be triggered by a number of events: unemployment, divorce, death, and so forth. These events are not likely to trigger foreclosure if the owner has positive equity in his/her home because the sale of the home with realization of a profit is better than the loss of the home through foreclosure. However, if the property is worth less than the mortgage, these events may trigger default. Prepayments to financial institutions may be triggered by other events—declining interest rates, which prompt refinancing; rising house prices, which prompt the take-out of accumulated equity; or the sale of the residence. Because FHA's mortgages are assumable, the sale of a residence does not automatically trigger prepayment. For example, if interest rates have risen substantially since the time the mortgage was originated, a new purchaser may prefer to assume the seller's mortgage. We hypothesized that foreclosure behavior is influenced by the level of unemployment, the price of the house, the value of the home, current interest rates, contract interest rates, home equity, and the region of the country within which the home is located. We hypothesized that prepayment is influenced by (1) a function of the difference between the interest rate specified in the mortgage contract and the mortgage rates generally prevailing in each subsequent year, (2) the amount of accumulated equity, (3) the price of the house, and (4) the region of the country in which the home is located.
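These hypotheses feed into the competing risks hazard framework described next. As a sketch only, in our own notation (the report does not give its exact functional form), the model can be thought of as a pair of hazards

    h_j(t \mid x) \;=\; \lambda_{0j}(t)\,\phi_j\!\left(x'\beta_j\right), \qquad j \in \{\text{foreclosure},\ \text{prepayment}\},

where t is the age of the loan, x collects the economic and loan characteristics just listed, \lambda_{0j}(t) is the baseline hazard for risk j, and \phi_j is the link through which the index x'\beta_j enters. The two hazards jointly determine the probability that a loan terminates in a given quarter and, if it terminates, whether it does so by claim or by prepayment.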
The estimated model also allows for the presence of unobserved heterogeneity, that is, the possibility that individual borrowers will refinance (or default) at different interest rate differentials (or levels of equity) for reasons not recorded in the data. Such reasons might include differences in financial sophistication, differences in moving plans, or differences in the value attached to a good credit rating. In models that do not allow for the presence of heterogeneity, the impact of time on termination probabilities will be overstated, since the loans most likely to terminate will terminate first. Additionally, estimating a heterogeneity distribution provides a method of capturing the effect of refinancing waves, such as those that occurred during 1986-87 and 1992-93, on the termination probabilities of the mortgages that remain. Our first set of coefficients estimates conditional mortgage foreclosure probabilities as a function of a variety of explanatory variables. Our second set of coefficients estimates conditional prepayment probabilities. The model estimated is a competing risks hazard model. The probability of prepaying or terminating with a loss to the Fund over the course of a quarter is jointly estimated as a function of time (the baseline hazard) multiplied by a linear function of the independent variables. The baseline hazards are estimated as a Box-Cox transformation of time measured in months; the Box-Cox transformation of a variable x is (x^λ - 1)/λ when λ is not zero, and ln(x) when λ = 0. CTM approximates the heterogeneity distribution with a set of points of support, and for each type of termination (claim or prepayment), CTM also estimates a coefficient by which those points are multiplied, referred to as the factor loading. In effect, CTM estimates a distribution of intercepts for each termination probability. This incorporates the assumption that mortgage borrowers differ in their probabilities of mortgage termination in unobservable ways. While the different probabilities are not attached to individual borrowers, the heterogeneity parameters produce an estimate of the proportions of borrowers with high or low termination propensities. The methodology is analogous to a random effects model for the analysis of panel data. The variables we used to predict foreclosures and prepayments fall into two general categories: descriptions of states of the economy and characteristics of the loan. In choosing explanatory variables, we relied on the results of our own and others' previous efforts to model foreclosure and prepayment probabilities and on implications drawn from economic principles. We allowed for many of the same variables to affect both foreclosure and prepayment. The single most important determinant of a loan's foreclosure is the borrower's equity in the property, which changes over time because (1) payments reduce the amount owed on the mortgage, (2) property values can increase or decrease, and (3) prevailing mortgage interest rates change, while the rate on a fixed-rate mortgage remains constant. Equity is a measure of the current value of a property compared with the current value of the mortgage on that property. Previous research strongly indicates that borrowers with small amounts of equity or even negative equity are more likely than other borrowers to default. We computed equity as the difference between the value of the property and the value of the mortgage, expressed as a percentage of the value of the property. For example, if the value of a property is $100,000 and the value of the mortgage is $80,000, then equity is 20 percent, or 0.2.
To measure equity for modeling the foreclosure behavior of fixed-rate mortgages, we calculated the value of the mortgage as the present value of the remaining mortgage payments (up to a maximum of 10 years), evaluated at the current quarter's fixed-rate mortgage interest rate, and added the book value of the mortgage at the end of 10 years, thus assuming a prepayment 10 years into the future. We calculated the value of the property by multiplying the value of the property at the time of the loan's origination by the change in the state's median nominal house price between the year of origination and the current year. Because the effects on claims of small changes in equity may differ depending on whether the level of equity is high or low, we used a pair of equity variables, LAGEQHI and LAGEQLOW, in our foreclosure regression. The effect of equity is lagged 1 year, since we are predicting the time of foreclosure, which usually occurs many months after a loan first defaults. We also included lagged equity in our prepayment regression. We anticipated that higher levels of equity would be associated with an increased likelihood of prepayment. Borrowers with substantial equity in their home may be interested in prepaying their existing mortgage and taking out a larger one to obtain cash for other purposes. Borrowers with little or no equity may be less likely to prepay because they may have to take money from other savings to pay off their loan and cover transaction costs. For the prepayment regression, we defined equity as book equity (LAGBKHI and LAGBKLOW). Book equity was defined as the estimated property value less the amortized balance of the loan. It is the book value that the borrower must pay to retire the debt. Additionally, the effect of interest rate changes on prepayment is captured by the relative interest variables, RELEQHI and RELEQLO. In addition to LAGEQHI and LAGEQLOW, we included another variable related to equity in our regressions: the initial down payment, DOWNPAY, calculated as 1 minus the LTV ratio. In some years, FHA measured the LTV ratio with the loan amount less the financed portion of the mortgage insurance premium in the numerator and the appraised value plus closing costs in the denominator. To reflect true economic LTV, we adjusted FHA's measure by removing closing costs from the denominator and including financed premiums in the numerator. DOWNPAY measures a borrower's initial equity, so we anticipate that it will be negatively related to the probability of foreclosure. One reason for including DOWNPAY is that it measures initial equity accurately. Our measures of current equity are less accurate because we do not have data on the rate of change for the price of each borrower's house. Another reason for including DOWNPAY and expecting it to have a negative sign in our foreclosure equation is that it may capture the effects of income constraints. We are unable to include borrowers' incomes or payment-to-income ratios directly because data on borrowers' incomes were not available for every year in the sample period. However, it seems likely that borrowers with little or no down payment are more likely to be financially stretched in meeting their payments and, therefore, more likely to default. The anticipated relationship between DOWNPAY and the probability of prepayment is uncertain. We used the natural logarithm of the annual unemployment rate for each state for the period from 1975 through 1994 to describe the condition of the economy in the state where a loan was made.
We used the natural logarithm of the annual unemployment rate for each state for the period from 1975 through 1994 to describe the condition of the economy in the state where a loan was made. We anticipated that foreclosures would be higher in years and states with higher unemployment rates and that prepayments would be lower because property sales slow down during recessions. The actual variable we used in our regressions, LAGUNEMP, is defined as the logarithm of the preceding year's unemployment rate in that state. We included the logarithm of the interest rate on the mortgage as an explanatory variable in the foreclosure equation. We expected a higher interest rate to be associated with a higher probability of foreclosure because a higher interest rate causes a higher monthly payment. In explaining the likelihood of prepayment, however, our model uses a function of the ratio of current mortgage rates to the contract rate on the borrower's mortgage. A borrower's incentive to prepay is high when the interest rate on a loan is greater than the rate at which money can now be borrowed, and it diminishes as current interest rates increase. To capture the relative attractiveness of prepaying, we estimated the market value of the mortgage by calculating the present value of the mortgage payments over the remaining term of the mortgage (up to 10 years) using the currently prevailing mortgage interest rate. This value was divided by the book value of the mortgage (the unpaid principal balance), and the resulting relative balance was used as an explanatory variable for prepayment. In our prepayment regression, we used the two relative interest rate variables defined above, RELEQHI and RELEQLO, so that the effect of changes in relative interest rates could differ over different ranges. RELEQHI is defined as the ratio of the market value of the mortgage to the book value of the mortgage but is never smaller than 1. RELEQLO is also defined as the ratio of the market value of the mortgage to the book value but is never larger than 1. Thus, RELEQHI captures a borrower's incentive to refinance, and RELEQLO captures a new buyer's incentive to assume the seller's mortgage. We also created two variables, REFIN and REFIN2, that measure how many quarters have passed in which the borrower did not take advantage of a refinancing opportunity. We defined a refinancing opportunity as having occurred if the interest rate on fixed-rate mortgages in any previous quarter in which a loan was active was at least 150 basis points below the contract rate on the mortgage. REFIN counts the number of quarters in which the loan has been active and a refinancing opportunity has not been seized, up to a maximum of eight quarters. REFIN2 counts the number of passed refinancing opportunities in excess of eight quarters, up to a maximum of eight more quarters. (The sketch below illustrates these constructions.)
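The sketch below, ours and reusing the hypothetical mortgage value from the previous sketch, builds RELEQHI, RELEQLO, and the REFIN counters from a loan's market-to-book ratio and its quarterly rate history.

```python
def releq_split(market_value, book_value):
    """RELEQHI is the market-to-book ratio floored at 1 (the borrower's
    refinancing incentive); RELEQLO is the ratio capped at 1 (a buyer's
    incentive to assume a below-market mortgage)."""
    ratio = market_value / book_value
    return max(ratio, 1.0), min(ratio, 1.0)

def refin_counters(contract_rate, past_quarter_rates, threshold=0.015):
    """Count active quarters in which a refinancing opportunity (market
    rate at least 150 basis points below the contract rate) went
    unseized: REFIN caps at 8 quarters, REFIN2 holds the next 8."""
    missed = sum(1 for r in past_quarter_rates if contract_rate - r >= threshold)
    return min(missed, 8), min(max(missed - 8, 0), 8)

# Hypothetical loan: 10 percent contract rate; market rates over 12 past
# quarters, nine of which were at least 150 basis points below it.
rates = [0.095, 0.090, 0.088, 0.085, 0.083, 0.082,
         0.080, 0.080, 0.079, 0.078, 0.080, 0.082]
print(releq_split(90_756.0, 80_000.0))   # -> about (1.13, 1.0)
print(refin_counters(0.10, rates))       # -> (8, 1)
```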
Several reasons might explain why borrowers passed up apparently profitable refinancing opportunities. For example, if they had been unemployed or their property had fallen in value, they might have had difficulty obtaining refinancing. This reasoning suggests that REFIN and REFIN2 would be positively related to the probability of foreclosure; that is, a borrower unable to obtain refinancing previously because of poor financial status might be more likely to default. Similar reasoning suggests a negative relationship between REFIN and REFIN2 and the probability of prepayment; a borrower unable to obtain refinancing previously might also be unlikely to obtain refinancing currently. A negative relationship might also exist if a borrower's passing up of one profitable refinancing opportunity reflected a lack of financial sophistication that, in turn, would be associated with passing up additional opportunities. However, a borrower who anticipated moving soon might pass up an apparently profitable refinancing opportunity to avoid the transaction costs associated with refinancing. In this case, a positive relationship with the probability of prepayment might exist if the borrower fulfilled the anticipation and moved, thereby prepaying the loan. Another explanatory variable is the volatility of interest rates, INTVOL, defined as the standard deviation of the monthly average of the Federal Home Loan Mortgage Corporation's series of 30-year fixed-rate mortgage effective interest rates, calculated over the previous 12 months. Financial theory predicts that borrowers are likely to refinance more slowly at times of volatile rates because there is a larger incentive to wait for a still-lower interest rate. We also included the slope of the yield curve, YIELDCUR, in our prepayment estimates, which we calculated as the difference between the 1-year and the 10-year Treasury rates of interest; we then subtracted 250 basis points from this difference and set differences that were less than zero to zero. This variable measured the relative attractiveness of adjustable-rate mortgages versus fixed-rate mortgages. When ARMs have low rates, borrowers with fixed-rate mortgages may be induced to refinance into ARMs to lower their monthly payments. For adjustable-rate mortgages, we did not use relative equity variables as we did with fixed-rate mortgages. Instead, we defined four variables, CHANGEPOS, CHANGENEG, CAPPEDPOS, and CAPPEDNEG, to capture the relationship between current interest rates and the interest rate paid on each mortgage. CHANGEPOS measures how far the interest rate on the mortgage has increased since origination, with a minimum of zero, while CHANGENEG measures how far the rate has decreased, with a maximum of zero. CAPPEDPOS measures how much farther the interest rate on the mortgage will rise if prevailing market interest rates do not change, while CAPPEDNEG measures how much farther the mortgage's rate will fall if prevailing interest rates do not change. For example, if an ARM is originated at 7 percent and interest rates have increased by 250 basis points 1 year later, CHANGEPOS will equal 100 because FHA's ARMs can increase by no more than 100 basis points in a year. CAPPEDPOS will equal 150 basis points, since the mortgage rate will eventually increase by another 150 basis points if market interest rates do not change, and CHANGENEG and CAPPEDNEG will equal zero. Because interest rates have generally trended downward since FHA introduced ARMs, there is very little experience with ARMs in an increasing interest rate environment.
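The following sketch, ours, reproduces the ARM rate-variable calculations in the example above; the 100-basis-point annual cap comes from the text, while the 500-basis-point lifetime cap is an assumption we supply for illustration.

```python
def arm_rate_vars(market_change_bps, years_elapsed, annual_cap=100, lifetime_cap=500):
    """The four ARM interest rate variables, in basis points.

    market_change_bps is the change in market rates since origination
    (positive or negative). The mortgage rate moves toward that change by
    at most annual_cap per year and by no more than lifetime_cap in total;
    the 100-bp annual cap follows the example in the text, and the 500-bp
    lifetime cap is our assumption for illustration.
    """
    target = max(min(market_change_bps, lifetime_cap), -lifetime_cap)
    moved = max(min(target, annual_cap * years_elapsed), -annual_cap * years_elapsed)
    remaining = target - moved
    return (max(moved, 0), min(moved, 0),          # CHANGEPOS, CHANGENEG
            max(remaining, 0), min(remaining, 0))  # CAPPEDPOS, CAPPEDNEG

# The example from the text: rates up 250 bps one year after origination.
print(arm_rate_vars(250, 1))   # -> (100, 0, 150, 0)
```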
We created four 0-1 variables to reflect the geographic distribution of FHA loans and included them in both regressions. Locational differences may capture the effects of differences in borrowers' income, rates of appreciation in house prices, underwriting standards by lenders, economic conditions not captured by the unemployment rate, or other factors that may affect foreclosure and prepayment rates. We assigned each loan to one of the four Bureau of the Census regions on the basis of the state in which the borrower resided. The West Region was the omitted category; that is, the regression coefficients show how each of the other regions differed from the West Region. We also created a variable, JUDICIAL, to indicate states that allowed judicial foreclosure procedures in place of nonjudicial foreclosures. To gain insight into the differential effect of relatively larger loans on mortgage foreclosures and prepayments, we used the logarithm of the initial house price as an explanatory variable. This variable was divided into three ranges—below $60,000, $60,000 to $120,000, and $120,000 and over—to allow the effect of house price to change over its range. The three ranges were called LOGPRICL, LOGPRICM, and LOGPRICH, respectively. All dollar amounts are inflation adjusted and represent 1994 dollars. Finally, to capture the time pattern of foreclosures and prepayments (given the effects of equity and the other explanatory variables), we defined two variables on the basis of the number of quarters that had passed since the year of the loan's origination. We refer to these variables as YEAR12 and YEAR34. YEAR12 counts the number of quarters since origination, up to the sixth quarter. YEAR34 counts the number of quarters since origination from the 7th to the 14th quarter. TIME measures the number of months elapsed since origination, and EXPONENT is the estimated value of a Box-Cox transformation of TIME. We created YEAR12 and YEAR34 to allow the passage of time to have much stronger impacts on termination probabilities in the early months of a mortgage's life. Table II.1 summarizes the variables we used to predict claims and prepayments, along with their corresponding means, for investor mortgages, both for purchase and for refinancing purposes; 30-year fixed-rate mortgages, both for purchase and for refinancing purposes; 15-year fixed-rate mortgages; and adjustable-rate mortgages. The variables are defined as follows:

Log of house price if the price is below $60,000 (LOGPRICL)
Log of house price if the price is $60,000 to below $120,000 (LOGPRICM)
Log of house price if the price is $120,000 and over (LOGPRICH)
Log of contract interest rate
The volatility of mortgage rates, defined as the standard deviation of 30-year fixed mortgage rates over the prior 12 months (INTVOL)
The slope of the yield curve, defined as the difference between 1-year and 10-year Treasury interest rates minus 250 basis points, but not less than zero (YIELDCUR)
The ratio of the market value of the mortgage to the book value if the market value is below the book value, else 1 (RELEQLO)
The ratio of the market value of the mortgage to the book value if the market value is above the book value, else 1 (RELEQHI)
Number of quarters that the prevailing mortgage interest rate had been at least 150 basis points below the contract rate and the borrower had not refinanced, up to eight quarters (REFIN)
Number of quarters that the above situation prevailed, beyond eight quarters, up to eight more quarters (REFIN2)
The logarithm of the previous year's unemployment rate in the state (LAGUNEMP)
Number of quarters since origination, up to six (YEAR12)
Number of quarters since the 6th, up to 14 (YEAR34)
The down payment, expressed as a percentage of the purchase price of the house; the values reported in FHA's database were adjusted to ensure that closing costs were included in the loan amount and excluded from the house price (DOWNPAY)
The value of equity, defined as 1 minus the ratio of the present value of the loan balance, evaluated at the current mortgage interest rate, to the current estimated house price, if equity is less than 20 percent, else 20 percent (LAGEQLOW)
The value of equity, defined as 1 minus the ratio of the present value of the loan balance, evaluated at the current mortgage interest rate, to the current estimated house price, minus 20 percent, but no less than zero (LAGEQHI)
The value of equity, defined as 1 minus the ratio of the amortized loan balance to the current estimated house price, if equity is less than 20 percent, else 20 percent (LAGBKLOW)
The value of equity, defined as 1 minus the ratio of the amortized loan balance to the current estimated house price, minus 20 percent, but no less than zero (LAGBKHI)
1, if the loan was in the East (Conn., Maine, Mass., N.H., N.J., N.Y., Pa., R.I., and Vt.), else zero
1, if the loan was in the South (Ala., Ark., D.C., Del., Ga., Ky., La., Md., Miss., N.C., Okla., S.C., Tenn., Tex., Va., and W.Va.), else zero
1, if the loan was in the Midwest (Ill., Ind., Iowa, Kans., Mich., Minn., Mo., Nebr., N.D., Ohio, S.D., and Wis.), else zero
1, if the state allowed judicial foreclosure (list of states varies by year), else zero (JUDICIAL)
N/A = not applicable.

As described above, we used competing risks hazard rate models to estimate loan foreclosures and prepayments as a function of a variety of predictor variables. We estimated separate regressions for 30-year fixed-rate mortgages, 15-year fixed-rate mortgages, investors' loans, and adjustable-rate mortgages originated (made) from fiscal year 1983 to fiscal year 1993. The 30-year fixed-rate mortgages and investors' mortgages were further divided into samples of purchase money loans and loans made for the purpose of refinancing. Although FHA was given authority to insure streamlined refinancing loans in 1983, FHA's database cannot reliably identify refinancing loans before 1991. Therefore, we placed any loan written after fiscal year 1982 with an LTV ratio of zero into the refinanced loan sample, along with loans that FHA's database identified as refinancing loans. We estimated quarterly termination probabilities through the end of the loan's life or the end of fiscal year 1994, whichever came first. Tables II.2 and II.3 present the estimated coefficients for all of the predictor variables for the foreclosure and prepayment equations. Table II.4 displays the estimated heterogeneity distributions for the regression results in the previous tables. ARM loan regression results are presented in table II.5. A heterogeneity distribution was not estimated for ARMs. All loan categories except for the refinanced investor loans were estimated with hundreds of thousands of observations, so most coefficients are significant at standard levels. In general, our results are consistent with the economic reasoning that underlies our models. Most importantly, the probability of foreclosure declines as current equity and down payment increase, and the probability of prepayment increases as the current mortgage interest rate falls below the contract mortgage interest rate. Both of these effects are very strong. As expected, the unemployment rate is positively related to the probability of foreclosure and negatively related to the probability of prepayment. Our results also indicate that the probability of foreclosure is higher when the contract rate of interest is higher. Mortgages on more-expensive houses have higher prepayment probabilities.
For purchase money mortgages, foreclosure probability declines with the price of a house, but for refinanced mortgages foreclosure probability rises with price. For 30-year fixed mortgages and for investor mortgages, passing up a profitable refinancing opportunity raises the probability of foreclosure. For all mortgages, passing up profitable refinancing opportunities lowers prepayment probabilities. The heterogeneity distributions presented in table II.4 indicate substantial differences in intercepts among different classifications of borrowers. For instance, among new 30-year fixed-rate borrowers, 62.9 percent are estimated to have a foreclosure intercept of 17.739; 24.8 percent (87.7 percent minus 62.9 percent) are estimated to have a foreclosure intercept of 17.202 (a location of 0.169 times a factor loading of -3.179, added to the intercept of 17.739); 5.9 percent are estimated to have a foreclosure intercept of 16.503; and 6.4 percent are estimated to have a foreclosure intercept of 14.56. This indicates that about 6.4 percent of borrowers have substantially lower termination probabilities than do most borrowers. To test the validity of our model, we examined how well it predicted actual patterns of FHA's claim and prepayment rates through fiscal year 1994. Using a sample of 10 percent of FHA's loans made from fiscal year 1975 through fiscal year 1994, we found that our predicted rates closely resembled actual rates. To predict the probabilities of claim payment and prepayment, we combined the model's coefficients with the information on a loan's characteristics and information on economic conditions described by our predictor variables in each quarter between a loan's origination and fiscal year 1994. For each loan-quarter, we predicted termination probabilities and compared them with random numbers drawn from a uniform distribution; if the termination probability was greater than the random number, the loan was assumed to terminate in that quarter. (The sketch below illustrates this simulation step.) If our model predicted a foreclosure or prepayment termination, we determined the loan's balance during that quarter to indicate the dollar amount associated with the foreclosure or prepayment. We estimated cumulative claim and prepayment rates by summing the predicted claim and prepayment dollar amounts for all loans originated in each of the fiscal years 1975 through 1994. We compared these predictions with the actual cumulative (through fiscal year 1994) claim and prepayment rates for the loans in our sample. Figure II.1 compares predicted and actual cumulative foreclosure rates, and figure II.2 compares predicted and actual cumulative prepayment rates.
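A minimal sketch of that simulation step, ours and with hypothetical hazards; how the single uniform draw is allocated between the two risks is our reading of the text:

```python
import random

def simulate_loan(quarters, prob_fn, seed=None):
    """Simulate one loan quarter by quarter. prob_fn(q) returns the
    predicted (claim, prepay) probabilities for quarter q; the loan
    terminates the first time a uniform draw falls below the predicted
    probability, with the draw allocated between the two risks."""
    rng = random.Random(seed)
    for q in range(1, quarters + 1):
        p_claim, p_prepay = prob_fn(q)
        u = rng.random()
        if u < p_claim:
            return "claim", q
        if u < p_claim + p_prepay:
            return "prepay", q
    return "active", None

# Hypothetical flat hazards: 0.4 percent claim and 2 percent prepayment
# probability per quarter, simulated over 10 years for 10,000 loans.
outcomes = [simulate_loan(40, lambda q: (0.004, 0.02), seed=i) for i in range(10_000)]
n_claim = sum(1 for kind, _ in outcomes if kind == "claim")
n_prepay = sum(1 for kind, _ in outcomes if kind == "prepay")
print(f"cumulative claim rate {n_claim / len(outcomes):.1%}, "
      f"prepay rate {n_prepay / len(outcomes):.1%}")
```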
We then forecast future loan activity (claims and prepayments) on the basis of the regression results described above and on DRI/McGraw-Hill's forecasts of the key economic and housing market variables. DRI/McGraw-Hill forecasts the median sales price of existing housing, by state and year, through fiscal year 1998. We subtracted 2 percentage points per year to adjust for improvements in the quality of housing over time and the depreciation of individual housing units. After fiscal year 1998, we assumed that prices would rise at 3 percent per year. For our base case, we made DRI/McGraw-Hill's forecasts of appreciation rates less optimistic by subtracting another 1 percentage point per year from the company's forecasts. DRI/McGraw-Hill also forecast each state's unemployment rate through fiscal year 2002. For our base case, we used DRI/McGraw-Hill's forecasts of each state's unemployment rate and assumed that rates from fiscal year 2003 on would equal the rate in fiscal year 2002. We also used DRI/McGraw-Hill's forecasts of interest rates on 30-year fixed-rate mortgages. The economic value of the Fund is defined in the Omnibus Budget Reconciliation Act of 1990 as the "current cash available to the Fund, plus the net present value of all future cash inflows and outflows expected to result from the outstanding mortgages in the Fund." Information on the capital resources of the Fund as of September 30, 1994, was obtained from the audited financial statements for fiscal year 1994. Capital resources were reported to be $10.8 billion. To estimate the net present value of future cash flows of the Fund, we constructed a cash flow model to measure the five primary sources and uses of cash for loans originated in fiscal years 1975 through 1994. The two sources of cash are income from mortgagees' premiums and net proceeds from the sale of foreclosed properties. The three uses of cash are payments associated with claims on foreclosed properties, refunds of premiums on mortgages that are prepaid, and administrative expenses for management of the program. In addition to estimating the economic value of the Fund as a whole, we also generated approximations of the economic value of the loans originated in the 2 most recent fiscal years. To conduct this analysis, it was necessary not only to project future cash flows but also to estimate the level of past cash flows. Our model was constructed to estimate cash flows for each policy year through the life of a mortgage. An important component of the model is its ability to convert all income and expense streams—regardless of the period in which they actually occur—into a 1994 present value. We applied discount rates to match as closely as possible the rate of return that FHA likely earned in the past or would earn in the future from its investment in U.S. Treasury securities. As an approximation of what FHA earned for each book of business, we used a rate of return comparable to the yield on 7-year U.S. Treasury securities prevailing when that book was written to discount all cash flows occurring in the first 7 years of that book's existence. We assumed that after 7 years, the Fund's investment was rolled over into new Treasury securities at the interest rate prevailing at that time and used that rate to discount cash flows to the rollover date. For rollover dates occurring in fiscal year 1994 and beyond, we used 7 percent as the new discount rate. As an example, cash flows associated with the fiscal year 1992 book of business and occurring from fiscal year 1992 through fiscal year 1998 (i.e., the first 7 policy years) were discounted at the 7-year Treasury rate prevailing in fiscal year 1992. Cash flows associated with the fiscal year 1992 book of business but occurring in fiscal year 1999 and beyond are discounted at a rate of 7 percent.
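The discounting rule can be made concrete with a short sketch, ours, in which the amounts and the 7.1 percent origination-year yield are hypothetical; only the 7 percent post-1994 rollover rate comes from the text.

```python
def pv_1994(cash_flows, book_year, treasury_7yr_at_origination, rollover_rate=0.07):
    """Present value, as of fiscal year 1994, of one book's cash flows.

    cash_flows maps fiscal year to amount. Flows in the book's first 7
    policy years are discounted at the 7-year Treasury yield prevailing
    when the book was written; later flows use the rollover rate (7
    percent for rollovers in fiscal year 1994 and beyond). Compounding
    each flow directly to 1994 is a simplification of the report's
    two-stage (discount-to-rollover-date) method.
    """
    pv = 0.0
    for year, amount in cash_flows.items():
        n = year - 1994   # negative n compounds pre-1994 flows forward
        rate = (treasury_7yr_at_origination
                if year < book_year + 7 else rollover_rate)
        pv += amount / (1.0 + rate) ** n
    return pv

# Fiscal year 1992 book: premium income early, claim outflows later
# (hypothetical amounts, in millions).
flows = {1992: 300.0, 1995: -40.0, 1999: -25.0, 2005: -10.0}
print(f"PV(1994) = {pv_1994(flows, 1992, 0.071):.1f}")
```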
Our methodology for estimating each of the five principal cash flows is described below. Because FHA's premium policy has changed over time, our calculations of premium income to the Fund change depending on the date of the mortgage's origination:

For fiscal years 1975 through 1983: Premium = annual outstanding principal balance x 0.5%.
For fiscal years 1984 through June 30, 1991: Premium = original loan amount x mortgage insurance premium, where the mortgage insurance premium during this period is equal to 3.8 percent for 30-year mortgages and 2.4 percent for 15-year mortgages. (For the purposes of this analysis, mortgages of other terms are grouped with those they most closely approximate.)

Effective July 1, 1991, legislation mandated that FHA add an annual premium of 0.5 percent of the outstanding principal balance to its up-front premiums. The number of years for which a borrower would be liable for making premium payments depended on the LTV ratio at the time of origination. (See table II.6.)

For the period July 1, 1991, through September 30, 1992: Premium = (original loan amount x 3.8%) + (annual outstanding principal balance x 0.5%).
For the period October 1, 1992, through December 31, 1992: Premium = (original loan amount x 3.0%) + (annual outstanding principal balance x 0.5%).
For the period January 1, 1993, through April 17, 1994: 30-year mortgages: Premium = (original loan amount x 3.0%) + (annual outstanding principal balance x 0.5%); 15-year mortgages: Premium = (original loan amount x 2.0%) + (annual outstanding principal balance x 0.25%).
For the period April 18, 1994, through September 30, 1994: 30-year mortgages: Premium = (original loan amount x 2.25%) + (annual outstanding principal balance x 0.5%); 15-year mortgages: Premium = (original loan amount x 2.00%) + (annual outstanding principal balance x 0.25%).

For 15-year mortgages, annual premiums are payable for 8, 4, or zero years depending on the LTV category of the mortgage at loan origination. Claims payments were calculated as: Claims payments = outstanding principal balance on foreclosed mortgages x acquisition cost ratio. We define the acquisition cost ratio as the total amount paid by FHA to settle a claim and acquire a property (i.e., FHA's "acquisition cost" as reported in its database) divided by the outstanding principal balance on the mortgage at the time of foreclosure. For the purpose of our analysis, we calculated an average acquisition cost ratio for each year's book of business using actual data for fiscal years 1975 through 1992 and applied that average to projected claims. Beginning in fiscal year 1993, FHA's A43 database no longer contained the information needed to calculate the acquisition cost ratio; therefore, we used the fiscal year 1992 ratio for fiscal years 1993 and 1994. (See tables II.7 and II.8.) Net proceeds from property sales were calculated as: Net proceeds = (5.9/12) x claims payments from previous period x (1 - loss ratio) + (6.1/12) x claims payments from current period x (1 - loss ratio). We assumed the lag time between the payment of a claim and the receipt of proceeds from the disposition of the property to be 5.9 months on the basis of the latest available information reported by Price Waterhouse in its fiscal year 1994 financial audit of FHA. We define the loss ratio as FHA's reported dollar loss after the disposition of property divided by the reported acquisition cost. For forecast periods, we applied a loss rate of 38 percent, which is the average loss reported by FHA's financial auditors for fiscal year 1994 and is comparable to the weighted average of losses for fiscal years 1975 through 1989.
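A compact sketch, ours, of the premium, claims, and net-proceeds formulas above; the premium schedule is condensed to three of the periods shown, and the amounts are hypothetical.

```python
def annual_premium(orig_date, orig_amount, outstanding_balance, term_years=30):
    """Premium income under a few of the regimes described above. ISO-style
    date strings compare lexicographically. This condenses the schedule:
    later periods, refunds, and the limited number of years annual
    premiums are payable are omitted for brevity."""
    if "1975" <= orig_date < "1984":
        return outstanding_balance * 0.005            # annual premium only
    if "1984" <= orig_date < "1991-07":
        return orig_amount * (0.038 if term_years == 30 else 0.024)  # one-time, up front
    if "1991-07" <= orig_date < "1992-10":
        return orig_amount * 0.038 + outstanding_balance * 0.005
    raise ValueError("period not covered in this sketch")

def claim_payment(balance_at_foreclosure, acquisition_cost_ratio):
    # Total paid to settle the claim and acquire the property.
    return balance_at_foreclosure * acquisition_cost_ratio

def net_proceeds(claims_prev, claims_curr, loss_ratio=0.38):
    """Sale proceeds arrive about 5.9 months after a claim is paid, so a
    year's proceeds blend the prior and current years' claims."""
    return ((5.9 / 12) * claims_prev + (6.1 / 12) * claims_curr) * (1 - loss_ratio)

print(annual_premium("1992-03", 100_000, 97_000))            # 4285.0
print(net_proceeds(claims_prev=50_000, claims_curr=60_000))  # about 34,152
```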
The amount of premium refunds paid by FHA's Fund depends on the policy year in which the mortgage is prepaid and the type of mortgage. For mortgages prepaid from October 1, 1983, to December 31, 1993, we used the refund rate schedule that FHA published in the April 1984 edition of Mortgage Banking. In 1993, FHA changed its refund policy, effective for mortgages prepaid on or after January 1, 1994. The refund rates that we used from the new schedule—which assume prepayment at mid-year—are found in table II.9.

For loans prepaying through December 31, 1993: Refunds = original loan amount x refund rate.
For loans prepaying on or after January 1, 1994: Refunds = up-front mortgage insurance premium x refund rate.

Administrative expenses were calculated as: Administrative expenses = outstanding principal balance x 0.1%. Our estimate of administrative expenses as 0.1 percent of outstanding principal balances was based on data in recent years' financial statements. We conducted additional analyses to determine the sensitivity of our forecasts to the values of certain key variables. Because we found that projected losses from foreclosures are sensitive to the rate of unemployment and the rate of appreciation of house prices, we adjusted the forecasts of unemployment and price appreciation to provide a range of economic value estimates under alternative economic scenarios. Our starting points for forecasts of the key economic variables were forecasts made by DRI/McGraw-Hill. We used DRI/McGraw-Hill's forecasts of house prices in each state, adjusted as described above, as the basis for our estimation of future equity. We subtracted 2 percentage points per year from DRI/McGraw-Hill's projected price increases to adjust for quality improvements over time. For our base case, we made DRI/McGraw-Hill's forecasts of appreciation rates less optimistic by subtracting 1 percentage point per year from its forecasts. For our high case, we added 2 percentage points per year to our base case; for our low case, we subtracted 2 percentage points from our base case. DRI/McGraw-Hill also forecast each state's unemployment rate through fiscal year 2002. For our high case and our base case, we used DRI/McGraw-Hill's forecasts of each state's unemployment rate and assumed that rates from fiscal year 2003 on would equal the rate in fiscal year 2002. For our low case, we added 1 percentage point to the forecasted unemployment rate for 1995 and beyond. Table II.10 summarizes the three economic scenarios. The rates of house price appreciation and unemployment are based on DRI/McGraw-Hill's forecasts; the numbers in the table are our weighted averages of DRI/McGraw-Hill's state-level forecasts, with each state's number weighted by the state's share of FHA's fiscal year 1993 business. To assess the impact of our assumptions about the loss and discount rates on the economic value of the Fund, we ran our cash flow model with alternative values for these variables. We found that for the economic scenario of our base case, a 1-percentage-point increase in the loss rate (from our assumption of 38 to 39 percent) resulted in a $201 million decline in our estimate of the economic value of the Fund. With respect to the discount rate, we found that for our base case economic scenario, a 1-percentage-point increase in the interest rate applied to most periods' future cash flows (from our assumption of 7 to 8 percent) resulted in a $90 million increase in our estimate of economic value.

Related GAO products:
Mortgage Financing: Financial Health of FHA's Home Mortgage Insurance Program Has Improved (GAO/RCED-95-20, Oct. 18, 1994).
Mortgage Financing: Financial Health of FHA's Home Mortgage Insurance Program Has Improved (GAO/T-RCED-94-255, June 30, 1994).
Homeownership: Actuarial Soundness of FHA's Single-Family Mortgage Insurance Program (GAO/T-RCED-93-64, July 27, 1993).
Homeownership: Loan Policy Changes Made to Strengthen FHA's Mortgage Insurance Program (GAO/RCED-91-61, Mar. 1, 1991).
Impact of FHA Loan Policy Changes on Financial Losses and Homebuyers (GAO/T-RCED-90-94, July 24, 1990).
Impact of FHA Loan Policy Changes on Financial Losses and Homebuyers (GAO/T-RCED-90-95, July 10, 1990).
Impact of FHA Loan Policy Changes on Its Cash Position (GAO/T-RCED-90-70, June 6, 1990).
Impact of FHA Loan Policy Changes (GAO/T-RCED-90-17, Nov. 16, 1989).
Pursuant to a congressional request, GAO reviewed the Federal Housing Administration's (FHA) Mutual Mortgage Insurance Fund, focusing on: (1) an estimate of the Fund's economic net worth as of the end of fiscal year 1994; (2) the results of the legislatively prescribed capital reserve ratio that expresses economic net worth as a percentage of insurance-in-force; and (3) a comparison between the GAO estimate of the Fund's economic net worth and an estimate prepared by an accounting firm. GAO found that: (1) in 1994, the Fund's economic net worth continued to improve; (2) as of September 30, 1994, the Fund had $305 billion in outstanding mortgage loans; (3) using moderate house price appreciation rates and unemployment rates, the Fund had an economic net worth of about $6.1 billion and a resulting capital ratio of 2.02 percent; (4) using low house price appreciation rates and high unemployment rates, the Fund had an economic net worth of $3 billion; (5) using high house price appreciation rates and low unemployment rates, the Fund had an economic net worth of $7.4 billion; (6) during fiscal year 1994, the Fund's capital ratio of 2.02 percent of the amortized insurance-in-force exceeded the November 2000 capital ratio goal of 2 percent; (7) the firm estimated that the Fund had an economic net worth of about $6.68 billion and a resulting capital ratio of 1.99 percent at the end of fiscal year 1994, which was about the same as GAO's estimate; and (8) the Fund's future economic net worth will depend on a number of economic factors, including the appreciation rates in housing prices and whether FHA is restructured.
Historically, IPP projects were placed in one of three categories—Thrust 1, Thrust 2, and Thrust 3. DOE now supports only Thrust 2 projects. Specifically: Thrust 1 projects were geared toward technology identification and verification and focused on "laboratory-to-laboratory" collaboration, or direct contact between DOE's national laboratories and weapons institutes and scientists in the former Soviet Union. These projects had no industry partner and, according to DOE, were entered into to engage former Soviet weapons scientists and their institutes quickly. DOE funded 447 Thrust 1 projects, 378 of which were completed; DOE no longer supports Thrust 1 projects. Thrust 2 projects involve a U.S. industry partner that agrees to share in the costs of the project with DOE to further develop potential technologies. The U.S. industry partner is expected to match the funds DOE provides, either by providing in-kind support, such as employee time and equipment, or by providing cash. Through October 2007, there were 479 IPP projects in the Thrust 2 category. Thrust 3 projects, with the exception of 1 project, did not receive any financial support from DOE and were intended to be self-sustaining business ventures. There were only three Thrust 3 projects, the last of which was completed in 2001; DOE no longer supports Thrust 3 projects. All proposed IPP projects are reviewed by DOE's national laboratories; the IPP program office; and other agencies, including Defense and State, before they are approved for funding. Initially, a national laboratory proposes a project for consideration. As the national laboratory prepares the proposal, the laboratory project manager, generally referred to as the "principal investigator," is responsible for including, among other things, a list of intended participants and for designating the WMD experience of each participant. The proposed participants are assigned to one of the following three categories: Category I—direct experience in WMD research, development, design, production, or testing; Category II—indirect WMD experience in the underlying technologies of potential use in WMD; or Category III—no WMD-relevant experience. If the IPP project is approved, DOE transfers funding to the project participants using payment mechanisms at CRDF, ISTC, or STCU. To be paid by any of these entities, the project participants must self-declare whether they possess weapons experience and indicate a more specific category of WMD expertise, such as basic knowledge of nuclear weapons design, construction, and characteristics. The weapons category classifications these scientists declare are certified first by the foreign institute's director and then by the foreign government ministry overseeing the institute. See appendix III for a more detailed list of the WMD categories used by DOE, CRDF, ISTC, and STCU. After the project passes an initial review within the proposing national laboratory, it is further analyzed by the ILAB and its technical committees, which then forward the project proposal to DOE headquarters for review. DOE, in turn, consults with State and other U.S. government agencies on policy, nonproliferation, and coordination considerations. The IPP program office at DOE headquarters is ultimately responsible for making final decisions, including funding decisions, on all projects.
DOE has not accurately portrayed the IPP program’s progress, according to our analysis of two key measures used to assess the program’s performance—the number of WMD scientists receiving DOE support and the number of long-term, private sector jobs created. Many of the scientists in Russia and other countries that DOE has paid through its IPP program did not claim to have WMD experience. Furthermore, DOE’s process for substantiating the weapons backgrounds of IPP project participants has several weaknesses, including limited information about the backgrounds of scientists proposed for an IPP project. In addition, DOE has overstated the rate at which weapons scientists have been employed in long-term, private sector jobs because it does not independently verify the data it receives on the number of jobs created, relies on estimates of job creation, and includes in its count a large number of part-time jobs that were created. Finally, DOE has not revised the IPP program’s performance metrics, which are currently based on a 1991 assessment of the threat posed by former Soviet weapons scientists. A major goal of the IPP program is to engage former Soviet weapons scientists, engineers, and technicians, and DOE claims to have supplemented the incomes of over 16,770 of these individuals since the program’s inception. However, this number is misleading because DOE officials told us that this figure includes both personnel with WMD experience and those without any WMD experience. We reviewed the payment records of 97 IPP projects, for which information was available and complete, and found that 54 percent, or 3,472, of the 6,453 participants in these projects did not claim to possess any WMD experience in the declarations they made concerning their backgrounds. Moreover, project participants who did not claim any WMD experience received 40 percent, or approximately $10.1 million, of the $25.1 million paid to personnel on these projects. For example, in 1 project to develop a high-power accelerator that was funded for $1 million, 88 percent, or 66, of the 75 participants who have received payments did not claim any previous weapons-related experience. On a project-by-project basis, we also found that DOE is not complying with a requirement of its own guidance for the IPP program—that is, each IPP project must have a minimum of 60 percent of the project’s participants possessing WMD-relevant experience prior to 1991 (i.e., Soviet-era WMD experience). According to our analysis of the payment records of 97 projects for which information was available and complete, we found that 60 percent, or 58, of the 97 projects did not meet this requirement. A factor contributing to this outcome may be a poor understanding of the IPP program guidance among the ILAB representatives of the 12 national laboratories participating in the program. During our interviews with national laboratory officials, we heard a range of opinions on the appropriate minimum percentage of WMD scientists on individual IPP projects. For example, ILAB representatives from 5 national laboratories indicated that they strive for a minimum of 50 percent of WMD scientists on each IPP project; the ILAB representative from the Pacific Northwest National Laboratory indicated a goal of 55 percent. The ILAB representative from the National Renewable Energy Laboratory indicated that he was not aware of any DOE policy establishing a minimum percentage of participants with WMD backgrounds on an IPP project. 
Finally, many of the IPP project participants whom DOE supports are too young to have contributed to the Soviet Union's WMD programs. Officials at 10 of the 22 Russian and Ukrainian institutes we interviewed said that IPP program funds have allowed their institutes to recruit, hire, and retain younger scientists. We found that 15 percent, or 972, of the 6,453 participants in the payment records of the 97 projects we reviewed were born in 1970 or later and, therefore, were unlikely to have contributed to Soviet-era WMD efforts. This group of younger participants received approximately 14 percent, or about $3.6 million, of the $25.1 million paid to project participants in the 97 projects we reviewed. While DOE guidance for the IPP program does not specifically prohibit participation of younger scientists in IPP projects, DOE has not clearly stated the proliferation risk posed by younger scientists or the extent to which they should be a focus of the IPP program. The absence of a clear policy on this matter has contributed to confusion and a lack of consensus among national laboratory officials involved in the program about the extent to which younger scientists, rather than older, more experienced WMD experts, should be involved in IPP projects. For example, the ILAB representative at the Argonne National Laboratory told us that it would be appropriate to question the participation of personnel born in the mid-1960s or later, since they most likely lacked weapons-related experience. A representative at the Los Alamos National Laboratory who has been involved with the IPP program for over a decade said that the program should engage "second-generation" scientists born in 1980 or later because doing so can help create opportunities for "third- and fourth-generation" scientists at facilities in Russia and other countries in the future. Senior officials at the Lawrence Livermore National Laboratory told us that scientists in Russia and other countries, regardless of their age or actual experience in weapons-related programs, should be included in IPP projects because weapons expertise can be passed from one generation to the next. In 1999, we recommended that, to the extent possible, DOE obtain more accurate data on the number and backgrounds of scientists participating in IPP program projects. DOE told us that it has made improvements in this area, including developing a classification system for WMD experts, hiring a full-time employee responsible for reviewing the WMD experience and backgrounds of IPP project participants, and conducting annual project reviews. DOE relies heavily on the statements of WMD experience that IPP project participants declare when they submit paperwork to receive payment for work on IPP projects. However, we found that DOE lacks an adequate and well-documented process for evaluating, verifying, and monitoring the number and WMD experience level of individuals participating in IPP projects. According to DOE officials, all IPP projects are scrutinized carefully and subjected to at least 8, and in some cases 10, stages of review to assess and validate the WMD experience of the project participants. Responsibility for verifying the WMD experience and backgrounds of IPP project participants rests not only with DOE but also with the national laboratories, other federal agencies, and the entities responsible for transmitting funding to the scientists in Russia and other countries (CRDF, ISTC, or STCU).
However, the ultimate responsibility for this assessment rests with DOE's IPP program office. Table 1 provides an overview of the different stages involved in DOE's assessment of IPP project participants' WMD backgrounds. In reviewing project documentation and in our discussions with officials responsible for conducting these reviews, we found limitations throughout this multistage assessment process. Specifically: DOE has limited information with which to verify the WMD experience of personnel proposed for IPP projects because government officials in Russia and other countries are reluctant to provide information about their countries' scientists. For example, ISTC officials told us that the Russian government refuses to provide résumés for scientists involved in projects funded by the Science Centers program, including IPP projects that use the ISTC payment process, while CRDF officials indicated that both the Russian and Ukrainian governments have shown increasing resistance to the policy requiring scientists to declare their WMD-related experience. Three national laboratory officials stated that it is illegal under Russian law to ask project participants about their backgrounds and that they instead make judgments about the WMD experience of project participants on the basis of their personal knowledge and anecdotal information. Some IPP project proposals may advance from the national laboratories for consideration by DOE with insufficient vetting or understanding of all personnel who are to be engaged on the project. Contrary to the process DOE laid out for the review of the WMD scientists' backgrounds, senior representatives at five national laboratories told us that they and their project managers do not have sufficient time or the means to verify the credentials of the proposed project participants. Furthermore, they believe that DOE is primarily responsible for substantiating the weapons experience of the individuals who are to be engaged in the projects. DOE does not have a well-documented process for verifying the WMD experience of IPP project participants, and, as a result, it is unclear whether DOE has a reliable sense of the proliferation risk these individuals pose. DOE's review of the WMD credentials of proposed project participants relies heavily on the determinations of the IPP program office. We examined the proposal review files that the program maintains, and we were unable to find adequate documentation to substantiate the depth or effectiveness of the program office's review of the WMD experience of proposed IPP project participants. DOE officials noted that they do not usually check the weapons backgrounds of every individual listed in an IPP project proposal, but only the key project scientists and a few of the personnel working with them. Specifically, in none of the IPP project files that we reviewed did we find formal, written documentation analyzing and substantiating the WMD backgrounds and proliferation risks of the personnel to be engaged in those IPP projects. Each of these files did, however, contain a comprehensive formal assessment by DOE's Office of International Regimes and Agreements analyzing export control issues and compliance with U.S. nonproliferation laws. Officials at the three organizations DOE uses to make tax-free payments for IPP projects—CRDF, ISTC, and STCU—also downplayed their organizations' ability to validate the backgrounds of the scientists participating in IPP projects.
CRDF officials stated that their organization has not independently validated any of the weapons backgrounds of the participating scientists, and they do not consider that a responsibility under CRDF's contract with DOE. Similarly, ISTC officials told us that their organization cannot verify the backgrounds of scientists in projects funded by the Science Centers program, including IPP projects that use the ISTC payment process, and instead relies on the foreign institute's certification of the project participants. Finally, STCU relies on the validation provided by the foreign institute's director and verifies this information in annual project reviews, during which a sample of project participants are interviewed to confirm their WMD experience. Because months or longer can elapse between the development of an IPP project proposal and the project's implementation, the list of personnel who are actually paid on a project can differ substantially from the proposed list of scientists. For several IPP projects we reviewed, we did not find documentation in DOE's project files indicating that the department was notified of the change of staff or had assessed the WMD backgrounds of the new project participants. For example, 1 IPP project—to discover new bioactive compounds in Russia and explore their commercial application—originally proposed 27 personnel and was funded at $1 million. However, 152 personnel were eventually paid under this project, and we did not find an updated list of the project personnel or any indication of a subsequent review of the additional personnel by DOE in the IPP project files. In another project, to develop straw-fired boilers in Ukraine, funded at $936,100, DOE reviewed the backgrounds of 18 personnel who were part of the project proposal. However, CRDF payment records indicated that 24 personnel were subsequently paid on the project, only 5 of whom were listed in the original proposal DOE had reviewed and approved. As a result, it is unclear whether DOE conducts sufficient oversight of changes in the number or composition of the workforce involved in IPP projects. For its part, CRDF informed us that when an institute requests a change in project staff and that change is approved by the participating national laboratory, CRDF does not report these changes to DOE but relies on the national laboratory to notify relevant DOE officials. The limited information DOE obtains about IPP project participants and the weaknesses in DOE's review of the backgrounds of these individuals leave the IPP program vulnerable to potential misallocation of funds. In our review, we found several examples that call into question DOE's ability to adequately evaluate IPP project participants' backgrounds before projects are approved and funded. For example: A National Renewable Energy Laboratory official told us he was confident that a Russian institute involved in a $250,000 IPP project he oversaw to monitor microorganisms under environmental stress was supporting Soviet-era biological weapons scientists. However, during our visit to the institute in July 2007, the Russian project leader told us that neither he nor his institute had ever been involved in biological weapons research. As a result of this meeting, DOE canceled this project on July 31, 2007. DOE's cancellation letter stated that the information provided during our visit led to this action.
It further stated, “it is well documented in statute and in the General Program Guidance that our projects must engage Russians, and others, with relevant weapons of mass destruction or strategic delivery means backgrounds. Violation of this requirement is an extremely serious matter.” In November 2006, DOE canceled a project in Ukraine intended to develop a new type of fuel combustion system, 18 months after approving the project and after spending about $76,000. DOE canceled this project when it discovered an inadequate number of personnel with WMD backgrounds involved in the project and after a Defense Contract Audit Agency (DCAA) audit revealed other irregularities, including a conflict of interest between the primary Ukrainian institute and the U.S. partner company. During the interagency review of the project proposal, State officials questioned the primary Ukrainian institute’s involvement in WMD. However, in our review of DOE’s project files, we did not find evidence that these concerns triggered a more-intensive evaluation of this institute by DOE prior to the project’s approval. A 2005 DCAA audit found that 90 percent of the participants on an IPP project administered by the Pacific Northwest National Laboratory lacked WMD experience. This project, which was designed to develop improved biological contamination detectors, was funded at $492,739. Officials at the national laboratory insisted that DCAA “was just plain wrong.” DOE and national laboratory officials asserted that the project participants were under instruction not to discuss their weapons involvement and, on the basis of their personal knowledge of the Russian project leader and the institute, they believed the project participants constituted a proliferation risk. However, according to the payment records we reviewed, the Russian project leader and other scientists involved in the project were not prevented from declaring their WMD backgrounds to CRDF. Such conflicting accounts, the absence of clear information, and the judgments made by IPP program officials in assessing the proliferation risks posed by IPP project participants underscore the difficulties the program faces and the possibility that the program is funding personnel who do not constitute a proliferation risk. Although a senior DOE official described commercialization as the “flagship” of the IPP program, we found that the program’s commercialization achievements have been overstated and are misleading, further eroding the perceived nonproliferation benefits of the program. In the most recent annual report for the IPP program available at the time of our review, DOE indicated that 50 projects had evolved to support 32 commercially successful activities. DOE reported that these 32 commercial successes had helped create or support 2,790 new private sector jobs for former weapon scientists in Russia and other countries. In reviewing these projects, we identified several factors that raise concerns over the validity of the IPP program’s reported commercial success and the numbers of scientists employed in private sector jobs. For example: The annual survey instrument that USIC distributes to collect information on job creation and other commercial successes of IPP projects relies on “good-faith” responses from U.S. industry partners and foreign institutes, which are not audited by DOE or USIC. In 9 of the 32 cases, we found that DOE based its job creation claims on estimates or other assumptions. For example, an official from a large U.S. 
company told us that the number of jobs it reported to have helped create was his own rough estimate. He told us he derived the job total by estimating the amount of money that the company was spending at Russian and Ukrainian institutes and dividing that total by the average salary for Russian engineers in the company's Moscow office. Because of conflicting information and accounts, we could not substantiate many of the jobs reported to have been created in our interviews with the U.S. companies and officials at the Russian and Ukrainian institutes where these commercial activities were reportedly developed. For example, officials from 1 U.S. company we interviewed claimed that 250 jobs had been created at 2 institutes in Russia on the basis of 2 separate IPP projects. However, during our visit to the Scientific Research Institute of Measuring Systems to discuss one of these projects, we were told that the project is still under way, manufacturing of the product has not started, and none of the scientists have been reemployed in commercial production of the technology. Similarly, during our site visit, officials at the Institute of Nuclear Research of the Russian Academy of Sciences could not confirm the creation of 350 jobs they had reported as a result of several IPP projects relating to the production of radioisotopes. They indicated that no more than 160 personnel were employed at their institute in commercial activities stemming from those IPP projects, that most of these jobs were only part time, and that they could not account for jobs that may have been created at other institutes previously involved in the projects. DOE's definition of a commercial success reads: "A product, process, or service is generating revenue from sales or other economic value added in the [former Soviet Union] or the U.S., based on an IPP project (either completed or ongoing); and/or there is a private contractual relationship between the U.S. industry partner and the institute covering research and development work to be done by the institute for the U.S. industry partner growing out of an IPP project." The lack of consensus among DOE and national laboratory officials involved in the IPP program on a common commercialization definition has created confusion and disagreement about which IPP projects should be considered commercially successful. For example, DOE counted as a commercial success one IPP project administered by the Pacific Northwest National Laboratory to facilitate biodegradation of oil spills. However, the national laboratory officials responsible for this project disagreed with DOE's characterization, in part because the project has not generated any commercial revenues. Furthermore, DOE's broad-based definition of commercialization has allowed it to overstate its commercialization accomplishments by counting part-time jobs and revenues derived from grants or contract research. Specifically: DOE counts part-time private sector jobs created, even if the scientists employed in these part-time jobs also continue to work at the former Soviet weapons institute. DOE policy does not require scientists employed in a private sector activity resulting from an IPP project to sever their relationship with their institute. In fact, in our review of the 2,790 jobs created, we found that 898, or nearly one third, of these jobs were part-time jobs, meaning that the scientists in some cases may still be affiliated with the institutes and involved in weapons-applicable research.
The sources of revenue for some commercially successful IPP projects also call into question the long-term sustainability of some of the jobs created. DOE reported that $22.1 million in total revenue was generated by the foreign institutes or their spin-off companies as a result of commercial activities stemming from IPP projects. Of this total, approximately $4.5 million, or 20 percent, consisted of grants (including grants from the Russian government); contract research; and other sources of income that appear to be of limited duration, that are not based on commercial sales, and that may not offer a sustainable long-term source of revenue. For example, DOE reported that 510 jobs were created at the Kurchatov Institute and other Russian institutes as the result of an IPP project to develop thorium-based fuels for use in nuclear reactors. However, we found that over 400 of those jobs were supported by a separate DOE contract to evaluate the use of thorium fuels for plutonium disposition. The Russian project participants told us that over 500 workers were supported while receiving funding from the 2 DOE sources, but the project is now completed, it has not been commercialized, and there are no more than 12 personnel currently involved in efforts related to the project. The IPP program’s long-term performance targets do not accurately reflect the size and nature of the threat the program is intended to address because DOE is basing the program’s performance measures on outdated information. DOE has established 2 long-term performance targets for the IPP program—to engage 17,000 weapons scientists annually by 2015 in either IPP grants or in private sector jobs resulting from IPP projects, and to create private sector jobs for 11,000 weapons scientists by 2019. However, DOE bases these targets on a 16-year-old, 1991 National Academy of Sciences (NAS) assessment that had estimated approximately 60,000 at-risk WMD experts in Russia and other countries in the former Soviet Union. DOE derived 17,000 scientists as its share of the total target population by subtracting from the NAS estimate the number of WMD scientists engaged by other U.S. government and international WMD scientist assistance programs (such as State’s Science Centers program) and making assumptions about attrition rates in the former Soviet WMD workforce. DOE officials acknowledged that the 1991 NAS study does not provide an accurate assessment of the current threat posed by WMD scientists in Russia and other countries. A 2005 DOE-commissioned study by the RAND Corporation estimated that the population of unemployed or underemployed weapons scientists in Russia and other former Soviet states had decreased significantly. The RAND study provided rough revised estimates of the number of WMD scientists in the former Soviet Union, and DOE acknowledged in 2006 that the target population of WMD experts in the former Soviet Union had dropped from the 1991 NAS estimate of 60,000 to approximately 35,000 individuals. However, DOE has not formally updated its performance metrics for the IPP program and, in its fiscal year 2008 budget justification, continued to base its long-term program targets on the 1991 NAS estimate. 
Moreover, DOE’s current metrics for the IPP program are not complete or meaningful indicators of the proliferation risk posed by weapons scientists in Russia and other countries and, therefore, do not provide sufficient information to the Congress on the program’s progress in reducing the threat posed by former Soviet WMD scientists. The total number of scientists supported by IPP grants or employed in private sector jobs conveys a level of program accomplishment, but these figures are broad measures that do not describe progress in redirecting WMD expertise within specific countries or at institutes of highest proliferation concern. DOE has recognized this weakness in the IPP program metrics and recently initiated the program’s first systematic analysis to understand the scope of the proliferation risk at individual institutes in the former Soviet Union. DOE believes that setting priorities for providing support to foreign institutes is necessary because (1) the economies in Russia and the other countries of the former Soviet Union have improved since the program’s inception, (2) former “at-risk” institutes are now solvent, and (3) the threat of mass migration of former Soviet weapons scientists has subsided. However, DOE believes that a concern remains over the “targeted recruitment” of scientists and former WMD personnel. DOE officials briefed us on their efforts in September 2007, but told us that the analysis is still under way, and that it would not be completed until 2008. As a result, we were unable to evaluate the results of DOE’s assessment. Russian government officials, representatives of Russian and Ukrainian institutes, and individuals at U.S. companies raised questions about the continuing need for the IPP program, particularly in Russia, whose economy has improved in recent years. However, DOE has yet to develop criteria for phasing-out the IPP program in Russia and other countries of the former Soviet Union. Meanwhile, DOE is departing from the program’s traditional focus on Russia and other former Soviet states to engage scientists in new countries, such as Iraq and Libya, and to fund projects that support a DOE-led initiative on nuclear energy, called the Global Nuclear Energy Partnership (GNEP). Officials from the Russian government, representatives of Russian and Ukrainian institutes, and individuals at U.S. companies who have been long-time program participants raised questions about the continuing need for the IPP program, given economic improvements in Russia and other countries of the former Soviet Union. Specifically: A senior Russian Atomic Energy Agency official told us in July 2007 that the IPP program is no longer relevant because Russia’s economy is strong and its scientists no longer pose a proliferation risk. Additionally, in September 2006, the Deputy Head of the Russian Atomic Energy Agency stated that Russia is no longer in need of U.S. assistance, and that it is easier and more convenient for Russia to pay for its own domestic nuclear security projects. Officials from 10 of the 22 Russian and Ukrainian institutes we interviewed told us that they do not see themselves or scientists at their institutes as a proliferation risk. Russian and Ukrainian officials at 14 of the 22 institutes we visited told us that salaries are regularly being paid, funding from the government and other sources has increased, and there is little danger of scientists migrating to countries of concern. 
However, many of these officials said that they are concerned about scientists emigrating to the United States and Western Europe, and that IPP program funds help them to retain key personnel. Furthermore, many of these officials noted that the program was particularly helpful during the difficult financial period in the late 1990s. Representatives of 5 of the 14 U.S. companies we interviewed told us that, due to Russia's increased economic prosperity, the IPP program is no longer relevant as a nonproliferation program in that country. Some of these company officials believe that the program should be reassessed to determine if it is still needed. In economic terms, Russia has advanced significantly since the IPP program was created in 1994. Measures of Russia's economic strength include the following:

- massive gold and currency reserves, including more than $113 billion in a stabilization fund;
- a dramatic decrease in foreign debt, from about 96 percent of Russia's gross domestic product in 1999 to about 5 percent in April 2007; and
- rapid growth in gross domestic product, averaging about 6 percent per year from 1998 to 2006.

In addition, the president of Russia recently pledged to invest substantial government resources in key industry sectors, including nuclear energy, nanotechnology, and aerospace technologies and aircraft production. Many of the Russian institutes involved in the IPP program could benefit substantially under these planned economic development initiatives, undercutting the need for future IPP program support. In fact, officials at many of the Russian institutes with whom we spoke told us that they hope to receive increased government funding from these new presidential initiatives. In another sign of economic improvement, many of the institutes we visited in Russia and Ukraine appeared to be in better physical condition and more financially stable, especially when compared with their condition during our previous review of the IPP program. In particular, at one institute in Russia, where during our 1998 visit we observed deteriorated infrastructure and facilities, we toured a newly refurbished building that featured state-of-the-art equipment. Russian officials told us that the overall financial condition of the institute has improved markedly because of increased funding from the government as well as funds from DOE. In addition, one institute we visited in Ukraine had recently undergone a $500,000 renovation, complete with a marble foyer and a collection of fine art. Furthermore, we found that many institutes we visited have been able to develop commercial relationships with Russian, U.S., and other international companies on their own, outside of the IPP framework, leading to increased revenues and commercial opportunities. For example, officials at one Russian institute met with us immediately following their successful negotiation of a new contract for research and development activities with a large international energy company. However, DOE officials noted that the economic recovery throughout Russia has been uneven, and that DOE believes many facilities remain vulnerable. Even so, DOE officials told us that their intent is to reorient the IPP program from assistance to cooperation, especially in Russia, given the recent improvements in that country's economy. DOE has not developed an exit strategy for the IPP program, and it is unclear when the department expects that the program will have completed its mission.
DOE officials told us in September 2007 that they do not believe that the program needs an exit strategy at this time. However, DOE officials acknowledged that the IPP program's long-term goal of finding employment for 17,000 WMD scientists in Russia and other countries does not represent an exit strategy. DOE has not developed criteria to determine when scientists, institutes, or countries should be "graduated" from the IPP program, and DOE officials believe that there is a continued need to engage Russian scientists. In contrast, State has already assessed participating institutes and developed a strategy, using a range of factors such as an institute's ability to pay salaries regularly and to attract funding from other sources, to graduate certain institutes from its Science Centers program. State and DOE officials told us that the Science Centers and IPP programs are complementary and well coordinated. However, we found that the programs appear to have different approaches regarding continued U.S. government support at certain institutes. Specifically, DOE is currently supporting 35 IPP projects at 17 Russian and Ukrainian institutes that State considers to have already graduated from its Science Centers program and, therefore, to be no longer in need of U.S. assistance. For example, according to State documents, beginning in fiscal year 2003, State considered the Kurchatov Institute to have graduated from its Science Centers program and, according to the Deputy Executive Director of ISTC, the institute is financially well-off and no longer needs U.S. assistance. However, we found that since fiscal year 2003, DOE has funded 6 new IPP projects at the Kurchatov Institute and a related spin-off company. DOE officials acknowledged that coordination between State and DOE's scientist assistance programs could be improved. Part of State's exit strategy involves enhancing commercial opportunities at some institutes through the Commercialization Support Program. This program, which began in October 2005, is administered by ISTC with funding from the United States through State's Science Centers program. Through it, State aims to facilitate and strengthen long-term commercial self-sustainability at institutes in Russia and other countries by providing training and equipment to help them bring commercially viable technologies to market. According to ISTC officials, 17 commercialization initiatives at institutes in Russia have been supported through the program, 2 of which were completed as of July 2007. DOE, State, and ISTC officials told us the IPP program and the Commercialization Support Program share a similar goal of finding commercial opportunities for weapons scientists in Russia and other countries of the former Soviet Union. According to ISTC officials, a key difference between the programs is that the Commercialization Support Program can support infrastructure upgrades at foreign institutes but, unlike the IPP program, does not support research and development activities. DOE and State officials insisted that the programs are complementary, but acknowledged that they need to be better coordinated. DOE recently expanded its scientist assistance efforts on two fronts: DOE began providing assistance to scientists in Iraq and Libya, and the IPP program is working with DOE's Office of Nuclear Energy to develop IPP projects that support GNEP, a DOE-led international effort to expand the use of civilian nuclear power.
These new directions represent a significant departure from the IPP program's traditional focus on the former Soviet Union. According to a senior DOE official, the expansion of the program's scope was undertaken as a way to maintain its relevance as a nonproliferation program. DOE has expanded the IPP program's efforts into these new areas without a clear mandate from the Congress and has suspended parts of its IPP program guidance for implementing projects in these new areas. Specifically:

- Although DOE briefed the Congress on its plans, DOE officials told us that they began efforts in Iraq and Libya without explicit congressional authorization to expand the program outside of the former Soviet Union. In contrast, other U.S. nonproliferation programs, such as Defense's Cooperative Threat Reduction program, sought and received explicit congressional authorization before expanding their activities to countries outside of the former Soviet Union. DOE officials told us they plan to ask the Congress to include such language in future legislation.
- In Libya, DOE is deviating from IPP program guidance and its standard practice of limiting the amount of IPP program funds spent at DOE's national laboratories for project oversight to not more than 35 percent of total expenditures.
- Regarding efforts to support GNEP, DOE has suspended the part of the IPP program's guidance that requires a U.S. industry partner's participation, which is intended to ensure IPP projects' commercial potential.

Since 2004, DOE has been working to identify, contact, and find employment for Iraqi scientists in peaceful joint research and development projects. DOE's efforts were undertaken at the request of State, which has overall responsibility for coordinating nonproliferation activities and scientist assistance efforts in Iraq. DOE and State coordinate their activities through regular meetings and correspondence, participation in weekly teleconferences, interagency proposal review meetings, and coordination on strategic planning and upcoming events. Through May 2007, DOE had spent about $2.7 million to support its activities in Iraq. DOE has approved 29 projects, the majority of which are administered by Sandia National Laboratories. These include projects on radon exposure, radionuclides in the Baghdad watershed, and the development of salt-tolerant wheat strains. However, owing to the uncertain security situation in Iraq, DOE and national laboratory officials told us that these are short-term projects. Sandia National Laboratory officials acknowledged that most of the projects DOE is funding in Iraq have no commercialization potential. Similarly, DOE expanded its efforts to Libya at the request of State. DOE spent about $934,000 through May 2007 to support 5 projects in Libya, including projects involving water purification and desalination. However, DOE is deviating from its IPP program guidance and standard practices by placing no restrictions on the amount of IPP program funds that can be spent at DOE national laboratories for oversight of these projects.
DOE limits spending at the national laboratories for IPP projects in all other countries to comply with section 3136(a)(1) of the National Defense Authorization Act for Fiscal Year 2000, which states the following: "Not more than 35 percent of funds available in any fiscal year after fiscal year 1999 for the IPP program may be obligated or expended by the DOE national laboratories to carry out or provide oversight of any activities under that program." DOE officials acknowledged that more than 35 percent of IPP program funds for projects in Libya have been and will continue to be spent at the national laboratories. We found that through May 2007, DOE spent about $910,000 (97 percent) at the national laboratories, while spending about $24,000 (3 percent) in Libya. In a written response to us on September 7, 2007, DOE noted that the IPP program "will continue to operate in Libya on this basis [i.e., spending more than 35 percent of funds at the DOE national laboratories], while working with our legislative office to eliminate any perceived ambiguities." DOE informed us on October 24, 2007, that these efforts are currently under way. DOE officials estimate that about 200 scientists in Libya have WMD knowledge and pose a proliferation risk. However, in contrast with its activities in Russia and other countries, DOE's focus in Libya is not on engaging individual weapons scientists but rather on converting former WMD manufacturing facilities, because, according to DOE, the Libyan government has made clear that it will continue to pay the salaries of its former WMD scientists and engineers. In collaboration with State, DOE is working to help scientists at Tajura, formerly the home of Libya's nuclear research center, set up and transition to research in seawater desalination and analytical water chemistry. DOE and State coordinate on strategic planning for and implementation of scientist engagement efforts in Libya. According to State, coordination mechanisms include regular e-mail correspondence, weekly interagency and laboratory teleconferences, and quarterly meetings. DOE officials told us they plan to complete their efforts in Libya by 2009. In fiscal year 2007, DOE also expanded the efforts of the IPP program to provide support for GNEP, a DOE-led international effort to expand the use of civilian nuclear power. In October 2006, a senior DOE official told us that the department planned to use IPP projects to support GNEP as a way to maintain the program's relevance as a nonproliferation program. On December 13, 2006, the IPP program office brought together national laboratory experts to propose new IPP projects that could support GNEP. Currently, six active or approved IPP projects are intended to support GNEP. According to IPP program officials, DOE's Office of Nuclear Energy and Office of Science will be providing some funding to three of these projects. According to DOE officials, because these funds will come from other DOE offices and programs, they would not be subject to congressionally mandated limitations on the percentage of IPP program funds that can be spent at DOE national laboratories. As a result, DOE officials told us they plan to use funding provided by the Office of Nuclear Energy and the Office of Science to increase the amount spent at DOE national laboratories for technical review and oversight of GNEP-related IPP projects. DOE has suspended some key IPP program guidelines, such as the requirement for a U.S. industry partner, for IPP projects intended to support GNEP.
DOE officials told us that most GNEP-related IPP projects do not have immediate commercial potential, but could attract industry in the future. Furthermore, they said that GNEP-related IPP projects are essentially collaborative research and development efforts between Russian institutes and DOE national laboratories. DOE has yet to develop separate written guidance for GNEP-related IPP projects, but told us it is planning to do so. As a result, national laboratory officials we interviewed told us that implementing procedures for GNEP-related IPP projects has been piecemeal and informal, which has created some confusion about how these projects will be managed and funded. In every fiscal year since 1998, DOE has carried over unspent funds in excess of the amount that the Congress provided for the IPP program, primarily because of DOE and its contractors’ lengthy and multilayered review and approval processes for paying former Soviet weapons scientists for IPP-related work and long delays in implementing some IPP projects. DOE and national laboratory officials told us they are attempting to improve financial oversight over the IPP program, in part, to address concerns about unspent program funds. To that end, DOE is developing a new program management system, which it expects to fully implement in 2008—14 years after the start of the program. Since fiscal year 1994, DOE has spent about $309 million to implement the IPP program, but has annually carried over large balances of unspent program funds. DOE officials have recognized that unspent funds are a persistent and continuing problem with the IPP program. Specifically, in every fiscal year after 1998, DOE has carried over unspent funds in excess of the amount that the Congress provided for the program the following year. For example, as of September 2007, DOE had carried over about $30 million in unspent funds—$2 million more than the $28 million that the Congress had appropriated for the IPP program in fiscal year 2007. In fact, as figure 1 shows, for 3 fiscal years—2003 through 2005—the amount of unspent funds was more than double the amount that the Congress appropriated for the program in those fiscal years, although the total amount of unspent funds has been declining since its peak in 2003. Two main factors have contributed to DOE’s large and persistent carryover of unspent funds: the lengthy and multilayered review and approval processes DOE uses to pay IPP project participants for their work, and long delays in implementing some IPP projects. DOE identified three distinct payment processes that it uses to transfer funds to individual scientists’ bank accounts in Russia and other countries—ISTC/STCU, CRDF subcontract, and CRDF master contract. These three processes involve up to seven internal DOE offices and external organizations that play a variety of roles, including reviewing project deliverables, approving funds, and processing invoices. DOE officials told us that these processes were originally introduced to ensure the program’s fiscal integrity, but they agreed that it was time to streamline these procedures. Regarding the first payment process, as figure 2 illustrates, before payment reaches project participants’ bank accounts, it passes from DOE headquarters (which includes the IPP program office and NNSA’s Budget Office), through DOE’s Energy Finance and Accounting Service Center, which records the obligation of funds. 
DOE then transfers funding to the Oak Ridge Financial Service Center, which pays the invoice by transferring funds to ISTC or STCU. The funds arrive at ISTC or STCU, which disburses them in quarterly payments to IPP project participants upon receipt of project invoices, quarterly technical reports, and documentation from the participating former Soviet Union institutes that deliverables were sent to the national laboratories. However, DOE and national laboratory officials told us that this payment process has limitations. Specifically, these officials told us that if there is a problem with a deliverable, it is usually too late for DOE or the participating national laboratory to request that ISTC or STCU stop the payment to the project participants for the current quarter. The other two processes that DOE uses to make payments to IPP project participants involve CRDF. In most cases, DOE administers the CRDF payment process through a subcontract with the participating national laboratory. In some rare cases, DOE contracts directly with foreign institutes through a CRDF "master contract." For projects that use CRDF to process payments, the entire amount of project funding is first transferred to the participating national laboratory, where it is placed in two separate accounts. The first account consists of no more than 30 percent of project funding, for oversight costs incurred by the national laboratory. The second account holds all funding for the foreign project participants, which is at least 70 percent of project funding. As figure 3 illustrates, before IPP project participants receive payment from CRDF, invoices and approvals of deliverables from the national laboratories, as well as CRDF forms, are sent to DOE headquarters for approval. DOE headquarters reviews the invoices against the contract and, if the amounts match, approves them and sends documentation to the DOE Procurement Office. DOE headquarters also notifies the participating national laboratory of its approval, and the laboratory sends the funds listed on the invoices to DOE's Energy Finance and Accounting Service Center. The DOE Procurement Office approves payment on project invoices and notifies CRDF and DOE's Energy Finance and Accounting Service Center that payments should be made. Funds are then transferred from the Energy Finance and Accounting Service Center to the Oak Ridge Financial Service Center and then to CRDF. Once CRDF has received the funds and the necessary approvals from DOE, it makes payments to the project participants' bank accounts. DOE officials acknowledged the magnitude of the problem that the lag between allocating funds, placing contracts, and paying for deliverables creates for the IPP program, and they told us they are taking steps to streamline their payment processes. In addition, Russian and Ukrainian scientists at 9 of the 22 institutes we interviewed told us that they experienced payment delays ranging from 3 months to 1 year.
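The hops described above can be summarized schematically. The following sketch is a simplified model rather than a depiction of DOE's actual systems: the stage names paraphrase the report's description of the CRDF subcontract process, and the per-stage handling times are hypothetical placeholders included only to show how sequential approvals can compound into months-long delays. The two-account split reflects the 30/70 division described above.

```python
# Simplified model of the CRDF subcontract payment flow described in the
# text. Stage names paraphrase the report; handling times are assumed
# placeholders, not measured processing times.

from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    days: int  # assumed handling time at this hop

CRDF_FLOW = [
    Stage("National lab approves deliverables, sends invoice to DOE HQ", 14),
    Stage("DOE HQ reviews invoice against contract, approves", 21),
    Stage("DOE Procurement Office approves payment, notifies CRDF", 14),
    Stage("Energy Finance and Accounting Service Center transfers funds", 7),
    Stage("Oak Ridge Financial Service Center transfers funds to CRDF", 7),
    Stage("CRDF pays project participants' bank accounts", 14),
]

def split_project_funds(total: float, oversight_share: float = 0.30):
    """Apply the two-account split: at most 30% for lab oversight,
    at least 70% for the foreign project participants."""
    oversight = total * oversight_share
    return oversight, total - oversight

oversight, participants = split_project_funds(1_000_000)
print(f"Lab oversight account: ${oversight:,.0f}; "
      f"participant account: ${participants:,.0f}")
print(f"End-to-end payment path: ~{sum(s.days for s in CRDF_FLOW)} days "
      f"across {len(CRDF_FLOW)} hops")
```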
Among the 207 projects we reviewed, we found several examples of payment delays. For example:

- In one project on the development and testing of a device to detect hidden explosives, the Lawrence Livermore National Laboratory official who heads the project told us that the U.S. industry partner had to pay Russian scientists' salaries until IPP funding could be released. Lawrence Livermore officials involved in this project noted that delays in payments to project participants slowed the project's completion.
- Officials at another Russian institute told us about two projects that experienced payment delays. On the project to develop nuclear material container security devices, they had shipped a deliverable to Sandia National Laboratories in October 2006, but it took more than 4 months for them to receive payment. On the project to produce a new computer modeling code for use in Russian nuclear reactor simulators, Russian institute officials told us payments were delayed 3 to 4 months. Officials said that when they asked Brookhaven National Laboratory officials about the delay, they were told it was due to DOE's complex payment processing systems.

Delays in implementing some IPP projects also contribute to DOE's large and persistent carryover of unspent funds. According to officials from U.S. industry partners, national laboratories, and Russian and Ukrainian institutes, some IPP projects experience long implementation delays. As a result, project funds often remain as unspent balances until problems can be resolved. For example, the ILAB representative from the Argonne National Laboratory told us that, in his experience, IPP projects do not finish on schedule about 60 percent of the time, owing to a variety of problems, including administrative issues, the withdrawal or bankruptcy of the U.S. industry partner, and turnover in key project participants. In our review of 207 IPP projects, we found several examples of projects that had experienced implementation delays. For example:

- One project to produce a low-cost artificial leg for use in developing countries had $245,000 in unspent funds as of April 2007, or 19 percent of the $1.3 million DOE allocated for the project. Because a testing device needed for the project was not properly labeled when it was sent from the United States, the Russian Customs Service rejected the device. Sandia National Laboratory officials told us that this rejection had delayed project implementation for nearly 1 year.
- About 3 years into a project to create banks of chemical compounds linked with computer databases for industrial use, the project's U.S. industry partner was bought out by a larger company. The amount allocated for the project was nearly $1.4 million. The larger company lost interest in the project, and, according to the DOE project manager, the project sat idle for 3 or 4 years while DOE tried to get the company to take action. Ultimately, the project was finished 8 years after it began.
- Officials at one Russian institute we visited told us that another IPP project, to improve a material to help neutralize radioactive waste, had experienced delays when the original U.S. industry partner went bankrupt, causing the project to be temporarily suspended. According to these officials, it took 2 years to find a new U.S. industry partner.
- Brookhaven National Laboratory officials described a delay of more than 6 months on a $740,000 project intended to develop new pattern recognition software. According to Brookhaven officials, these delays were caused by significant personnel turnover at the participating Russian institute, mostly through the loss of key personnel who found better, higher paying jobs outside of the institute.

DOE is implementing a new system designed to better manage IPP projects' contracts and finances. DOE officials told us that this action was undertaken in response to a recommendation we made in 2005 to improve the management and internal controls at NNSA.
Specifically, we recommended in our August 2005 report, among other things, that NNSA's program managers maintain quick access to key contract records, such as deliverables and invoices that relate to management controls, regardless of whether the records are located at a national laboratory or headquarters. Following our 2005 report, in 2006, DOE initiated an extensive review of IPP financial and procurement procedures at participating national laboratories. DOE and national laboratory officials told us that representatives from the IPP program office visited all of the participating national laboratories, except for the Kansas City Plant, and worked with each laboratory's financial department to find ways to reduce unspent funds. DOE officials told us that, as a result, they were able to redirect about $15 million in unspent program funds for immediate use on existing IPP projects. In addition, DOE officials said that they have imposed new management controls to address project delays and reduce balances of unspent funds. These controls include implementing a management reengineering plan and enforcing control mechanisms, called "sunset" provisions, which require national laboratory officials to justify continuing any IPP project that experiences an implementation delay of 6 to 8 months. DOE has also begun to implement its new Expertise Accountability Tool (EXACT), a project and information management system that it launched in October 2006. DOE expects to fully implement the EXACT system in 2008, 14 years after the start of the IPP program. According to DOE officials, EXACT will allow instant sharing of IPP project data between DOE and the participating national laboratories. DOE officials believe that the EXACT system will allow the IPP program office to better monitor and oversee the progress of IPP projects at the national laboratories, including reviewing IPP project participants' WMD backgrounds and tracking unspent funds at the national laboratories. In our view, the purpose and need for the IPP program must be reassessed. We believe that DOE has failed to clearly articulate the current threat posed by WMD scientists in Russia and other countries and has not adjusted the IPP program to account for the changed economic landscape in the region and improved conditions at many of the institutes involved in the program. Instead, DOE has continued to emphasize a broad strategy of engagement with foreign scientists and institutes, much as it did more than a decade ago, and it has not developed comprehensive plans for focusing on the most at-risk individuals and institutes or for developing an end-game for the program. We believe that DOE's inability to establish a clear exit strategy for the IPP program has contributed to a perception among foreign recipients that the program is essentially open-ended, represents an indefinite commitment of U.S. support, and serves as a useful marketing tool to attract and retain young scientists who might otherwise emigrate to the United States or other western countries. We believe that it is time for DOE to reassess the program to explain to the Congress how the program should continue to operate in the future or to discuss whether the program should continue to operate at all. Without a reassessment of the program's objectives, metrics, priorities, and exit strategy, the Congress cannot adequately determine at what level and for how long the program should continue to be supported.
We believe that such a reassessment presents DOE with an opportunity to refocus the program on the most critical remaining tasks, with an eye toward reducing the program's scope, budget, and number of participating organizations. Beyond reassessing the continuing need for the IPP program, a number of management problems are negatively affecting the program. Specifically:

- The fact that DOE has paid many scientists who claimed no WMD expertise is particularly troubling and, in our view, undermines the IPP program's credibility as a nonproliferation program. The lack of documentation of DOE's review of IPP project participants also raises concerns.
- DOE does not have reliable data on the commercialization results of IPP projects or a clear definition of what constitutes a commercially successful IPP project, preventing it from providing the Congress with a more accurate assessment of the program's results and purported benefits.
- Regarding its efforts to expand the IPP program, DOE's projects in Iraq and Libya represent a significant departure from the program's original focus on the countries of the former Soviet Union. While there may be sound national security reasons for expanding efforts to these countries, we are concerned that, unlike other federal agencies, DOE did not receive explicit authorization from the Congress before expanding its program outside of the former Soviet Union. Furthermore, in its efforts in Libya, DOE is not adhering to its own guidance restricting the percentage of IPP program funds that can be spent at DOE's national laboratories on oversight activities.
- The lack of clear, written guidance for IPP projects intended to support GNEP has led to confusion among national laboratory officials who implement the IPP program.
- Regarding the financial state of the IPP program, DOE's long-standing problem with large balances of unspent program funds raises serious concerns about DOE's ability to spend program resources in a timely manner and about the method DOE uses to develop requests for future budgets. Reform of the complex payment system used by the IPP program to pay foreign scientists could help address some of these concerns.
- Because Russian scientists and institutes benefit from the IPP program, it seems appropriate that DOE should seek to take advantage of Russia's improved economic condition to ensure a greater commitment to jointly held nonproliferation objectives.
- The absence of a joint plan between DOE's IPP program and ISTC's Commercialization Support Program, which is funded by State, raises questions about the lack of coordination between these two U.S. government programs that share similar goals of finding peaceful commercial opportunities for foreign WMD scientists.

We recommend that the Secretary of Energy, working with the Administrator of the National Nuclear Security Administration, reassess the IPP program to justify to the Congress the continued need for the program. Such a reassessment should, at a minimum, include a thorough analysis of the proliferation risk posed by weapons scientists in Russia and other countries; a well-defined strategy to more effectively target the scientists and institutes of highest proliferation concern; more accurate reporting of program accomplishments; and a clear exit strategy for the IPP program, including specific criteria to determine when specific countries, institutes, and individuals are ready to graduate from participation in the IPP program.
This reassessment should be done in concert with, and include input from, other federal agencies, such as State; the U.S. intelligence community; officials in host governments where IPP projects are being implemented; the U.S. business community; and independent U.S. nongovernmental organizations. If DOE determines that the program is still needed, despite the increased economic prosperity in Russia and in light of the general trend toward cost-sharing in U.S. nonproliferation programs in that country, we recommend that the Secretary of Energy, working with the Administrator of the National Nuclear Security Administration, seek a commitment for cost-sharing from the Russian government for future IPP projects at Russian institutes. To address a number of management issues that need to be resolved so that the IPP program operates more effectively, we recommend that the Secretary of Energy, working with the Administrator of the National Nuclear Security Administration, immediately take the following eight actions:

- establish a more rigorous, objective, and well-documented process for verifying the WMD backgrounds and experiences of participating foreign scientists;
- develop more reliable data on the commercialization results of IPP projects, such as the number of jobs created;
- amend IPP program guidance to include a clear definition of what constitutes a commercially successful IPP project;
- seek explicit congressional authorization to expand IPP efforts outside of the former Soviet Union;
- for IPP efforts in Libya, ensure compliance with the statutory restriction limiting the percentage of IPP program funds spent on oversight activities at the DOE national laboratories to no more than 35 percent;
- develop clear and specific guidance for IPP projects that are intended to support GNEP;
- streamline the process through which foreign scientists receive IPP funds by eliminating unnecessary layers of review; and
- seek to reduce the large balances of unspent funds associated with the IPP program and adjust future budget requests accordingly.

Finally, we recommend that the Secretaries of Energy and State, working with the Administrator of the National Nuclear Security Administration, develop a joint plan to better coordinate the efforts of DOE's IPP program and ISTC's Commercialization Support Program, which is funded by State. DOE and State provided written comments on a draft of this report, which are presented in appendixes V and VI, respectively. DOE agreed with 8 of our 11 recommendations to improve the overall management and oversight of the IPP program, including augmenting the department's process for reviewing the WMD backgrounds of IPP project participants and developing more reliable data on the commercialization results of IPP projects. DOE disagreed with 2 of our recommendations and neither agreed nor disagreed with 1 recommendation. In addition, State concurred with our recommendation to improve coordination between DOE's IPP program and ISTC's Commercialization Support Program, which is funded by State. DOE and State also provided technical comments, which we incorporated in this report as appropriate. In its comments on our draft report, DOE raised concerns about our characterization of the IPP program's accomplishments, requirements, and goals.
DOE stated that we did not acknowledge actions the department was undertaking during the course of our review and asserted that our report does not provide a balanced critique of the IPP program because we relied on an analysis of a judgmental sample of IPP projects to support our findings. DOE also disagreed with our general conclusion and recommendation that the IPP program needs to be reassessed. In addition, DOE did not concur with our recommendation that the department ensure compliance with the statutory restriction limiting the percentage of IPP program funds spent on oversight activities at the DOE national laboratories to no more than 35 percent. DOE neither agreed nor disagreed with our recommendation that the department seek a commitment for cost-sharing from the Russian government for future IPP projects at Russian institutes. DOE is incorrect in its assertions that we failed to acknowledge actions it was undertaking during the course of our review, and that our report does not provide a balanced critique of the IPP program. Our report acknowledges actions DOE is taking to improve program management, such as the development of a new program and financial management system. Our review identified numerous problems and raised concerns about the IPP program's scope, implementation, and performance that we believe should be addressed by DOE as part of a reassessment of the IPP program. However, DOE disagreed with our recommendation that the IPP program needs to undergo such a reassessment and noted in its comments that the department believes it has already conducted such an assessment of the program. We were aware that such broad internal reviews took place in 2004 and 2006, but these assessments were conducted not of the IPP program exclusively, but rather of all DOE efforts to assist weapons scientists, including a complementary DOE program to assist workers in Russia's nuclear cities that has since been canceled. As a result, we believe these assessments are outdated because the IPP program operates under a significantly different set of circumstances today than when DOE conducted its previous internal assessments. Finally, DOE disagreed with our recommendation that the department ensure compliance with the statutory restriction limiting the percentage of IPP program funds spent on oversight activities at the DOE national laboratories to no more than 35 percent. We believe DOE has misconstrued our recommendation concerning its funding of projects in Libya. We did not recommend, nor did we mean to imply, that DOE should allocate 65 percent of total project funds to Libya for projects in that country. Instead, our recommendation urges the department to ensure that it complies with existing statutory restrictions on the percentage of IPP funds that can be spent on oversight activities by DOE national laboratories. Specifically, as DOE notes, section 3136 of the National Defense Authorization Act for Fiscal Year 2000 provides that not more than 35 percent of funds available in any fiscal year for the IPP program may be spent by DOE national laboratories to provide oversight of program activities. DOE's IPP guidance and its standard practice have been to implement this provision of law on a project-by-project basis, so that no more than 35 percent of the funds for each project are spent by national laboratories.
However, with respect to projects in Libya, DOE is deviating from its IPP guidance by placing no restrictions on the amount of IPP program funds that can be spent at DOE national laboratories for oversight of those projects. We found that 97 percent of the funds DOE spent on projects in Libya through May 2007 were spent at DOE's national laboratories for project management and oversight. (See app. V for DOE's comments and our responses.) As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested congressional committees; the Secretaries of Energy and State; the Administrator, National Nuclear Security Administration; and the Director, Office of Management and Budget. We will also make copies available to others upon request. In addition, this report will be made available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or aloisee@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are included in appendix VII. To review the Initiatives for Proliferation Prevention (IPP) program, we interviewed key officials and analyzed documentation, such as program guidance, project proposals, and financial information from the Departments of Energy (DOE), Defense, and State (State). We also interviewed representatives from each of the 12 facilities—the Argonne, Brookhaven, Idaho, Lawrence Berkeley, Lawrence Livermore, Los Alamos, National Renewable Energy, Oak Ridge, Pacific Northwest, Sandia, and Savannah River National Laboratories and the Kansas City Plant—that participate in the IPP program. Our interviews focused on general program plans, strategies, and policies as well as issues associated with specific IPP projects. We also interviewed and reviewed documentation provided by the U.S. Civilian Research and Development Foundation (CRDF) in Arlington, Virginia; the International Science and Technology Center (ISTC) in Moscow, Russia; and the Science and Technology Center in Ukraine (STCU) in Kyiv, Ukraine. We analyzed cost and budgetary information from DOE, DOE's national laboratories, CRDF, ISTC, and STCU. Furthermore, we interviewed knowledgeable officials on the reliability of these data, including issues such as data entry, access, quality control procedures, and the accuracy and completeness of the data. We determined that these data were sufficiently reliable for the purposes of this review. We visited Russia and Ukraine to discuss the implementation of the IPP program with officials and personnel involved in IPP projects. While in Russia and Ukraine, we interviewed officials from 15 Russian and 7 Ukrainian institutes that participate in the IPP program. We met with officials from the Federal Agency for Atomic Energy of the Russian Federation, which oversees institutes involved in Russia's nuclear weapons program. We also spoke with officials from the U.S. embassies in Moscow and Kyiv. Furthermore, we interviewed officials from 14 U.S. companies that participate in the IPP program to better understand their perspectives on the program's goals, benefits, and challenges, and the results of specific projects for which they have served as industry partners. We interviewed the principal staff of the U.S.
Industry Coalition, which represents companies that participate in the IPP program. We also met with 5 nongovernmental experts who have followed developments in the IPP and related nonproliferation programs to get their views on the program. To assess the reported accomplishments of the IPP program, we judgmentally selected for in-depth review 207 IPP projects, including draft, active, inactive, and completed projects, in the Thrust 1, Thrust 2, and Thrust 3 categories. These 207 projects represented over 22 percent of the 929 total IPP projects through September 2007. Of the projects that we reviewed, 180 were with Russia, 21 were with Ukraine, 3 were with Kazakhstan, and 3 were with Armenia. Because these projects were a judgmental sample, the findings associated with them cannot be applied generally to the IPP program as a whole. We used the IPP information system to identify and select IPP projects. This database, also referred to by DOE as the “Lotus Notes” system, was developed and maintained by the Los Alamos National Laboratory and is considered the program’s project proposal management system. The system contains data on all IPP projects, from draft proposals to completed projects, and includes such information as the project description, statement of work, information on participating scientists in the former Soviet Union and the U.S. industry partner, and financial expenditures. DOE notified us that it was developing a new IPP project management database, known as the Expertise Accountability Tool (EXACT), and that some IPP project information contained in Lotus Notes—especially pertaining to project expenditures and the number of scientists supported—might not be current, accurate, or complete. However, DOE officials told us that the EXACT system was not available during our project selection phase, and that it would not contain information on completed IPP projects. As a result, we used the Lotus Notes database to make our project selection. We selected projects on the basis of a number of criteria, such as project status, project funding, the type of institute involved in the project, geographic distribution, national laboratory representation, and the claimed commercial success of the project. We also received and used recommendations from DOE on criteria to consider in selecting projects for review. The status and dollar size of IPP projects were significant considerations in our project selection. For example, we focused primarily on active projects—that is, Thrust 2 projects that were approved, funded, or under way—regardless of their dollar value. We also considered draft and inactive Thrust 2 projects where proposed funding was over $800,000, as well as completed Thrust 1 and Thrust 2 projects that spent over $250,000. We also selected projects for review across a variety of institutes in the former Soviet Union, including facilities with backgrounds in nuclear, chemical, biological, and missile research and development. The foreign countries and institutes where we planned to conduct fieldwork also played a significant role in our project selection. Time and cost constraints, as well as Russian government restrictions on access to some facilities, limited the number and types of sites we were able to visit. We concentrated on projects at institutes in Russia and Ukraine because over 90 percent of all IPP projects are in these two countries. 
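The dollar-value screens described above can be expressed as a simple filter, sketched below. This is an illustration only: the field names and status values are assumptions, and the sketch omits the judgmental criteria (institute type, geographic distribution, national laboratory representation, and claimed commercial success) that also shaped the sample.

```python
# Illustrative filter for the dollar-value screens described in the text.
# Field names and status values are assumed, not taken from DOE's system.

def meets_dollar_screens(project: dict) -> bool:
    thrust, status = project["thrust"], project["status"]
    if thrust == 2 and status in ("approved", "funded", "under way"):
        return True                                 # active Thrust 2: any value
    if thrust == 2 and status in ("draft", "inactive"):
        return project["proposed_funding"] > 800_000
    if thrust in (1, 2) and status == "completed":
        return project["spent"] > 250_000
    return False

# Example: a completed Thrust 1 project that spent $300,000 passes the screen.
print(meets_dollar_screens({"thrust": 1, "status": "completed", "spent": 300_000}))
```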
We focused on IPP projects at institutes in the Russian cities of Moscow, Nizhny Novgorod, and Sarov because these cities ranked high in our analysis of several variables, including the total number of IPP projects, the number of projects supporting commercial activities, and the total amount of funding proposed in IPP projects in those locations. We also focused on projects in the Ukrainian cities of Kyiv, because over 54 percent of IPP projects in Ukraine are there, and Kharkiv, because of its relative proximity to Kyiv and the number of projects there. We selected institutes in the Russian and Ukrainian cities for site visits on the basis of several criteria, including the total number of projects, the number of active projects, the type of institute, and the number of projects commercialized at each location. We also selected projects administered by each of the national laboratories and the Kansas City Plant that participate in the program, as well as projects managed by DOE headquarters. The selected projects included 18 projects at Argonne, 22 at Brookhaven, 8 at Idaho, 18 at Lawrence Berkeley, 33 at Lawrence Livermore, 14 at Los Alamos, 11 at National Renewable Energy, 12 at Oak Ridge, 41 at Pacific Northwest, 15 at Sandia, and 2 at Savannah River; 9 projects at the Kansas City Plant; as well as 4 projects managed by DOE headquarters. The commercial success of an IPP project also played an important role in its selection. For example, we selected for review all 50 projects that DOE identified as having led to commercially successful ventures in its Fiscal Year 2005 IPP Program Annual Report. We were able to review 48 of the 50 commercially successful projects with the sponsoring national laboratory, Russian or Ukrainian institute, or industry partner, or some combination of these three entities. We also reviewed 11 IPP projects that had been identified as commercially successful in prior-year annual reports but that were not addressed in the fiscal year 2005 report. To assess the nonproliferation impact of the IPP program, we requested and evaluated available information on the personnel at institutes in the countries of the former Soviet Union participating in the projects we selected for review. To determine the percentage of personnel without weapons of mass destruction (WMD) experience, we added up the total number of project personnel who did not claim prior WMD experience, based on the WMD experience codes the project personnel self-declared to one of the three IPP payment systems, and divided this figure by the total number of project participants. We followed a similar process to calculate the percentage of older personnel versus younger personnel. We classified workers born in 1970 or later as younger workers because they were unlikely to have contributed to Soviet-era WMD programs. We also calculated the total amount of funds paid to these four different categories of participants: those claiming WMD experience, those who did not, older workers, and younger participants. In some cases, birth dates were not available for some participants in the documentation we received; in those instances, those individuals and the payments made to them were tracked in separate categories.
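As a concrete restatement of the computation just described, the following sketch tallies participants and payments by category. It assumes a simple record layout; the field names are ours, not those of the laboratory templates or payment-system records.

```python
# Sketch of the participant-background tallies described above. The
# categories mirror the report: participants who did not claim WMD
# experience, younger workers (born in 1970 or later), older workers,
# unknown birth dates (tracked separately), and payments per category.

from collections import defaultdict

def summarize_participants(participants: list[dict]) -> dict:
    counts: dict = defaultdict(int)
    payments: dict = defaultdict(float)
    for p in participants:
        wmd_cat = "claimed_wmd" if p["claimed_wmd_experience"] else "no_wmd"
        birth_year = p.get("birth_year")
        if birth_year is None:
            age_cat = "unknown_birth_date"   # tracked separately, per the text
        elif birth_year >= 1970:
            age_cat = "younger"
        else:
            age_cat = "older"
        for cat in (wmd_cat, age_cat):
            counts[cat] += 1
            payments[cat] += p["amount_paid"]
    total = len(participants)
    return {
        "pct_no_wmd": 100.0 * counts["no_wmd"] / total,
        "pct_younger": 100.0 * counts["younger"] / total,
        "counts": dict(counts),
        "payments": dict(payments),
    }

# Example with two hypothetical participants:
sample = [
    {"claimed_wmd_experience": False, "birth_year": 1975, "amount_paid": 4_000.0},
    {"claimed_wmd_experience": True, "birth_year": 1950, "amount_paid": 9_500.0},
]
print(summarize_participants(sample)["pct_no_wmd"])  # 50.0
```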
We collected this information by providing officials at each of the 12 participating national laboratories with a template, requesting that the laboratory project leader provide information on the personnel involved in each project in our sample, including each participant's full name, institute affiliation, date of birth, WMD experience, and amount paid under the project. In instances where we did not receive complete information from the laboratories, we used payment records and other information on IPP project participants maintained by the three payment mechanisms—CRDF, ISTC, and STCU—to complete data missing from the templates, or to reconstruct payment records for the project participants in cases where the national laboratory did not provide any information on the project participants. Because of potential data reliability concerns raised by CRDF on older IPP projects for which it processed payments, we consulted with CRDF representatives and received recommendations on specific projects that we should exclude from our analysis. Among the 207 IPP projects we reviewed, no payments had yet been made on 42 projects, and 14 projects were inactive. Of the remaining 151 IPP projects in our sample, we determined that 54 projects either were too old for evaluation (DOE did not collect rosters of individual project participants before 2000) or lacked sufficient and reliable information on the project participants. Thus, our review of the backgrounds of the participants was conducted on 97 of the 207 projects in our sample. To assess the commercial results of IPP projects, we reviewed 48 of the 50 projects that contributed to the commercial successes presented in DOE's fiscal year 2005 annual report for the IPP program, which was the most recent report available at the time of our review. DOE provided us with the list of IPP projects associated with those commercial successes, and we reviewed and evaluated the raw data collected by the U.S. Industry Coalition for each of those projects in its 2005 commercial success survey, which DOE used as the basis for the commercial successes cited in its fiscal year 2005 IPP annual report. In addition, for the 48 commercially successful projects we reviewed, we interviewed representatives from the sponsoring national laboratory, Russian or Ukrainian institute, or industry partner, or some combination of these three entities, to understand the commercial activities and other details associated with these projects. Specifically, we (1) met or conducted telephone interviews with 12 companies involved in the commercially successful projects, (2) interviewed representatives at the national laboratories for 46 of the 50 projects reported to be commercially successful, and (3) visited 6 of the institutes in Russia and Ukraine where IPP projects were reported to have been commercialized. To assess the IPP program's future, we interviewed DOE and national laboratory officials. We also assessed State's planned exit strategy for its Science Centers program. We discussed State's strategy with DOE, State, and ISTC officials. Regarding the IPP program's expansion, we met with officials and reviewed documentation from DOE, State, and the Lawrence Livermore, National Renewable Energy, and Sandia National Laboratories concerning the engagement of former weapons scientists in Iraq and Libya.
Regarding the program’s support to the Global Nuclear Energy Partnership, we reviewed documents and interviewed officials from the IPP program office, DOE national laboratories, and DOE’s Office of Nuclear Energy. To assess the extent to which the IPP program has had annual carryover balances of unspent funds and the reasons for such carryover, we obtained financial data from DOE’s IPP program office, DOE’s National Nuclear Security Administration’s (NNSA) budget and finance office, and the national laboratories participating in the program. We discussed and reviewed these data with budget and program analysts at the IPP program office and NNSA’s budget and finance office. In addition, we interviewed knowledgeable officials on the reliability of these data, including issues such as data entry, access, quality control procedures, and the accuracy and completeness of the data. We determined that these data were sufficiently reliable for the purposes of this review. We conducted our review from October 2006 through December 2007 in accordance with generally accepted government auditing standards. During our review of the DOE’s IPP program, we interviewed officials from 15 institutes in Russia and 7 in Ukraine in July 2007. In July 2007, we met with Russian scientists and officials from institutes in Moscow, Nizhny Novgorod, Pushchino, and Troitsk, Russia, to discuss draft, active, inactive, and completed IPP projects. The Center for Ecological Research and BioResources Development was established in 2000 through a $1.5 million grant from the IPP program. It focuses on the discovery of novel bioactive compounds, biodiversity collection and identification, and environmental bioremediation. The center comprises 9 research institutes and is connected with 30 laboratories, with about 300 scientists. The center’s role is to coordinate the activities of the member institutes, organize workshops and visits, consult on the administration of IPP projects, provide report editing and translation, perform financial reporting and examinations, and export biomaterials to the United States and elsewhere. The center has shipped over 50,000 biological samples. We discussed 5 IPP projects, including 2 completed, 2 active, and 1 draft project. When we discussed IPP projects with the center, representatives from 2 partner institutes—the Institute of Biochemistry and Physiology of Microorganisms and the Scientific Center for Applied Microbiology and Biotechnology—were also present. The Gamaleya Scientific Research Institute of Epidemiology and Microbiology was founded in 1891 for research into infectious diseases in humans and manufactures more than 40 different pharmaceutical products, including a tuberculosis vaccine. Gamaleya officials told us that the institute employs 800 staff, including 120 scientists and 680 technicians and administrative personnel. We visited the institute during our first audit of the IPP program in 1999. We spoke with Gamaleya officials about 3 completed IPP projects. The institute is involved in marketing a veterinary drug and is just starting to market an antiparasite drug for honeybees. The third project is expected to produce a marketable product in 2 to 3 years. The Institute for Nuclear Research of the Russian Academy of Sciences, with branches in Moscow and Troitsk, was founded in 1970 to further development of fundamental research activities in the field of atomic, elementary particle, and cosmic ray physics and neutrino astrophysics. 
The institute, with a staff of about 1,300 specialists, was formed from 3 nuclear laboratories of the P.N. Lebedev Institute of Physics of the former Soviet Union’s National Academy of Sciences. About 600 people work in the Troitsk branch of the institute. We spoke with institute officials at this branch about 5 IPP projects—4 completed and 1 active. During the first audit of DOE IPP programs, in 1999, we visited the Moscow branch of this institute. The Institute of Applied Physics of the Russian Academy of Sciences in Nizhny Novgorod became an independent research facility in 1977. At that time, its primary focus was transmitting and detecting waves through different media; in practical terms, this included work for the Soviet military on radar tracking of missiles and supporting Russian missile defense, materials science applications in radioelectronic equipment, and submarine detection using radar. Institute officials told us that since the beginning of the 1990s, the institute has reduced its staff from about 2,000 employees to roughly 1,100. However, it has retained a large number of top-level researchers despite the fact that defense orders plummeted to zero. Officials told us that the institute is in good shape today, has adapted to the changing environment, and has created several successful spin-off companies. We discussed 4 IPP projects with institute officials—1 completed, 1 active, and 2 draft. The Institute of Biochemistry and Physiology of Microorganisms is 1 of 4 research institutes that make up the Center for Ecological Research and BioResources Development. This institute is not a weapons institute and never had a role in the Soviet biological weapons program. However, institute officials noted that some scientists at the institute had come from other institutes that were involved in biological warfare research. The institute is home to the “All Russia Biological Culture Collection.” We discussed 3 IPP projects—1 completed, 1 active, and 1 draft—with officials from the institute. These were 3 of the 4 IPP projects we discussed at the Center for Ecological Research and BioResources Development. The Institute of General Physics of the Russian Academy of Sciences was founded in 1983 by Nobel Prize winner Academician A.M. Prokhorov, who headed it until 1998 and now serves as the institute’s honorary director. The institute began as Division “A” of the Lebedev Physical Institute. It currently consists of 13 research departments and 5 research centers: (1) natural sciences, (2) laser materials and technologies, (3) wave research, (4) fiber optics, and (5) physical instrumentation. The institute has a staff of 1,264, including 600 researchers. Its principal research areas encompass quantum electronics and optics, solid state physics, micro- and nanoelectronics, integral and fiber optics, plasma physics and photoelectronics, radio physics and acoustics, laser medicine, and ecology. We discussed 6 IPP projects with institute officials—4 completed and 2 active. Krasnaya Zvezda was established in 1972 to combine other organizations that employed designers, developers, and manufacturers of space-based nuclear power systems. Krasnaya Zvezda officials told us that they continue to do some defense-related work. However, the institute now mostly focuses on the civilian sector and work on civilian nuclear energy projects, including radioactive waste management at civilian nuclear power plants.
The financial situation has been relatively steady over the past several years, and officials anticipate that with the reorganization of the Federal Agency for Atomic Energy of the Russian Federation, Krasnaya Zvezda will be involved in many future civilian nuclear energy contracts. Krasnaya Zvezda maintains a close relationship with the Kurchatov Institute. We discussed 5 IPP projects—3 completed and 2 draft—with Krasnaya Zvezda officials. The Kurchatov Institute is one of Russia’s leading nuclear research institutes. Through the mid-1950s, defense activities represented more than 80 percent of the institute’s budget. By 1965, the defense portion had been reduced to about 50 percent, and, although Kurchatov has scientists who were involved with nuclear weapons programs in the past, today there are virtually no defense-related contracts. The institute conducts research on controlled thermonuclear fusion, plasma physics, solid state physics, and superconductivity. It designs nuclear reactors for the Russian Navy, the Russian icebreaker fleet, and space applications. Nuclear experts from the Kurchatov Institute have helped set up and operate Soviet-exported research reactors, including one at Libya’s Tajura nuclear research center. In addition, the Kurchatov Institute is the subcontractor for DOE’s Material Protection, Control, and Accounting program with the Russian Navy and icebreaker fleet. We discussed 10 IPP projects with Kurchatov officials—7 completed and 3 active. In 1999, we visited the Kurchatov Institute during our first audit of DOE’s IPP program. One of the oldest Russian institutions of higher education, Moscow State University was established in 1755. According to DOE and national laboratory officials, Moscow State University departments of physics, chemistry, and biology were involved in research related to WMD. Specifically, according to DOE, when the Soviet Ministry of Defense needed certain expertise or research done, it called upon individuals at academic institutions, such as Moscow State University. We discussed 1 project DOE subsequently canceled and 1 draft IPP project with Moscow State University officials. The Radiophysical Research Institute of the Ministry of Education and Science was founded in 1956 in Nizhny Novgorod. Since then, outreach efforts have been directed toward (1) supporting research in the fields of natural sciences and astronomy and (2) expanding interest in research work in such areas as astronomy, solar physics, the relationship between the Sun and the Earth, and the associated geophysics. We spoke with an official from the Radiophysical Research Institute, who was present during our interview with officials from the Scientific Research Institute of Measuring Systems. We discussed 1 project that ended in 2002 with this official. The Scientific Research Institute of Measuring Systems in Nizhny Novgorod, Russia, was established in 1966 to develop and produce electronics to support industry enterprises, including nuclear power plants as well as nuclear research and development. Today, the institute researches, designs, and manufactures computer and semiconductor equipment, mostly for use in the Russian energy industry. The institute also develops and manufactures software and control systems for gas lines and thermal and nuclear power stations. We discussed 3 IPP projects with officials—1 active and 2 completed. The State Unitary Enterprise I.I.
Afrikantov Experimental Machine Building Design Bureau was founded in 1947 as a component of the Gorky Machine Building Plant Design Bureau to create equipment for the nuclear industry. Later, as the mission expanded to the creation of various nuclear reactors, the design bureau was separated from the Gorky Machine Building Plant. Currently, the Afrikantov Experimental Machine Building Design Bureau employs about 3,400 staff and is one of the lead design organizations in the industry, supporting a large scientific and production center for nuclear power engineering. Since the 1960s, the institute has been the chief designer of ship-based reactor plants and fast neutron reactors. One of the institute’s significant achievements is the creation of innovative integral reactors with natural and forced coolant circulation. The institute actively participates in the creation of nuclear power installations abroad and has scientific and technical cooperative arrangements with the International Atomic Energy Agency and with national laboratories and companies in China, France, India, Japan, South Korea, and the United States. We discussed 2 draft IPP projects with officials from the institute. Soliton is a private company that was spun off from the Kurchatov Institute in the early 1990s. Soliton was formed by scientists from the Kurchatov Institute to convert defense technologies to civil purposes and to commercialize these technologies. Before working for Soliton, many Soliton employees were involved in weapons-related activities at the Kurchatov Institute, and most still retain some ties to Kurchatov. Soliton has official permission to use scientists from other institutes as part of the effort to commercialize former weapons laboratories. Soliton was organized so that small-scale nonweapons projects could be undertaken using the talents of several weapons scientists from a variety of institutes. We discussed 6 IPP projects with Soliton officials—5 completed and 1 active. In 1946, the Soviet government established the All-Russian Scientific Research Institute of Experimental Physics in Sarov, where the first Soviet nuclear bomb was designed and assembled. In Soviet times, the institute’s mission included the design of nuclear warheads and the development of experimental and prototype warheads. Today, the safety and reliability of the Russian nuclear stockpile are the institute’s primary missions. According to information provided by the institute, since 1990, it has increasingly developed international collaboration in unclassified science and technology areas. The institute employs about 24,000 people, approximately half of whom are scientists or engineers, and is the largest research institution in Russia that successfully handles defense, science, and national economic problems. Under the current nuclear testing moratorium, nuclear weapons research and development activities are concentrated at the computational and theoretical, design, and test divisions of the institute. During our earlier audit of DOE’s IPP program, we interviewed officials from this institute in 1998. We discussed 10 IPP projects—5 active and 5 completed—with institute officials. The Zelinsky Institute of Organic Chemistry of the Russian Academy of Sciences, founded in 1934, is one of the world’s largest scientific centers in the fields of organic chemistry, organic catalysis, and chemistry of biologically active compounds. It employs about 600 people, although it had over 1,300 at its peak in the 1980s.
In addition, about 150 students are engaged in graduate studies at the institute. Officials told us that until the early 1990s, the institute was involved in some defense-related activities, but it has not been involved in any WMD-related work since the early 1990s. The institute mostly worked on research related to high explosives and solid rocket fuel (not chemical weapons). We discussed 3 IPP projects—2 completed and 1 canceled—with institute officials. While in Ukraine, we met with representatives from 7 institutes based in Dnipropetrovsk, Kharkiv, and Kyiv and discussed 18 IPP projects with scientists and institute officials. The E.O. Paton Electric Welding Institute was founded in 1934 and has become one of the largest research institutes in the world, with about 8,000 employees (3,000 at the headquarters in Kyiv). The institute is a multidisciplinary scientific and technical complex involved in fundamental and applied research in the field of welding and related technologies; development of technologies, materials, equipment, and control systems; rational welded structures and parts; and methods and means for diagnostics and nondestructive testing. The institute undertakes research in all phases of electric welding and certain specialized related processes, such as brazing, explosive forming, electrometallurgy, and friction welding. The institute’s work covers welding of virtually all metals and alloys as well as ceramics in thicknesses varying from submicron to tens of centimeters. The institute also develops welding equipment, manufactures pilot plants, and develops welding consumables. We discussed 7 IPP projects—4 completed and 3 active—with E.O. Paton officials and Pratt and Whitney Kyiv employees at 3 Paton facilities in Kyiv. The International Center for Electron Beam Technology is a spin-off institute from the E.O. Paton Welding Institute and is located nearby in Kyiv. The center derives more than half of its funding from IPP funds and was created in the early 1990s by Paton employees specifically to take on projects with international organizations. According to institute officials, they do not receive any funding for their activities from the Ukrainian government. They also told us that, financially, their situation is much better than 14 years ago but that all of their research equipment is out of date. All of the IPP funds are used to pay scientists’ salaries, and they do not have other funds for new equipment. We discussed 2 IPP projects—1 completed and 1 active—during the interview. The Institute for Metal Physics is part of the Ukrainian Academy of Sciences and employs about 600 staff—about half researchers and half support staff. The number of staff is down from a peak of 1,600 in Soviet times but has been stable for the past 5 to 6 years, according to institute officials. These officials told us that during the Soviet era, about 80 percent of the institute’s work was related to missile delivery systems. The institute became completely divorced from weapons work in the mid-1980s. Today, virtually all work is commercial. During our visit, we discussed 1 active IPP project. The International Institute of Cell Biology is a nonprofit entity founded in 1992 by the Ukrainian Academy of Sciences. The International Institute of Cell Biology employs about 150 people, about one third of whom have doctorates.
It is closely affiliated with the Institute of Cell Biology and Genetic Engineering, founded in 1988, and the Institute of Microbiology and Virology, founded in 1928. The Institute of Cell Biology and Genetic Engineering is one of the key laboratories involved with plant genetic engineering in the former Soviet Union and offers substantial expertise in tissue culture initiation, preservation and maintenance, and gene transfer and expression. The Institute of Microbiology and Virology, with about 300 scientists, hosts the second largest collection of microorganisms in the countries of the former Soviet Union. The official we interviewed told us that the Institute of Microbiology and Virology was involved in defense efforts involving biological agents during Soviet times. Researchers from both of these institutes were involved in the International Institute of Cell Biology’s work with the IPP program. The deputy director told us that there has been a significant brain drain over the years. Over the last 15 years, 50 scientists left the institute and went to western-oriented countries, such as Germany and Australia. We discussed 1 completed IPP project. However, the deputy director told us that he is planning to apply for 2 more projects in the future. Registered as a private company in 2000, Intertek, Ltd., was founded by a former professor of Aircraft Engines and Technology at the National Aerospace University in Kharkiv, who held that post until 2004. We discussed an IPP project, at the draft stage, with Intertek’s director and a representative from a partner institute, the State Design Office Yuzhnoye. The director told us that Intertek currently employs about 15 people and would expand to 40 if the IPP project starts up. Most of the staff would be drawn from the National Aerospace University in Kharkiv. Kharkov Institute of Physics and Technology, one of the oldest and largest centers for physical science in Ukraine, was created in 1928 to research nuclear and solid-state physics. The institute, located in Kharkiv, Ukraine, currently has 2,500 employees, down from about 6,500 employees before 1991. Many young specialists left during the difficult financial period of the late 1990s for Brazil, Canada, France, Germany, Israel, the Netherlands, Sweden, the United Kingdom, and the United States. Institute officials are not aware of any specialists who have either left Ukraine for a country of concern or provided any information to such a country. Since 2004, the institute has been under the Ukrainian Academy of Sciences and is Ukraine’s lead organization on scientific programs for nuclear and radiation technologies. The institute’s economic condition has significantly improved over the past 10 years. It is receiving more direct funding from the Ukrainian federal budget and also receives grants from U.S. and European programs. Assistance partners include STCU and IPP. IPP funding makes up no more than 2 percent of the total budget. We discussed 6 IPP projects—1 draft, 2 active, and 3 completed—with institute officials. The State Design Office Yuzhnoye in Dnipropetrovsk was founded in 1954 for researching and engineering space and rocket technology. The institute has designed and manufactured many varieties of ballistic missile complexes and has designed and launched 70 types of spacecraft. Once Ukraine gained its independence in 1991, Yuzhnoye, the sole Soviet missile design facility located outside of the Russian Federation, discontinued its work on ballistic missiles.
However, since 1994, Yuzhnoye personnel, under a contract with the Russian Strategic Rocket Forces, have continued to provide a wide range of services aimed at extending the service life of those missile complexes still in use. In addition, the institute has diversified its production to include agricultural machinery, such as combines; a line of food processing accessories; and trolleys. We met with an official from Yuzhnoye during our interview with Intertek, Ltd., and discussed 1 draft IPP project on which the 2 institutes are collaborating. This appendix provides information on the classification systems DOE and the three entities that make IPP project payments to recipients in Russia and other countries use to classify the WMD expertise of the personnel participating in an IPP project. DOE, for example, classifies personnel into one of three categories: 1. Direct experience in WMD design, production, or testing. 2. Experience in research and development of WMD underlying technology. 3. No WMD-relevant experience. DOE also requires that a preponderance of staff working on its projects have had WMD-relevant experience before 1991 (i.e., fall into category 1 or 2 above). According to DOE, “the meaning of ‘preponderance’ is taken to be 60 percent, as a bare minimum. Two thirds would be better, and anything above that better still.” The national laboratories take no consistent approach to categorizing the proposed project personnel in the lists they submit with project proposals to DOE for review. In some cases, the proposed personnel are categorized using the DOE classifications. But in other cases, the individuals in the project proposal are classified using the weapons experience codes of the intended payment mechanism. Some IPP project proposals classify personnel using both the DOE categories and the payment system codes. Each of the three payment entities has a similar but slightly different list of weapons experience codes that personnel on an IPP project use to designate their relevant WMD background. See table 2 for the weapons codes used by CRDF, ISTC, and STCU, by general type of weapons expertise. Table 3 provides information on the 50 IPP projects DOE indicated as contributing to commercial successes in its Fiscal Year 2005 IPP Program Annual Report. The following are GAO’s comments on the Department of Energy’s letter dated November 21, 2007. 1. We are aware that DOE conducted internal assessments in 2004 and 2006 of its overall efforts to engage WMD scientists in the former Soviet Union and other countries. However, these assessments did not evaluate the IPP program exclusively and were conducted at a time when the IPP program was complemented by and coordinated with a similar DOE program focused on downsizing facilities and creating new jobs for personnel in Russia’s nuclear cities. This complementary program—the Nuclear Cities Initiative—has since been canceled. As a result, the IPP program operates under a significantly different set of circumstances today than when DOE conducted its previous internal assessments. Moreover, we note that some recommendations and action items from DOE’s previous internal assessments, such as the development of an exit strategy, have not been implemented. Finally, during our review and as discussed in this report, we found numerous shortcomings and problems with the IPP program.
We made a number of recommendations for improving the program, many of which DOE agreed with, including issues that should be addressed in the context of a program reassessment, such as the need to develop a program exit strategy. For these reasons, we are recommending that DOE undertake a fundamental reassessment of the IPP program, in concert with other agencies, to determine the continuing value of and need for the program. 2. DOE has incorrectly characterized how we collected information and conducted our analysis of the participants on IPP projects. Contrary to DOE’s assertion, we did not base our analysis of this issue on responses to questions we posed directly to officials at Russian and Ukrainian institutes. We used data and statements provided directly by DOE program officials to determine the total number of former Soviet weapons scientists, engineers, and technicians the program has engaged since its inception. Regarding the level and number of WMD experts involved in individual IPP projects, as explained in the scope and methodology section of our draft report, we used a number of methods for assessing these totals, including analyzing data provided by project managers at the national laboratories; reviewing payment records provided by CRDF, ISTC, and STCU; and assessing the reliability of data we received. 3. DOE has incorrectly asserted that we implied that DOE and State did not concur on the project in question, and that DOE ignored State’s concerns regarding the primary Ukrainian institute’s involvement in WMD. We used this case as an example of how DOE’s limited ability to assess the proposed participants on an IPP project can lead to misallocation of funding. In our view, a more thorough evaluation of the entities involved in the project by DOE during its proposal review might have uncovered the conflict-of-interest issues between the primary Ukrainian institute and the industry partner discovered by the Defense Contract Audit Agency after the project was under way and funds had been spent. 4. Our finding was based on an in-depth review of the personnel involved in 97 IPP projects, representing over 6,450 individuals, or over 38 percent of the total personnel DOE has reported to have engaged through the IPP program. We have no way of assessing the accuracy, reliability, or validity of DOE’s assertion that a majority of IPP project participants have WMD experience. However, we are skeptical that the department was able to conduct a thorough analysis of all IPP project payment records during the time it took to review and comment on our draft report. 5. During our visit to the Russian institute in question, institute officials told us that they were not the source for the reported job creation figure and could not substantiate the total number of jobs created as a result of the IPP projects we asked about. For this reason, we declined the institute official’s offer to obtain further documentation regarding the number of jobs created at other institutes involved in these projects. Although DOE claims to have received additional information from this institute to corroborate the number of jobs reported to have been created, DOE did not provide this information to us. As a result, we cannot determine the reliability or accuracy of DOE’s claim that the number of jobs it had reported as created is correct. 6. We have accurately described what we observed during our visit to the Ukrainian institute in question. 
Based on our observations, this institute clearly was not in dire financial straits or in poor physical condition like some of the institutes in the former Soviet Union we have visited in the past. The donation of funding to improve the physical condition of the institute has no material bearing on the facts that we presented in our draft report. 7. DOE has mischaracterized our findings and our process for evaluating the continued need for the program. As we pointed out in our draft report, officials at 10 of the 22 Russian and Ukrainian institutes we visited stated that they did not believe they or the other scientists at their institutes posed a proliferation risk, while officials at 14 of the 22 institutes also attested to the financial stability of their facilities. Moreover, in July 2007, a senior Russian Atomic Energy Agency official told us, in the presence of IPP program officials, that the program is no longer relevant. DOE asserted that we did not include endorsements of the program in our draft report. However, we do state that many officials at the Russian and Ukrainian institutes we visited noted that the program was especially helpful during the period of financial distress in the late 1990s. 8. DOE misstates the number of institutes that we included in our fieldwork in Russia and Ukraine. The correct number is 22. Regarding DOE’s comment, our draft report clearly stated that DOE policy does not require IPP project participants reemployed in peaceful activities to cut ties to their home institute. However, officials at more than one institute we visited stated that their institutes are still involved in some weapons-related work, and many institutes remain involved in research and technology development that could be applied to WMD or delivery systems for WMD. We do not believe it is possible for DOE to verify the full extent and intended purpose of all activities at the institutes where the IPP program is engaged. Moreover, we believe that DOE misrepresents the IPP program’s accomplishments by counting individuals who have been reemployed in private sector jobs but also are employed by their institutes and, therefore, may still be involved in weapons-related activities. In our view, the reemployment of former weapons scientists into new long-term, private sector jobs—one of the primary metrics DOE uses to measure progress of the IPP program—implies that these individuals have terminated their previous employment at the institutes and are dedicated solely to peaceful commercial activities outside of their institutes. 9. While there is no IPP program requirement to exclude former weapons scientists employed on a part-time basis from the total number of jobs created as a result of IPP projects, DOE’s reported job creation total fails to distinguish between part-time and full-time jobs. Because DOE does not clearly distinguish the number of jobs created in each category, this metric is misleading and misrepresents the program’s accomplishments regarding the employment of weapons scientists in commercial activities. However, we have added information to our report that states that there is no IPP program requirement to exclude former weapons scientists employed on a part-time basis from the total number of jobs created as a result of IPP projects. 10. Our draft report stated that the IPP program does not prohibit participation of younger scientists in IPP projects.
In our view, however, DOE has a mistaken and naïve impression of how institutes in the former Soviet Union view the benefits of allowing younger scientists to participate in the IPP program. DOE believes that participation of some younger generation scientists on IPP projects must be permitted to successfully implement projects. This practice has the unintended consequence of allowing former Soviet Union institutes to use the IPP program as a long-term recruitment tool for younger scientists and, thereby, may perpetuate the proliferation risk posed by scientists at these institutes. As we stated in our draft report, officials at 10 of the 22 institutes we visited in Russia and Ukraine said that the IPP program has allowed their institutes to recruit, hire, and retain younger scientists. In our view, this is contrary to the original intent of the program, which was to reduce the proliferation risk posed by Soviet-era weapons scientists. That is why, among other reasons, we are recommending that DOE conduct a reassessment of the IPP program that includes a thorough analysis of the proliferation risk posed by weapons scientists in Russia and other countries, a well-defined strategy to more effectively target the scientists and institutes of highest proliferation concern, more accurate reporting of program accomplishments, and a clear exit strategy for the program. 11. DOE incorrectly characterized our description of its program management system. Specifically, we stated in the draft report “DOE and national laboratory officials told us they are attempting to improve financial oversight over the IPP program, in part, to address concerns about unspent program funds. To that end, DOE is developing a new program management system, which it expects to fully implement in 2008—14 years after the start of the program.” Throughout our review, numerous DOE and national laboratory officials expressed concern about the existing systems that DOE used to manage IPP projects. Our description of DOE’s planned implementation of its new program management system is accurate. 12. DOE officials concurred with our recommendation to reduce large balances of unspent funds and adjust future budget requests accordingly. The data we present are based on DOE’s own financial reporting and accurately reflect the state of the program’s uncosted balances (unspent funds) over the last 10 years. We noted in our draft report that the program’s uncosted balances are declining, but, as DOE officials acknowledge, uncosted balances remain a serious problem for the IPP program. 13. We are pleased that DOE concurs with our recommendation to improve coordination between the department’s IPP program and ISTC’s Commercialization Support Program, which is funded by State. In its comments, State also concurred with this recommendation. 14. We believe DOE has misconstrued our recommendation concerning its funding of projects in Libya. We did not recommend, nor did we mean to imply, that DOE should allocate 65 percent of project funds to Libya for projects in that country. Instead, our recommendation urges the department to ensure that it complies with existing statutory restrictions on the percentage of IPP funds that can be spent on oversight activities by DOE national laboratories.
Specifically, as DOE notes, section 3136 of the National Defense Authorization Act for Fiscal Year 2000 provides that not more than 35 percent of funds available in any fiscal year for the IPP program may be spent by DOE national laboratories to provide oversight of program activities. As our report indicates, DOE’s IPP guidance and its standard practice have been to implement this provision of law on a project-by-project basis, so that no more than 35 percent of the funds for each project are spent by national laboratories. Our point in our report and in our recommendation is that, with respect to projects in Libya, DOE has not followed its IPP guidance restricting national laboratory expenditures. Instead, we found that 97 percent of funds DOE spent on projects in Libya through May 2007 were spent at DOE’s national laboratories for project management and oversight. In this regard, we note that DOE concurred with our recommendation that the department seek explicit congressional authorization to expand IPP efforts outside of the former Soviet Union. In seeking such authorization, DOE may wish to clarify the nature of other restrictions on the program, such as those set forth in section 3136 of the National Defense Authorization Act for Fiscal Year 2000. 15. DOE has mistakenly asserted that our selection of projects for review served as the sole basis for our conclusions and recommendations. As we explained in the draft report’s scope and methodology section, the selection and evaluation of a sample of IPP projects was one of several analytical tools we employed during our review. We not only conducted an in-depth assessment of over 200 IPP projects, but also met multiple times with DOE officials; analyzed program plans, policies, and procedures; interviewed representatives at each of the 12 national laboratories involved in the program; interviewed staff of the U.S. Industry Coalition and 14 U.S. industry partner companies with long-standing participation in the program; and had discussions with numerous recipients of IPP program assistance at 22 institutes in Russia and Ukraine. We also met several times with State officials who are responsible for funding a similar program; interviewed and assessed information provided by officials at CRDF, ISTC, and STCU; and met with nongovernmental experts familiar with the program. As further noted in our draft report, to develop our judgmental sample of 207 projects we used project selection criteria supplied by DOE and considered a variety of factors—such as project status, project funding, type and location of institutes where projects have been implemented, and a project’s commercial success—to ensure we addressed a broad cross-section of IPP projects. This comprehensive approach, consistent with generally accepted government auditing standards, served as the foundation for our assessment which was fair, balanced, and objective. Our extensive review identified legitimate questions concerning the IPP program’s scope, implementation, and performance that we believe should be addressed during the course of the fundamental reassessment of the program recommended in our draft report. In addition to the contact named above, Glen Levis (Assistant Director), R. Stockton Butler, David Fox, Preston Heard, and William Hoehn made key contributions to this report. Other technical assistance was provided by David Maurer; Carol Herrnstadt Shulman; Jay Smale, Jr.; and Paul Thompson. 
Nuclear Nonproliferation: Better Management Controls Needed for Some DOE Projects in Russia and Other Countries. GAO-05-828. Washington, D.C.: August 29, 2005. Weapons of Mass Destruction: Nonproliferation Programs Need Better Integration. GAO-05-157. Washington, D.C.: January 28, 2005. Nuclear Nonproliferation: DOE’s Effort to Close Russia’s Plutonium Production Reactors Faces Challenges, and Final Shutdown Is Uncertain. GAO-04-662. Washington, D.C.: June 4, 2004. Nuclear Nonproliferation: DOE’s Efforts to Secure Nuclear Material and Employ Weapons Scientists in Russia. GAO-01-726T. Washington, D.C.: May 15, 2001. Weapons of Mass Destruction: State Department Oversight of Science Centers Program. GAO-01-582. Washington, D.C.: May 10, 2001. Nuclear Nonproliferation: DOE’s Efforts to Assist Weapons Scientists in Russia’s Nuclear Cities Face Challenges. GAO-01-429. Washington, D.C.: May 3, 2001. Biological Weapons: Effort to Reduce Former Soviet Threat Offers Benefits, Poses New Risks. GAO/NSIAD-00-138. Washington, D.C.: April 28, 2000. Nuclear Nonproliferation: Concerns with DOE’s Efforts to Reduce the Risks Posed by Russia’s Unemployed Weapons Scientists. GAO/RCED-99-54. Washington, D.C.: February 19, 1999.
To address concerns about unemployed or underemployed Soviet-era weapons scientists in Russia and other countries, the Department of Energy (DOE) established the Initiatives for Proliferation Prevention (IPP) program in 1994 to engage former Soviet weapons scientists in nonmilitary work in the short term and create private sector jobs for these scientists in the long term. GAO assessed (1) DOE's reported accomplishments for the IPP program, (2) DOE's exit strategy for the program, and (3) the extent to which the program has experienced annual carryovers of unspent funds and the reasons for any such carryovers. To address these issues, GAO analyzed DOE policies, plans, and budgets and interviewed key program officials and representatives from 22 Russian and Ukrainian institutes. DOE has overstated accomplishments for the 2 critical measures it uses to assess the IPP program's progress and performance--the number of scientists receiving DOE support and the number of long-term, private sector jobs created. First, although DOE claims to have engaged over 16,770 scientists in Russia and other countries, this total includes both scientists with and without weapons-related experience. GAO's analysis of 97 IPP projects involving about 6,450 scientists showed that more than half did not claim to possess any weapons-related experience. Furthermore, officials from 10 Russian and Ukrainian institutes told GAO that the IPP program helps them attract, recruit, and retain younger scientists who might otherwise emigrate to the United States or other western countries and contributes to the continued operation of their facilities. This is contrary to the original intent of the program, which was to reduce the proliferation risk posed by Soviet-era weapons scientists. Second, although DOE asserts that the IPP program helped create 2,790 long-term, private sector jobs for former weapons scientists, the credibility of this number is uncertain because DOE relies on "good-faith" reporting from U.S. industry partners and foreign institutes on the number of jobs created and does not independently verify the number of jobs reported to have been created. DOE has not developed an exit strategy for the IPP program, even though officials from the Russian government, Russian and Ukrainian institutes, and U.S. companies raised questions about the continuing need for the program. Importantly, a senior Russian Atomic Energy Agency official told GAO that the IPP program is no longer relevant because Russia's economy is strong and its scientists no longer pose a proliferation risk. DOE has not developed criteria to determine when scientists, institutes, or countries should "graduate" from the program. In contrast, the Department of State (State), which supports a similar program to assist Soviet-era weapons scientists, has assessed participating institutes and developed a strategy to graduate certain institutes from its program. Instead of finding ways to phase out the IPP program, DOE has recently expanded the program to include new countries and areas. Specifically, in 2004, DOE began providing assistance to scientists in Iraq and Libya. In addition, the IPP program is working with DOE's Office of Nuclear Energy to develop projects that support the Global Nuclear Energy Partnership--a DOE-led international effort to expand the use of civilian nuclear power. In every fiscal year since 1998, DOE carried over unspent funds in excess of the amount that the Congress provided for the program. 
For example, as of September 2007, DOE carried over about $30 million in unspent funds--$2 million more than the $28 million that the Congress had appropriated for the IPP program in fiscal year 2007. Two main factors have contributed to this recurring problem--lengthy review and approval processes for paying former Soviet weapons scientists and delays in implementing some IPP projects.
Plans for the new convention center were initiated in 1993 by the District’s Hotel and Restaurant Associations, the Convention and Visitors Association, and the District of Columbia government. The Washington Convention Center Authority Act of 1994 (1994 Act) authorizes WCCA to construct, maintain, and operate the new convention center, as well as maintain and operate the existing convention center. The current design calls for a total of 2.1 million gross square feet, which includes approximately 730,000 square feet of prime exhibit space, compared to the existing convention center, which has a total of 800,000 gross square feet, including 381,000 gross square feet of exhibit space. According to the 1997 Market Demand Update for the Washington, D.C., Convention Center, the proposed new convention center is projected to rank eighth, based on the gross square feet of prime exhibit space, in the United States when completed, and the size of the proposed new convention center should remain highly marketable into the 21st century. In August 1994, when the District created WCCA, it also earmarked additional revenues to finance the project. Section 301 of the 1994 Act amended the District of Columbia Income and Franchise Tax Act of 1947 (1) to decrease the tax on the privilege of corporations, financial institutions, and unincorporated businesses to do business in the District from 10 percent of taxable income to 9.5 percent of taxable income and (2) to impose a surtax of 2.5 percent on the 9.5 percent tax. Amounts collected from the surtax are to be transferred to the WCCA. Sections 302 and 303 of the 1994 Act amended the District of Columbia Sales Tax Act and the District of Columbia Compensating Use Tax Act, respectively, (1) to decrease the tax rate from 11 percent to 10.5 percent on hotel gross receipts and to add an additional tax of 2.5 percent on hotel gross receipts and (2) to add an additional tax of 1 percent to the 9 percent tax on the gross receipts from the sales of food or drink (including alcohol) to be consumed on the premises and from the rental of vehicles and trailers. Amounts collected from the additional taxes are to be transferred to the WCCA. Section 304 of the 1994 Act amended the Hotel Occupancy and Surtax on Corporations and Unincorporated Business Tax Act of 1977 to provide 40 percent of the $1.50 tax already being collected on the occupancy of each hotel room to the WCCA and to allocate the remaining 60 percent as follows: 50 percent to the Washington Convention and Visitors Association, 37.5 percent to the Mayor’s Committee to Promote Washington, and 12.5 percent to WCCA for advertising and promotion. The fourth sentence of section 446 of the Home Rule Act, D.C. Code Ann. 47-304 (1981), as amended, provides that “...no amount may be obligated or expended by any officer or employee of the District of Columbia government unless such amount has been approved by an act of Congress, and then only according to such act.” Section 101 of the District of Columbia Convention Center and Sports Arena Authorization Act of 1995 (Public Law No. 104-28, 109 Stat. 267 (1995), D.C. Code sec. 47-396.1 (1981, 1996 supp.)) authorized WCCA’s use of the revenues attributable to sections 301-304 of the 1994 Act to finance the operation and maintenance of the existing convention center and the preconstruction activities relating to the new convention center.
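To illustrate how two of the dedicated revenue streams described above are computed, the following sketch applies the section 301 and section 304 rates. The function names and the sample inputs are illustrative only, not figures from the District:

```python
# Illustrative computation of two dedicated revenue streams under the
# 1994 Act, using the rates described above. Function names and sample
# inputs are hypothetical.

def franchise_surtax(taxable_income: float) -> float:
    """Section 301: a 2.5 percent surtax on the 9.5 percent franchise tax,
    transferred to WCCA."""
    base_tax = 0.095 * taxable_income
    return 0.025 * base_tax

def allocate_occupancy_tax(rooms_occupied: int) -> dict:
    """Section 304: split the $1.50-per-room hotel occupancy tax."""
    receipts = 1.50 * rooms_occupied
    remainder = 0.60 * receipts  # the 60 percent not sent directly to WCCA
    return {
        "WCCA": 0.40 * receipts,
        "Washington Convention and Visitors Association": 0.50 * remainder,
        "Mayor's Committee to Promote Washington": 0.375 * remainder,
        "WCCA advertising and promotion": 0.125 * remainder,
    }

shares = allocate_occupancy_tax(rooms_occupied=1_000_000)
# The four shares exhaust the receipts: 40% + 60% x (50% + 37.5% + 12.5%).
assert abs(sum(shares.values()) - 1.50 * 1_000_000) < 1e-6
```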
According to WCCA officials, the proposed new convention center is intended to allow the District to compete for larger conventions and trade shows. A 1993 feasibility study by Deloitte & Touche, commissioned by the local hospitality industry, stated that even though the District is viewed as a desirable location, the existing convention center, which has about 381,000 gross square feet of exhibit space, is small compared to the convention centers of other cities such as Atlanta, New York, Chicago, and Philadelphia. The original proposal from the 1993 feasibility study called for building a new convention center in two phases, with the first phase to be completed at the end of 1997 with approximately 554,000 gross square feet of exhibit space and the second phase to be completed at the end of 1999 with another 254,000 gross square feet of exhibit space. Since that study was completed, WCCA has ruled out a two-phase development project because the first phase essentially would not provide enough exhibit space to compete with cities with larger convention centers. The cost of both phases was estimated at $521 million. In addition, the 1993 feasibility study, which was projected through the year 2003, estimated that direct and indirect economic benefits to the District from the construction of the project would include 560 new jobs, $4 million in increased revenue, and $260 million in other economic output such as spending related to convention center operations and development. For the Washington metropolitan area, the study projected 1,600 jobs, $28 million in incremental taxes, and $558 million in economic activity. Projected long-term benefits by the fifth year of operation of the new convention center included 2,500 permanent jobs, $44 million in incremental taxes, and $640 million in incremental economic output for the District. WCCA contracted with a management consulting firm to update the 1993 feasibility study, which will include an update of the economic benefits to the District. The study is expected to be completed by September 30, 1997. The current master plan calls for constructing a new convention center at Mount Vernon Square, the legislatively preferred site, located at Ninth Street and Mount Vernon Place, North West. In the 1993 feasibility study, eight potential sites were identified and evaluated against certain criteria such as physical and location characteristics, historic preservation, parking, and cost, including land acquisition and construction. As a result of this analysis, the Mount Vernon Square site was determined to be the preferred site because of its proximity to the District’s downtown businesses and because the District owns the majority of the land, thus minimizing the cost of land acquisition.
We also compared the new convention center exhibit space to facilities in other cities, based on the 1997 Market Demand Update, to determine the ranking in terms of prime exhibit space once the project is completed. We analyzed the current predevelopment and construction cost estimates for the project as of May 31, 1997, to determine how they compare to the previous cost estimates prepared by WCCA. We reviewed documents and held discussions with WCCA officials to obtain reasons for variations from previous cost estimates for the proposed new convention center. We reviewed financial records and current balances to determine the amount of dedicated taxes reported as collected and transferred to WCCA. In addition, we reviewed the lockbox procedures that were established to collect the dedicated taxes. We also reviewed WCCA’s plans for financing the new convention center project to determine whether adequate funding is available to finance the project. In addition, we reviewed alternative sources of financing proposed by WCCA. We conducted our review between March 1997 and July 1997 in accordance with generally accepted government auditing standards. Also, we considered the results of previous work. While we reviewed transactions to determine the reasonableness of the dedicated taxes collected, deposited, and transferred to WCCA, we did not audit the reported taxes collected and deposited for the new convention center project to determine if the District government accurately calculated and transferred all dedicated taxes to WCCA’s escrow account. We requested written comments on a draft of this report from the Mayor of the District of Columbia or his designee. These comments are discussed in the “District’s Comments and Our Evaluation” section and are reprinted in appendix I. The project has reached the regulatory review phase, which involves WCCA obtaining necessary permits, reviews, and approvals from federal and local regulatory agencies. NCPC is reviewing the design of the project as provided by law. During its approval process, NCPC is also considering the environmental impact of the conceptual design on the construction site and neighborhood. The environmental impact study (EIS) is complete and was reviewed by the federal Environmental Protection Agency (EPA). According to WCCA’s Manager of Contracts, NCPC did not receive comments from EPA within the comment period. Also, NCPC, WCCA, the Advisory Council on Historic Preservation, the District of Columbia State Historic Preservation Officer, the Mayor, and the Chair of the D.C. City Council entered into a Memorandum of Agreement (MOA) on September 12, 1997, following consultations under section 106 of the National Historic Preservation Act. The MOA contains a plan to mitigate a number of community, business, civic, and historic preservation concerns regarding the project. As a result of the signing of the MOA, NCPC has resumed consideration of the project and has scheduled hearings on the proposed convention center for September 22 and 25, 1997. Another critical phase in the development process of the project is obtaining authority to use the dedicated revenues attributable to sections 301-304 of the 1994 Act to finance the construction of the project. Next, WCCA would have to adopt a resolution to issue the revenue bonds (subject to City Council review). As part of the financing phase, WCCA would be required to obtain a credit rating for revenue bond financing from rating agencies.
Based on WCCA’s current schedule, the financing phase is scheduled for completion in late fall of 1997, and groundbreaking is planned for late 1997 or early 1998. When we last reported in December 1996, WCCA estimated that the project would be completed by December 31, 1999. The estimated completion date is now December 31, 2000. However, if more delays occur in the regulatory process and the finalization of a financing plan, this project could be further delayed. WCCA has to acquire 15 remaining parcels of land for the Mount Vernon Square site. According to WCCA officials, the District does not anticipate any problems in acquiring the remaining properties, either by negotiated sale or, if necessary, by exercise of its power of eminent domain. According to requirements in the master plan, which WCCA officials told us are still the current thinking, the proposed new convention center will almost double the exhibit space for conventions and expositions available at the existing convention center. The current convention center has 800,000 gross square feet, consisting of 381,000 gross square feet of exhibit space on two levels. The upper level has three exhibit halls with 276,000 square feet of exhibit space; the lower level contains 105,000 square feet of exhibit space. The master plan for the proposed new convention center calls for a partially below-ground facility with approximately 2.1 million gross square feet that includes 730,000 gross square feet of prime exhibit space. The master plan organizes the new facility into three buildings with the approximate height ranging from 35 feet on the northern end to 130 feet on the southern end of the complex, consistent with the Building Height Act restriction. The proposed new convention center would have four levels, with a completely below-ground exhibit level containing 500,000 square feet of contiguous exhibit space and adjacent loading docks. The street level would consist primarily of lobby/registration space, meeting rooms, and service/support space, with some retail and community space on the perimeter. The upper level has additional meeting rooms and 230,000 square feet of column-free exhibit space. The ballroom level (the fourth level) also includes the central kitchen. The current convention center is ranked 30th, in terms of prime exhibit space, among competing convention centers in the United States, and the proposed new convention center is expected to be ranked eighth after completion, as shown in table 1. According to WCCA officials, the current master plan contemplates a state-of-the-art facility with technology, including fiber optics and improved telecommunications capabilities, to meet future District demands for convention exhibit space. To compete with other convention centers, WCCA’s proposed plan would increase the number of loading docks, column spacing, ceiling heights, and floor loads in the exhibit space to attract some of the major trade shows. WCCA must obtain NCPC’s and the City Council’s approvals before the project can move forward. In addition, WCCA’s Board of Directors has determined that the “design/build method” for the proposed new convention center would best meet WCCA’s cost and scheduling requirements for the new convention center. The design/build approach combines the responsibilities for designing and constructing the project in a single entity rather than separating the responsibilities among a number of entities.
To mitigate development risks, WCCA’s design/build contract would need to include performance clauses with specificity to prevent cost overruns and construction delays. The estimated total predevelopment and construction costs, including contingencies, of the proposed new convention center have increased by a net of $100 million to approximately $650 million from the $550 million that was last reported. The increase is caused by several factors, primarily a $118 million increase in construction costs, now estimated to be about $534 million, excluding contingencies, compared to the previous estimate of $416 million, partially offset by declines in predevelopment costs. Table 2 highlights the total estimated project costs for the new convention center. The increase in the construction costs is primarily associated with (1) additional steel necessary to construct the facility, (2) excavation and slurry wall costs from lowering the building 50 feet below ground to reduce the building height above ground, (3) shifting the building mass south, (4) providing retail and community space on the perimeter, (5) reducing construction across L Street, which requires finished elevations on both sides of the street, and (6) allowing M Street to remain open for local traffic, which requires creating an overpass over the street and reinforcing the street over the below-ground exhibit space. The majority of these revisions are in response to NCPC’s concerns, the section 106 mitigation plan (required by the National Historic Preservation Act), and community concerns. Estimated contingencies are up approximately $6 million over the previous estimate of $70 million. The increase is due primarily to section 106 mitigation and other regulatory issues, such as the implementation of the transportation management plan. Predevelopment costs, down $24 million from the previous estimate of $64 million, fell largely because certain costs previously budgeted for bond insurance and investment banking services will now be paid from future bond proceeds. As of July 31, 1997, WCCA had incurred predevelopment costs of approximately $15.6 million, which are primarily for program management services, architect/engineering design, the environmental impact study, land acquisition, and legal services. WCCA receives a portion of the District’s hotel sales and use taxes, hotel occupancy tax, corporation franchise, and unincorporated business taxes to help fund operations of the existing convention center and the predevelopment costs of the proposed new convention center. Since October 1994, taxes have been collected monthly, and based on audited financial statements, WCCA had received approximately $33 million and $35.5 million in tax revenues for fiscal years 1995 and 1996, respectively. The District projects tax revenues to WCCA of about $35 million for fiscal year 1997 and average annual amounts of about $36 million for fiscal years 1998 to 2002. As of July 31, 1997, WCCA had received about $97 million in dedicated tax revenues. WCCA had invested approximately $67.6 million of the $97 million in Fannie Mae and Freddie Mac discount notes, which are earning an average of about 5.5 percent annually. Table 3 highlights the receipts and disbursements from the dedicated tax revenues collected since inception in fiscal year 1995. In our December 1996 report, we stated that WCCA is considering the use of revenue bonds, backed by the dedicated taxes, to finance the construction cost of the project.
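The revised estimate above reconciles arithmetically with the previously reported figures. The following sketch (amounts in millions of dollars, taken from the estimates above) confirms the net $100 million increase:

```python
# Reconciliation of the revised project cost estimate (in millions).
prev_construction, prev_contingency, prev_predevelopment = 416, 70, 64
assert prev_construction + prev_contingency + prev_predevelopment == 550

construction = prev_construction + 118    # steel, excavation, slurry walls, street work
contingency = prev_contingency + 6        # section 106 mitigation, regulatory issues
predevelopment = prev_predevelopment - 24 # costs shifted to future bond proceeds

total = construction + contingency + predevelopment
assert (construction, total) == (534, 650)
assert total - 550 == 118 + 6 - 24 == 100  # net $100 million increase
```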
Also, during our discussions with rating agency officials, they informed us that the rating might be improved if the collection process for the dedicated taxes were separated from the District’s tax collection process. Since that time, the District has separated the dedicated taxes from the District’s tax collection process by having businesses send the dedicated tax payments directly to lockboxes under the control of the banks. As of February 1997, lockboxes have been established at Signet Bank and First Union Bank for the collection of all the dedicated convention center taxes, and the banks will now be responsible for transferring these tax revenues to WCCA. Further, based on a report by the District of Columbia Auditor, which discusses the convention center dedicated tax revenues, WCCA was entitled to approximately $1 million in additional dedicated tax revenues from the District. The District subsequently transferred the $1 million to WCCA. The dedicated tax revenues became effective as of fiscal year 1995, and, according to the District of Columbia Auditor, the shortfall in tax revenues transferred to WCCA occurred primarily in the early months of fiscal year 1995. Based on the audit, the underpayment occurred primarily due to the Department of Finance and Revenue’s (DFR) failure to calculate the additional dedicated taxes owed the WCCA based upon reconciliations of sales and use tax payments that were completed 60 to 90 days after DFR’s initial reporting period. Based upon these reconciliations, additional dedicated taxes should have been transferred to the WCCA. Thus, the establishment of lockboxes for the collection of the dedicated tax revenues could result in more timely and accurate receipt of revenues as well as improve the likelihood of an investment-grade rating should WCCA decide to issue revenue bonds to finance the construction cost of the project. WCCA plans to issue revenue bonds backed by dedicated taxes to finance the construction cost of the project. Section 490 of the Home Rule Act (Public Law No. 93-198, as amended) was recently amended to authorize the District to issue revenue bonds backed by a pledge of dedicated taxes to finance various capital projects or other undertakings, including convention facilities. In addition, the fourth sentence of section 446 of the Home Rule Act was recently amended to, among other things, authorize the District to disburse dedicated tax revenues to pay the principal of, interest on, or premium for any authorized revenue bond without further action by the Congress. However, the 1997 Act did not expand the authority to use the tax revenues attributable to sections 301-304 of the 1994 Act to construct the new convention center. In addition, section 204 of the 1994 Act provides that WCCA may not adopt a resolution to authorize a bond issuance without submitting the resolution to the City Council for a 30-day review period, during which the Council may adopt a resolution disapproving the bond issuance. Assuming that WCCA receives authority to use the revenues attributable to sections 301-304 of the 1994 Act for constructing the project and that the City Council does not disapprove the bond issuance, current projections of future dedicated tax revenues are not sufficient to support debt service costs for the full amount of the estimated construction cost. In March 1997, WCCA engaged a financial advisory services firm to provide various financial services related to the convention center finances.
In its proposal dated May 15, 1997, as well as in subsequent proposals, the financial advisory services firm outlined several financing options for WCCA, such as the issuance of revenue bonds, federal grants, lease arrangements, sale of the existing convention center, vendor participation/naming rights, and reallocation of a portion of the hotel occupancy tax. Table 4 depicts a financing option that WCCA is considering, assuming that all necessary approvals are granted regarding the use of currently dedicated revenues. As stated previously, the total cost of the project is estimated at $650 million—$610 million is estimated for construction costs, including contingencies, and $40 million is estimated for predevelopment costs. WCCA already has sufficient funds from dedicated taxes for predevelopment activities. However, the funding for the construction cost is uncertain at this time. Based on WCCA's estimate, table 4 shows a financing gap of $106 million that must be addressed before WCCA can enter the bond market. We have projected the estimated shortfall to be about $114 million because WCCA will need an additional $8 million to satisfy a $30 million operation and maintenance reserve, which is required by the rating agencies before WCCA can enter the bond market. The following information describes the above financing option in more detail. Senior and Junior Lien Bonds. The foundation of WCCA's financing plan is to generate the maximum amount of revenue bond funding for the construction cost of the new convention center project, which, according to the financial advisors, could be accomplished by using a senior lien and junior lien bond structure. As previously stated, WCCA collects approximately $35 million annually from the dedicated tax revenues. Of this amount, approximately $7.5 million is needed annually for operating expenses of the existing convention center, which leaves approximately $27.5 million as collateral for the issuance of revenue bonds. According to WCCA's financial advisors, the $27.5 million in annual revenue would support approximately $423 million in bond proceeds, assuming an average interest rate of about 6.3 percent. Senior lien bondholders would be provided higher coverage, and this financing would be based on historical collections of the dedicated tax revenues. Junior lien bondholders would accept some of the credit risk of projected growth in the dedicated revenues, and this financing structure would be predicated on 1 percent annual growth in the dedicated tax revenues. A critical component of financing costs involves the level of risk associated with the bond. Higher-risk bonds generally have higher interest rates, may require insurance, or may require the issuer to set up large debt service reserves. Officials at bond rating agencies have indicated that a number of factors are important in their assessment of bonds that are backed by dedicated revenues. First, if the bond is backed by a tax, the collection history of the tax is important. Bonds backed by taxes that have a solid collection history are less risky than those backed by new or unproven taxes. Second, the tax backing for a bond is less risky if it is assessed on a broader range of goods, services, or population. Third, revenue streams that carry some legislative risk (that is, revenues based on an appropriation) make a bond riskier. Finally, the general economic strength of the area is critical to the bond assessment.
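The financing gap follows directly from the figures in this option. A minimal check (illustrative Python; all inputs are the report's figures in millions of dollars) reproduces the $106 million gap shown in table 4 and the $114 million total need discussed above:

```python
# Financing gap for the construction cost (figures in $ millions, from the
# report; illustrative arithmetic only).
construction_cost = 610     # includes contingencies
bond_proceeds = 423         # supported by ~$27.5M/yr at ~6.3 percent interest
fund_earnings = 51          # projected interest on the construction fund
cash_on_hand = 30           # dedicated tax revenues applied to the project

gap = construction_cost - (bond_proceeds + fund_earnings + cash_on_hand)
print(gap)        # 106
print(gap + 8)    # 114, adding the $8M needed to complete the $30M O&M reserve
```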
Since the majority of the funds for financing the convention center, 69 percent ($423 million of $610 million), is expected to come from bonds backed by the dedicated tax revenues, we attempted to determine the collection history of the taxes and the District's assumptions regarding future collections. As previously stated, the dedicated taxes are derived from portions of the District's hotel occupancy, corporation franchise, unincorporated business franchise, and sales and use taxes (restaurant meals, rental cars, and hotel rates). These taxes are not new. The majority of the taxes was generated from rate increases to existing taxes (the corporation franchise, unincorporated business franchise, and sales and use taxes), and the rest (the hotel occupancy tax) was diverted from taxes that previously went to the District's general fund. The majority of the dedicated revenues that WCCA receives, approximately 79 percent in fiscal year 1996, is derived from sales and use taxes (restaurant meals, rental cars, and hotel rates), which are parts of the District's general sales and use tax. Based on the District's Comprehensive Annual Financial Report (CAFR), in fiscal year 1996, the District collected $467.5 million in total general sales and use taxes, and WCCA received $27.9 million, or about 6 percent, of this total. The District is unable to disaggregate the specific taxes that are dedicated to WCCA from the general sales and use tax category, and as a result, the District could not provide us with audited historical data for these specific taxes. Based on information received from the District, we compared information from the District's Business Tax Information System to information reported in the District's CAFR, and the two sources reflect different amounts for the taxes for the same reporting periods. For example, in fiscal year 1995, the District's CAFR showed total general sales tax of $485.6 million and the Business Tax Information System showed $468.8 million, a difference of approximately $17 million. Therefore, it is difficult to discern how these specific taxes have performed over the past 5 years. In addition, the District could not provide us with projections for these specific taxes for the next 5 years. Construction Fund Earnings. WCCA's financial advisors project that WCCA could generate about $51 million in interest earnings from bond proceeds between 1997 and 2001. The bond proceeds would be deposited in a construction fund. During the construction period, funds that are not drawn from the account would be invested to generate the $51 million. WCCA Cash-on-Hand. As of July 31, 1997, WCCA had on hand approximately $72.9 million in dedicated tax revenues, with $24.4 million earmarked for the remaining predevelopment costs and about $2.5 million budgeted for additional operating subsidy of the existing convention center for the remainder of fiscal year 1997, leaving $46 million. WCCA plans to use $30 million of this money to help finance the construction cost of the project. The remaining $16 million, as well as a projected $5.8 million in additional collections for the remaining months (August and September) of fiscal year 1997, is needed to establish an operation and maintenance (O&M) reserve, which the rating agencies require to be available before WCCA enters the bond market. WCCA's financial advisors estimate that about $30 million would be required for the O&M reserve.
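The cash-on-hand figures reconcile as follows (illustrative Python; amounts in millions of dollars are the report's, and the rounding is ours):

```python
# WCCA cash position as of July 31, 1997 (figures in $ millions, from the
# report; illustrative arithmetic only).
cash_on_hand = 72.9
predevelopment = 24.4     # earmarked for remaining predevelopment costs
operating_subsidy = 2.5   # existing center's subsidy, remainder of FY1997

uncommitted = cash_on_hand - predevelopment - operating_subsidy   # 46.0
toward_construction = 30.0
toward_reserve = uncommitted - toward_construction                # 16.0

projected_collections = 5.8                           # August-September 1997
reserve_available = toward_reserve + projected_collections   # 21.8, ~$22M
reserve_required = 30.0
print(round(reserve_required - reserve_available, 1))  # 8.2, the ~$8M shortfall
```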
It is projected that about $22 million would be available at the end of fiscal year 1997 to be applied toward the reserve. Thus, assuming currently estimated costs are substantially accurate, WCCA would need about $114 million ($106 million plus an additional $8 million for the O&M reserve) if it were to enter the market in October, as originally planned, to obtain bond financing. WCCA is considering several financing options to close the shortfall, such as reallocation of the total amount of the hotel occupancy tax from the District, sale of the existing convention center, and federal grants. However, the outcome of these options is uncertain at this time. For example, the disposition of the existing convention center would not occur for some time, and it is uncertain how much WCCA would benefit from its disposition or sale, especially since there is an outstanding debt of $75 million on this center. In addition, section 304 of the 1994 Act, D.C. Code Ann. sec. 47-3206 (1981, 1996 Supp.), makes 40 percent of the dedicated hotel occupancy tax available to WCCA for financing the project, while the remaining 60 percent (about $5 million annually) is allocated for other purposes. WCCA is actively seeking to gain control of the 60 percent allocated for other purposes to assist it in closing a portion of the funding gap of the project. Table 5 highlights some of the financing options that WCCA is considering to close the funding gap. On August 29, 1997, we provided the Mayor of the District Government with copies of a draft of this report for review and comment. In a September 9, 1997 meeting, WCCA officials, including the Project Director, General Counsel, and Chief Financial Officer, generally concurred with our report and provided additional information. Written comments from the Mayor are reprinted as appendix I. We have incorporated changes as appropriate throughout the report. WCCA provided updated information concerning its progress in obtaining the necessary regulatory approvals and developing a financing plan. Specifically, WCCA stated that it appears to have obtained consensus on an MOA that will be signed by the necessary regulatory agencies. Since the Mayor commented on this report, all parties have signed the MOA, and NCPC has scheduled hearings on the proposed convention center for September 22 and 25, 1997, to consider site and design approval for the new convention center at Mount Vernon Square, alley and street closings, as well as the urban renewal plan amendments necessary to allow construction to begin at Mount Vernon Square. These are key issues that must be resolved before WCCA can proceed with the project. WCCA told us that it has a financing plan to eliminate the funding gap for the proposed convention center. This plan proposes to reallocate the Hotel Occupancy Tax revenues imposed by D.C. Code Section 47-3206—and now available to the Washington Convention and Visitors Association and the Mayor's Committee to Promote Washington—for the payment of debt service for the new convention center. In addition, the term of the convention center bonds would be authorized to extend up to 40 years to maturity; current law limits bond maturity to 30 years from issuance. WCCA is drafting legislation to authorize these changes. Approval of these or alternative steps is a key issue in moving ahead with the convention center project.
We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate and House Committees on Appropriations and their subcommittees on the District of Columbia; and the Senate Committee on Governmental Affairs, Subcommittee on Oversight of Government Management, Restructuring and the District of Columbia, and the Ranking Minority Member of your Subcommittee. Major contributors to this report are listed in appendix II. If you or your staff need further information, please contact me at (202) 512-4476.
Richard Cambosos, Senior Attorney
Pursuant to a congressional request, GAO reported on the progress of the proposed new convention center project for Washington, D.C., focusing on the project's (1) approval process, (2) building design, (3) estimated costs, (4) dedicated revenues, and (5) financing plans. GAO found that: (1) the Washington Convention Center Authority (WCCA) is faced with the challenge of obtaining sufficient financing for the construction of the new convention center project; (2) the total cost--predevelopment and construction, including contingencies--is now estimated to be about $650 million, excluding $87 million of borrowing costs and certain reserve requirements; (3) WCCA already has sufficient funds from dedicated taxes for the $40 million in projected predevelopment costs, of which a reported $15.6 million had been expended as of July 31, 1997; (4) however, funding for the entire estimated $610 million in construction costs is uncertain; (5) WCCA plans to issue revenue bonds backed by dedicated taxes to finance a portion of the construction cost of the project; (6) however, WCCA would need to have its authority to use the taxes currently dedicated to the project expanded to include using them for construction and would have to adopt and submit for City Council review a resolution authorizing the issuance of revenue bonds; (7) the current stream of existing annual dedicated tax revenues is not sufficient to support the debt required to fund the project's estimated construction cost; (8) the current earmarked tax collections are estimated to support a revenue bond issuance of $423 million; (9) WCCA estimated that if $51 million of interest earnings from bond proceeds, as well as $30 million of cash on hand from dedicated taxes as of July 31, 1997, are added to the estimated $423 million, total estimated revenues would amount to $504 million; (10) however, this would leave a shortfall of approximately $106 million; (11) assuming estimated costs are accurate, WCCA would need about $114 million ($106 million plus an estimated $8 million to satisfy an operation and maintenance reserve) if it were to enter the market in October 1997, as originally contemplated, to obtain bond financing; (12) WCCA, with the assistance of financial advisors, has been exploring options such as additional funds from the District, federal funding, and sale of the existing convention center to supplement the dedicated tax revenues; (13) also, before the project can move forward, the National Capital Planning Commission (NCPC), the central agency for conducting planning and development activities for federal lands and facilities in the National Capital Region, including the District of Columbia, must approve the concept design as well as address community concerns regarding the project; and (14) since GAO's December 1996 report, WCCA's estimated completion date has slipped 1 year to December 31, 2000, and based on the delays and approvals required, this date is uncertain.
CPP was created to help stabilize the financial markets and banking system by providing capital to qualifying regulated financial institutions through the purchase of preferred shares and subordinated debt. Rather than purchasing troubled mortgage-backed securities and whole loans, as initially envisioned under TARP, Treasury used CPP investments to strengthen the capital levels of financial institutions. Treasury determined that strengthening capital levels was the more effective mechanism to help stabilize financial markets, encourage interbank lending, and increase confidence in the financial system. On October 14, 2008, Treasury allocated $250 billion of the overall TARP funds (originally $700 billion, later reduced to $475 billion) for CPP. In March 2009, the allocation was reduced to reflect lower estimated funding needs, as evidenced by actual participation rates. On December 31, 2009, the program was closed to new investments. Under CPP, qualified financial institutions were eligible to receive an investment of 1–3 percent of their risk-weighted assets, up to $25 billion. In exchange for the investment, Treasury generally received preferred shares that would pay dividends. As of the end of 2014, all the institutions with outstanding preferred share investments were required to pay dividends at a rate of 9 percent, rather than the 5 percent rate in place for the previous 5 years. EESA requires that Treasury also receive warrants to purchase shares of common or preferred stock or a senior debt instrument to further protect taxpayers and help ensure returns on the investments. Institutions are allowed to repay CPP investments with the approval of their primary federal bank regulator, and after repayment, institutions are permitted to repurchase their warrants on common stock from Treasury. Treasury largely has wound down its CPP investments and, as of February 29, 2016, had received $226.7 billion in repayments and income from its CPP investments, exceeding the amount originally disbursed by almost $22 billion. The repayments and income included almost $200 billion in repayments and sales of original CPP investments, as well as about $12 billion in dividends and interest, almost $7 billion in proceeds in excess of costs, and about $8 billion from the sale of warrants (see fig. 1). After accounting for write-offs and realized losses from sales totaling about $5 billion, CPP had almost $0.3 billion in outstanding investments as of February 29, 2016. Treasury's most recent estimate of lifetime income for CPP (as of November 30, 2015) was about $16 billion. As of February 29, 2016, 16 of the 707 institutions that originally participated in CPP remained in the program (see fig. 2). Of the 691 institutions that had exited the program, 261 repurchased their preferred shares or subordinated debentures in full. Another 165 institutions refinanced their shares through other federal programs: 28 through the Community Development Capital Initiative (CDCI) and 137 through the Small Business Lending Fund (SBLF), a Treasury fund that was separate from TARP. An additional 190 institutions had their investments sold through auction, 39 institutions had their investments restructured through non-auction sales, and 32 institutions went into bankruptcy or receivership. The remaining 4 merged with other institutions. As shown in figure 3, as of February 29, 2016, the remaining $257.1 million in outstanding CPP investments was concentrated in half of the remaining 16 institutions.
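The program-level figures above are internally consistent, as a quick check shows (illustrative Python; all inputs are the report's rounded figures, so the totals are approximate):

```python
# CPP status as of February 29, 2016 (figures from the report; components
# are rounded, so sums are approximate).
repayments_and_sales = 200   # $ billions, original investments repaid or sold
dividends_interest = 12
excess_proceeds = 7
warrant_sales = 8
print(repayments_and_sales + dividends_interest
      + excess_proceeds + warrant_sales)       # ~227, vs. $226.7B reported

# Exit paths for the 691 institutions no longer in the program.
exits = {"full repurchase": 261, "refinanced (28 CDCI + 137 SBLF)": 165,
         "auction": 190, "restructured": 39,
         "bankruptcy/receivership": 32, "merged": 4}
print(sum(exits.values()), 707 - 16)           # 691 691
```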
The eight institutions with the highest amounts of outstanding CPP investments accounted for 88 percent ($225.5 million) of the outstanding investments, while one institution alone accounted for 49 percent ($124.97 million). The remaining $31.6 million (12 percent) was spread among the other eight institutions. Our analysis of financial condition metrics over the past 4 years indicates that among the 16 institutions remaining in CPP as of February 29, 2016, several have continued to face challenges. Although the median return on average assets—a key indicator of a company's profitability—was higher in the fourth quarter of 2015 than in 2011, 9 of the 16 institutions had negative returns in 2015. Furthermore, 6 of the 16 institutions had a lower return on assets in 2015 than they did at the end of 2011. The remaining institutions also had varying levels of reserves for covering losses, as measured by the ratio of reserves to nonperforming loans. For example, 6 of 15 institutions had lower levels of reserves for covering losses in 2015 compared to 2011, while 9 institutions had higher levels. Treasury officials stated that the remaining CPP institutions generally had weaker capital levels and worse asset quality relative to institutions that had exited the program. They noted that this situation was a function of the lifecycle of the program: stronger institutions had greater access to new capital and were able to exit, while weaker institutions had been unable to raise the capital needed to exit the program. Many of the remaining CPP institutions were on the "problem bank list" of the Federal Deposit Insurance Corporation (FDIC), and most have been delinquent on their dividend payments for several years. The problem bank list contains banks with demonstrated financial, operational, or managerial weaknesses that threaten their continued financial viability; the number of problem banks is publicly reported on a quarterly basis. Specifically, as of December 31, 2015, 11 of the then 17 remaining CPP institutions (65 percent) were on FDIC's problem bank list. The percentage of remaining CPP institutions on the problem bank list is higher than the percentages we reported for the previous 2 years: 47 of 83 (57 percent) in 2013 and 20 of 34 (59 percent) in 2014. Of the 16 CPP institutions remaining as of February 29, 2016, only 1 of the 14 required to pay dividends made the most recent scheduled dividend or interest payment. The 13 institutions that are delinquent have missed an average of 23 quarterly dividend payments, ranging from 16 to 29 missed payments per institution. Institutions can elect whether to pay dividends and may choose not to pay for a variety of reasons, including decisions they or their federal or state regulators make to conserve cash and capital. However, investors may view an institution's ability to pay dividends as an indicator of its financial strength and may see failure to pay as a sign of financial weakness. Treasury officials told us that they regularly monitor and have direct and substantive conversations with the remaining 16 institutions, including discussions about their plans to exit the program. Treasury officials expect most of the remaining CPP institutions to exit through restructurings but do not have a specific end date for exiting all their CPP investments and winding down the CPP program. EESA does not require Treasury to set a specific date on which the program will expire.
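The concentration figures in this paragraph can be verified against the reported total (illustrative Python; amounts in millions of dollars are the report's):

```python
# Concentration of the $257.1 million in outstanding CPP investments
# (figures in $ millions, from the report).
total = 257.1
top_eight = 225.5
largest = 124.97
print(round(total - top_eight, 1))        # 31.6, held by the other eight
print(round(100 * top_eight / total))     # 88 (percent)
print(round(100 * largest / total))       # 49 (percent)
```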
Although Treasury has not changed its exit strategy, which consists of repayments, restructurings, and auctions, the extent to which each approach has been used has shifted over time. Repayments. Repayments allow financial institutions, with the approval of their regulators, to redeem their preferred shares in full. Institutions have the contractual right to redeem their shares at any time. As of February 29, 2016, 261 institutions had exited CPP through repayments. Institutions must demonstrate that they are financially strong enough to repay the CPP investments to receive regulatory approval to proceed with a repayment exit. Restructurings. Restructurings allow troubled financial institutions to negotiate new terms or discounted redemptions for their investments. Raising new capital from outside investors (or a merger) is a prerequisite for a restructuring. With this option, Treasury receives cash or other securities that generally can be sold more easily than preferred stock, but the restructured investments are sometimes sold at a discount to par value. According to Treasury officials, Treasury facilitated restructurings as an exit from CPP in those cases in which new capital investment and redemption of the CPP investment by the institutions otherwise was not possible. Treasury officials said that they approved restructurings only if the terms represented a fair and equitable financial outcome for taxpayers. Treasury completed 39 such restructurings through February 29, 2016. Auctions. Treasury conducted the first auction of CPP investments in March 2012 and has continued to use this strategy to sell its investments. As of February 29, 2016, Treasury had conducted a total of 28 auctions of stock from 190 CPP institutions. Through these transactions, Treasury received about $3 billion in proceeds, which was about 80 percent of the investments' face amount. As we previously reported, Treasury has sold investments individually to date but has noted that combining smaller investments into pooled auctions remained an option. Whether Treasury sells stock individually or in pools, the outcome of this option will depend largely on investor demand for these securities and the quality of the underlying financial institutions. The method by which institutions have exited the program has varied over time. As shown in figure 4, from 2009 through 2011, the majority of institutions exiting CPP did so through repayment or by refinancing their shares through CDCI and SBLF. From 2012 to 2014, auctions were the predominant exit strategy. During that same period, restructurings also increased. For example, in 2012, 4 percent of exits (7 of 159) were restructurings. In 2014, 15 percent (8 of 52) used restructuring as an exit strategy. In 2015, restructurings remained a common strategy, representing 35 percent (6 of 17) of exits, while auctions dropped from 44 percent (23 of 52) in 2014 to 29 percent (5 of 17) in 2015. Treasury officials told us they expected restructurings to be the primary exit strategy in the future, but as noted earlier, auctions remain a possible exit strategy. Treasury expects to rely on restructurings and auctions because the overall financial condition of the remaining institutions makes full repayment unlikely. At this time, Treasury does not have any plans to fully write off any investments. Treasury officials anticipate that the current strategy of restructuring or auctioning the remaining investments will result in a better return for taxpayers.
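The shifting exit mix can be recomputed from the reported counts (illustrative Python; the counts are the report's, and the percentages round to those in the text):

```python
# Restructurings and auctions as shares of annual CPP exits (counts from
# the report; percentages are rounded as in the text).
for label, part, total in [("2012 restructurings", 7, 159),
                           ("2014 restructurings", 8, 52),
                           ("2015 restructurings", 6, 17),
                           ("2014 auctions", 23, 52),
                           ("2015 auctions", 5, 17)]:
    print(label, round(100 * part / total))   # 4, 15, 35, 44, 29 percent
```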
According to officials, any savings achieved by writing off the remaining CPP assets and eliminating costs associated with maintaining CPP would be limited, because much of the TARP infrastructure will remain intact for several years to manage other TARP programs. Treasury officials also noted that writing off the remaining assets could be seen as diminishing the equitable treatment of institutions across the program. That is, writing off the remaining assets (and thereby not requiring repayment from the remaining institutions) would be unfair to the institutions that already had repaid their investments and exited the program. We provided Treasury with a draft copy of this report for review and comment. Treasury provided technical comments that we have incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees. This report will be available at no charge on our website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or garciadiazd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. In addition to the contact named above, Karen Tremba (Assistant Director), Anne Akin (Analyst-in-Charge), Bethany Benitez, William R. Chatlos, Lynda Downing, Risto Laboski, Marc Molino, Barbara Roesmann, Christopher Ross, and Max Sawicky made significant contributions to this report.
CPP was established as the primary means of restoring stability to the financial system under the Troubled Asset Relief Program (TARP). Under CPP, Treasury invested almost $205 billion in 707 eligible financial institutions between October 2008 and December 2009. CPP recipients have made dividend and interest payments to Treasury on the investments. The Emergency Economic Stabilization Act of 2008 includes a provision that GAO report at least every 60 days on TARP activities. This report examines (1) the status of CPP, (2) the financial condition of institutions remaining in the program, and (3) Treasury's strategy for winding down the program. To assess the program's status, GAO reviewed Treasury reports on the status of CPP. In addition, GAO used financial and regulatory data to assess the financial condition of institutions remaining in CPP. Finally, GAO interviewed Treasury officials to examine the agency's exit strategy for the program. GAO provided a draft of this report to Treasury for its review and comment. Treasury provided technical comments that GAO incorporated as appropriate. The Capital Purchase Program (CPP) largely has wound down and the Department of the Treasury's (Treasury) returns on CPP investments surpassed the original amount disbursed. As of February 29, 2016, Treasury had received $226.7 billion in repayments and income from its CPP investments, exceeding the amount originally disbursed by almost $22 billion. As of the same date, 16 of the 707 institutions remained in the program. Treasury's most recent estimate of lifetime income for CPP (as of Nov. 30, 2015) was about $16 billion. Most of the remaining CPP institutions have continued to exhibit signs of financial weakness. Specifically, 9 of the 16 institutions had negative returns on average assets (a common measure of profitability) in 2015. Also, 6 institutions had a lower return on assets in 2015 than they did at the end of 2011. Treasury officials stated that the remaining CPP firms generally had weaker capital levels and worse asset quality than firms that had exited the program. Also, nearly all the firms that are required to pay dividends have continued to miss payments. Treasury expects most remaining CPP institutions to exit through restructurings but has not set time frames for winding down the program. Over the past 6 years, repayment of Treasury's investment and Treasury's auction of CPP securities to interested investors were the primary means by which institutions exited CPP. Restructurings—the expected exit method for the remaining firms—allow institutions to negotiate terms for their investments and require institutions to raise new capital or merge with another institution. With this option, Treasury agrees to receive cash or other securities, typically at a discount. Treasury officials expect to rely primarily on restructurings because the overall financial condition of the remaining institutions makes full repayment unlikely.
WIA specifies a different funding source for each of the act's main client groups—youth, adults, and dislocated workers. Our report focuses on adults and dislocated workers. Once the Congress appropriates WIA funds, the amount of money that flows to states and local areas depends on a specific formula that takes into account unemployment for the adult and dislocated worker funding streams, the number of low-income individuals for the adult funding stream, and the number of long-term unemployed for the dislocated worker funding stream. Labor allots 100 percent of the adult funds and 80 percent of the dislocated worker funds to states. The Secretary of Labor retains 20 percent of the dislocated worker funds in a national reserve account to be used for National Emergency Grants, demonstrations, and technical assistance and allots the remaining funds to each of the 50 states, the District of Columbia, and Puerto Rico. In program year 2003, Labor allotted approximately $2 billion to states for adults and dislocated workers (see app. II for a listing of program year 2003 allotments by state). Upon receiving its allotments, each state can set aside no more than 15 percent to support statewide activities. These may include a variety of activities that benefit adults, youths, and dislocated workers statewide, such as providing assistance in the establishment and operation of one-stop centers, developing or operating state or local management information systems, and disseminating lists of organizations that can provide training. In addition, each state can set aside no more than 25 percent of its dislocated worker funds to provide rapid response services to workers affected by layoffs and plant closings. The funds set aside by the states to provide rapid response services are intended to help dislocated workers transition quickly to new employment. After states set aside funds for rapid response and for other statewide activities, they allocate the remainder of the funds—at least 60 percent—to their local workforce areas (see fig. 1). Approximately 600 local workforce areas exist throughout the nation (see fig. 2). Each local area has a local workforce board that administers WIA activities within the local area, including selecting one-stop center operators, identifying eligible training providers, developing links with employers, and overseeing the use of funds for employment and training activities. WIA was intended to meet both the needs of businesses for skilled workers and the training, education, and employment needs of individuals. The act allows training and employment programs to be designed and managed at the local level to meet the unique needs of local businesses and individuals. Another aim of the act was to provide customers with easy access to the information and services they need and to empower those who need training to obtain the training they find most appropriate. One cornerstone of WIA was the one-stop concept, under which information about and access to a wide array of services would be available at a single location. At the one-stop center, customers can get information about job openings; receive job search and placement assistance; receive an assessment of their skill levels, aptitudes, and abilities; and obtain information on a full array of employment-related services, including information on local education and training providers.
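The fund flow just described can be summarized in a short sketch (illustrative Python; the dollar amounts are hypothetical, while the percentage limits are those described above):

```python
# WIA fund flow for a hypothetical state (dollar figures are hypothetical;
# the percentage limits are those described in the report).
adult_allotment = 30.0e6        # Labor allots 100% of adult funds to states
dislocated_allotment = 40.0e6   # states receive 80%; Labor retains 20% in a
                                # national reserve account

statewide = 0.15 * (adult_allotment + dislocated_allotment)  # up to 15%
rapid_response = 0.25 * dislocated_allotment   # up to 25% of dislocated funds

to_local_areas = (adult_allotment + dislocated_allotment
                  - statewide - rapid_response)
share = to_local_areas / (adult_allotment + dislocated_allotment)
print(f"{share:.0%} passed to local areas")    # 71%, above the 60% floor
```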
Through the one-stop centers, employers also have a single point of contact to provide information about current and future skills needed by their workers and to list job openings. The services typically available at one-stop centers fall into the following categories:
Core services. These include job search and placement assistance, the provision of labor market information, and preliminary assessment of skills and needs. Core services are available to all adults who come to a one-stop center, with no eligibility requirements imposed.
Intensive services. These include comprehensive assessments, case management, short-term prevocational services, work experience, and internships. Intensive services are available to qualified adults and dislocated workers who are unable to obtain or retain a job that leads to self-sufficiency.
Training services. These include occupational skills training, on-the-job training, customized training, and skill upgrading and retraining. Training services are available to qualified adults and dislocated workers who are unable to obtain or retain employment after receiving at least one intensive service.
Supportive services. These include services—such as transportation, child care, and housing—that are necessary to enable WIA participants to take part in WIA activities.
WIA requires the use of ITAs, which allow qualified individuals to purchase the training they determine best for themselves. Adults and dislocated workers use ITAs to purchase training services from eligible providers they select in consultation with case managers. Payments from ITAs may be made in a variety of ways, including the electronic transfer of funds through financial institutions, vouchers, or other appropriate methods. Payments may also be made incrementally. WIA requires that ITAs be used only to purchase training from programs listed on an eligible training provider list (ETPL). Local boards, in partnership with the state, compile this list by identifying training providers and programs whose performance qualifies them to receive WIA funds to train adults and dislocated workers. Good information allows participants to make informed training choices. In this regard, WIA requires that local boards ensure that participants have access to performance information on training providers, including the percentage of individuals completing their training program, the percentage of individuals in the program who obtained jobs, and the wages earned by these individuals. Under certain situations, however, local boards have the option of purchasing training without using ITAs. The three exceptions to using ITAs are if the activity is on-the-job training or customized training, if a local board determines that an insufficient number of eligible providers exists in the area (such as in a rural area), or if a training provider has demonstrated effectiveness in serving special populations that face multiple barriers to employment. To assess whether it is accomplishing its goals, WIA established a performance measurement system for the programs directly funded by WIA—one that emphasized results in areas of job placement, retention, earnings, and skill attainment (see table 1). WIA requires states to use Unemployment Insurance wage records to track employment-related outcomes. States submit this information to Labor in annual reports each December. States also submit quarterly performance reports, which are due 45 days after the end of each quarter.
In addition to the performance reports, states submit their updates for WIASRD every January. WIA also requires Labor to conduct at least one multisite study to determine program results by the end of fiscal year 2005. Local boards used an estimated 40 percent of the WIA funds they had available in program year 2003 to obtain training services for WIA participants. Nationally, local boards had approximately $2.4 billion in WIA funds available to serve adult participants during program year 2003 and used about $929 million for training activities, primarily occupational classroom training. The remaining funds paid for other program costs, including job search assistance, case management, and supportive services, as well as administrative costs. We estimate that 416,000 WIA participants received training during the year. However, because some individuals may have received more than one type of training, this count may include some individuals more than once. Of those trained in program year 2003, about 323,000 participants received occupational classroom training, of which about 85 percent was purchased through ITAs. Local boards also used the flexibility provided under WIA to offer a broad range of training-related activities aimed at increasing employability but not included in WIA's definition of training. Local boards nationwide used an estimated 40 percent of their WIA funds for training in program year 2003. During that year, local boards had about $2.4 billion in WIA funds available to serve adults and dislocated workers. Almost all local boards had funds from the WIA adult and dislocated worker funding streams; in addition, many boards had National Emergency Grants or funds from two state set-asides, the 15 percent set-aside for statewide activities and the 25 percent set-aside for rapid response. WIA permits local boards up to 2 years to spend each program's funding. Accordingly, to get a national picture of available WIA funds at the local level, we defined available funds as the combined amount of program year 2003 funds and funds carried over from program year 2002. Of the approximately $2.4 billion in combined WIA funds that local boards had available, about $1.8 billion (75 percent) came from the program year 2003 allocation, while the rest consisted of funds carried over from 2002 (see fig. 3). Allocations from the WIA adult and WIA dislocated worker funding streams together constituted about 80 percent of the funds local boards could use. Of the $2.4 billion available, local boards used approximately $929 million in program year 2003 to fund training activities, representing about 40 percent of the WIA funds that were available to serve adult participants in the program. The remaining funds paid for other program costs, including job search assistance, case management, and supportive services, as well as administrative costs. We found that local boards spent an estimated $724 million on training and obligated another $205 million. Obligations are funds local boards commit to pay for training but for which services have not yet been provided and costs have not yet been incurred (see fig. 4). Local boards used a slightly higher percentage of their WIA adult funds (43 percent) for training than their dislocated worker and state set-aside funds, both of which had 37 percent used for training (see fig. 5). Of the WIA dollars local boards spent on training, an estimated 79 percent was for occupational classroom training.
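The national spending estimates fit together as follows (illustrative Python; figures in millions of dollars are the report's estimates, which are rounded, so the share computes to 39 percent, reported in the text as about 40 percent):

```python
# Program year 2003 WIA funds available to local boards and their use for
# training (figures in $ millions, from the report's estimates).
available = 2400.0
py2003_allocation = 1800.0
carryover = available - py2003_allocation        # ~600, carried from PY2002
print(round(100 * py2003_allocation / available))  # 75 (percent)

spent, obligated = 724.0, 205.0
used_for_training = spent + obligated            # 929
print(round(100 * used_for_training / available))  # 39, "about 40 percent"
```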
Boards used the remainder of the funds to pay for on-the-job training, customized training, and other types of training, including adult basic education and skill upgrading. In addition to using WIA funding, many local boards also leveraged other sources of funding to help pay the costs of training for WIA participants. Some of these funding sources were federal programs, including Trade Adjustment Assistance (TAA), the H-1B skill grant program, and Temporary Assistance for Needy Families (TANF). For example, in program year 2003, one board we visited in Maryland enrolled 49 WIA participants in training funded by TAA. Other sources of funding came from state and local governments or private entities. For example, at one site we visited in Georgia, the local public school system paid for high school equivalency classes for WIA participants, including teacher salaries, testing, and books and materials. In addition, the local housing authority provided training on a variety of soft skills for 600 of its clients at the one-stop center. Overall, we estimate that 416,000 WIA participants were enrolled in training during program year 2003 and that about 323,000 participants received occupational classroom training. In our survey, local boards reported the number of people enrolled in each category of training rather than the total receiving training. As a result, it is possible that the 416,000 includes some duplication of individuals who received more than one kind of training during that year (see table 2). We estimate that more than three-quarters of the training that participants received (78 percent) was occupational classroom training (see fig. 6). On-the-job training and customized training each accounted for an additional 6 percent of the total training that occurred in program year 2003, while 10 percent of training included other activities, such as adult education, literacy classes, entrepreneurial training, and skill upgrading. Approximately 85 percent of the occupational classroom training provided during program year 2003 was purchased by participants through ITAs. At some local boards, however, specific kinds of occupational classroom training were obtained without the use of ITAs. For example, one local board we visited in Iowa used WIA funds to pay for classes in typing skills at the local community college, but ITAs were not used to pay for this training because it was short-term training that did not result in a credential. At the eight sites we visited, participants used ITAs to pay for training in a wide variety of occupations. WIA regulations require that participants select a training program directly linked to employment opportunities. Each of these local boards told us that nursing and other health care professions were in high demand locally, and accordingly, participants in all eight areas sought training in health care occupations. Other high-demand occupations at some of the local boards we visited included information technology, truck driving, manufacturing, and teaching. As local labor markets have changed, some boards have developed specially tailored programs designed to mirror shifts in labor demand. For example, one local area in California faced massive layoffs in the high-tech industry but had a dearth of qualified teachers in the local schools. As a result, the local board created a program to train dislocated high-tech workers to become teachers.
In addition to providing training activities, local boards used the flexibility provided under WIA to offer a broad range of intensive services, some of which are aimed at increasing job skills. These training-related activities, including work experience, internships, and computer skills training, are not captured in WIA's definition of training and, therefore, are not paid for with training dollars. Accordingly, neither the amount of funding spent on these activities nor the number of participants who benefit from them is identified in our statistics on training. Nevertheless, these activities can play a significant role in preparing WIA participants for successful employment. Although many WIA participants do not need extensive training to obtain a job, some still need help improving a variety of skills that will further their chances of successfully searching for and retaining a job. Much like training, these activities are intended to increase employability. Many are short-term activities, such as computer lab training and other intensive skills workshops. For example, approximately one-half of local boards use WIA funds to offer participants computer lab training in software applications, basic keyboarding, and other computer skills. One board we visited in Georgia offers another type of short-term, intensive skills workshop through its Basic Industrial Maintenance program. During this 4-week training course, participants learn a variety of skills, including the basics of construction, plumbing, and carpentry. Other training-related activities that are intended to increase skills but are not included in WIA training are internships and work experience opportunities. For example, one local board we visited in Iowa spent about $79,000 in program year 2003 to provide internships and work experience opportunities to 41 WIA participants. Moreover, some boards we visited used WIA funds to pay for supportive services, such as child care and transportation, that enable participants to attend training. Like funding spent on training-related activities, the cost of these supportive services is not reflected in the amount of WIA funding that local boards spend on training. However, these services can represent a large investment of WIA dollars. Local boards have flexibility in whether and how they use WIA funds for services that support training. For example, one local board we visited in rural Iowa spent over $160,000 in program year 2003 on a wide array of supportive services for people in training, including child care, transportation, eye exams, and glasses. Because the area contains several correctional facilities, the board also used a portion of these WIA funds to purchase bicycles for ex-offenders who were attempting to reenter the workforce but had lost their driving privileges. Another local area we visited in California spent $87,000 on supportive services during program year 2003; in addition to paying for child care and transportation, these WIA funds paid for items such as books, uniforms, and tools, as well as services such as fingerprinting and tuberculosis testing, which some training programs require. Not all boards provided WIA-funded supportive services to people in training, however. One local area we visited in Maryland did not use WIA funding to provide supportive services to its adult participants, although it referred those in need to other agencies for assistance.
Most local workforce boards have developed policies to manage the use of ITAs, but many boards have encountered challenges in implementing them. Local boards often require participants to complete various skill assessments prior to entering training and to gather additional information on the occupation for which they desire training. In addition, they generally limit the amount of money participants can spend on training using ITAs and how long participants can spend in training. Although the vast majority of local boards use ITAs, most also said they have faced challenges in managing their use. The challenge most frequently identified was the lack of good performance data on training providers. Local boards in rural areas face a different challenge—a lack of nearby training providers. Some boards have identified initiatives to mitigate the challenges they face. We estimate that most local boards established procedures to ensure that any training purchased using ITAs is warranted and placed spending limits on individual ITAs to control costs. WIA regulations require that participants receive at least one intensive service, such as individual counseling and career planning, before enrolling in training. Many local boards also require participants in the adult and dislocated worker programs who want training to first complete specified activities to demonstrate their need for training. For example, local boards may require participants to complete skill assessments or attend specific workshops. In addition, they may require participants to gather information on the occupation for which they want training or document their inability to find employment (see fig. 7). More than 80 percent of the local boards require adults and dislocated workers to complete specified skill assessments or tests before being allowed to purchase training with ITAs. For example, three local boards in Georgia, Kansas, and Mississippi commented that participants are required to complete career assessments or occupational interest inventories prior to training. A local board in New York mentioned that staff are required to interview participants and determine whether they are in need of training and have the skills and qualifications to successfully participate in the training program. A local board we visited in Georgia required participants to demonstrate an aptitude for and interest in an area before enrolling in training. Depending on how proactive the participant is, the process can take up to 6 months before the participant is enrolled in a training program. Approximately 70 percent of the local boards require adults and dislocated workers to gather additional information about the occupation for which they want training. For example, a local board in Arizona noted that participants must interview three people working in their desired field; another board, in Washington, commented that participants are required to conduct informational interviews with employers in the occupation they wish to pursue. Similarly, three of the local boards that we visited required participants to perform specific tasks prior to entering training. For example, one local board we visited in Georgia required participants to gather specific information on training providers and then compose an essay explaining why they chose a particular course and provider.
Another board we visited, in California, asks participants to research and document information on the training they want to pursue, including the occupation's starting wage, whether this wage is sufficient to support them and their family, working conditions, available job openings, and education and skill requirements. Also, a local board we visited in Maryland requires participants seeking certain types of training, such as in the operation of a tractor-trailer, to obtain prehire letters that guarantee employment once training is completed. About one-third of local boards required adults and dislocated workers to complete workshops prior to enrolling in training. For example, a West Virginia board noted that participants must demonstrate their soft skills or complete a soft skills program before entering occupational training. One of the local boards we visited in California requires that participants attend an orientation and a soft skills workshop prior to entering training. It also offers additional voluntary workshops, including ones in which participants explore different vocations, complete applications, practice interviews, and perform self-assessments. Similarly, a local board we visited in Georgia requires participants to take a general orientation and résumé writing workshop before being eligible for ITAs. Approximately 85 percent of local workforce boards limit the amount of money participants can spend on training using ITAs. An estimated 31 percent of the local boards limited ITAs to between $3,000 and $5,000 (see fig. 8). At local boards limiting ITAs, the caps ranged from $350 at one local board to $15,000 at three local boards. One of the local boards we visited in Maryland said that its ITA cap had changed four times since program year 2000. The cap started at $4,000, was then increased to $4,500 because of inflation, and later rose to $5,500 because of the increased demand for, and cost of, information technology training. However, because of reduced WIA funding, the board later lowered the cap to $3,000. Rather than having a single dollar limit on ITAs, two local boards reported having ITA caps that could vary by training program. Specifically, a local board in Hawaii limits an ITA for any particular training program to the cost of the least expensive provider among those offering equivalent programs, while a local board we visited in Iowa limits an ITA for a particular training program to the highest cost of obtaining that training at a state public institution. Most of the boards that impose dollar caps on ITAs expect the amount of the ITA to also cover the costs of books, tools, and uniforms. A number of local boards also expect the amount of the ITA to cover the costs of supportive services and other items, such as fees for licenses, certifications, tests, and physical exams (see fig. 9). In addition to limiting the amount of money participants can spend on training, an estimated two-thirds of the local boards also limit how long participants can spend in training. The most frequent limit on training was 2 years. Many of these local boards, however, indicated that time limits could be waived depending on an individual's circumstances. One of the local boards we visited in Iowa had no time limit for training using ITAs but encouraged shorter-term training lasting one or two semesters, especially for dislocated workers, because of the limited period for collecting unemployment insurance.
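The Hawaii and Iowa cap rules just described amount to simple pricing rules. The sketch below (illustrative Python; the provider names and prices are hypothetical) contrasts the two approaches:

```python
# Two ITA cap rules described in the report (provider prices are hypothetical).
equivalent_program_costs = {"Provider A": 4200, "Provider B": 3600,
                            "Provider C": 5100}  # same program, three vendors
state_public_costs = [3900, 4400]                # cost at state public
                                                 # institutions offering it

# Hawaii-style rule: cap at the least expensive equivalent provider.
hawaii_cap = min(equivalent_program_costs.values())   # 3600

# Iowa-style rule: cap at the highest cost among state public institutions.
iowa_cap = max(state_public_costs)                    # 4400

print(hawaii_cap, iowa_cap)
```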
The use of particular training providers varied among the local boards we visited. For example, in program year 2003, one local board we visited in California used 48 private schools as training providers for 305 participants and 3 community colleges for 42 participants. This local board also used fifteen 4-year colleges for 202 participants; the majority of these participants were former high-tech workers being trained to become teachers. A local board we visited in Georgia relied heavily on private, proprietary schools, using them for about 1,000 participants each year. The board believes these schools are more flexible than other training providers and offer a wide array of training courses. On the other hand, a local board we visited in Iowa used community colleges for 90 percent of the 246 ITAs issued in program year 2003. Local boards responding to our survey reported that 37 percent of the ITAs issued in program year 2003 were used at proprietary schools and 35 percent were used at community colleges. The remainder were used at various providers, including 4-year colleges, public vocational and technical schools, and community-based organizations. In December 2004, Mathematica Policy Research issued an interim report concluding that the way ITAs are administered influences the likelihood of participants requesting counseling or receiving ITAs. The study also found that different approaches to administering ITAs appeared to have a limited effect on participants' training choices. Labor funded the 3-year study to assess how different approaches to administering ITAs affect training choices, employment and earnings outcomes, returns on investment, and customer satisfaction. Eight sites were included in the study; these sites were located in or around Atlanta, Georgia; Bridgeport, Connecticut; Charlotte, North Carolina; North Cook County, Illinois; Jacksonville, Florida; and Phoenix, Arizona. Mathematica's study results are not generalizable beyond these eight sites. A later report by Mathematica will present an analysis of how the ITA approaches affect additional outcomes, including training completion, customer satisfaction, and employment and earnings after training, as well as an analysis of the return on the investment in training. Most local boards faced some challenges in their efforts to implement ITAs, and local boards in rural areas face a unique challenge. The majority of local boards cited as challenges the lack of performance data on training providers, the timing of the training offered, getting new training providers on the eligible training provider list (ETPL), and linking ITAs with economic development strategies (see table 3). A few local boards have found ways to mitigate some of these challenges. Nearly two-thirds of the local workforce boards cited the lack of performance data on training providers as a challenge. For example, a local board in Wisconsin commented that the lack of consistent data on training providers reduces the value of the ETPL to local boards. A local board in Missouri noted that one of the greatest challenges lies in not having reliable information regarding the quality and relevance of the training being offered by training providers. The board further stated that the state's report card containing performance information on training providers in Missouri was incomplete or unavailable.
Local boards we visited in California and Iowa said that a statewide report card on training provider performance did not exist. In lieu of a statewide report card, the California boards tracked training provider performance themselves, while the Iowa boards relied upon informal feedback about provider performance.

Approximately 60 percent of the local boards cited the timing of the training offered by providers as a challenge. For example, two local boards we visited in Georgia and Iowa said that some participants are unable to attend training programs that are offered only during a regular academic schedule. The Iowa board explained that some participants who have to wait too long for a training program to begin may have their unemployment insurance benefits run out before the training can be completed. On the other hand, some local boards have found solutions to deal with this issue. For example, a local board in Washington commented that it purchased classroom group training to offer more flexibility as to when training will be offered and to satisfy the demand for particular training. Similarly, a local board in Massachusetts noted it persuaded local technical high schools to offer programs at night, thereby resulting in greater availability of training in trade-related fields that are in high labor demand. A local board we visited in Maryland developed close relationships with area community colleges that now schedule occupational training outside the regular academic calendar.

More than half of the boards cited getting new providers on the ETPL as a challenge. This has been a long-standing concern. We reported in 2001 that, according to training providers, the data collection burden resulting from participation in WIA can be significant and may discourage their willingness to participate as WIA training providers. Labor has heard these concerns from training providers and has approved waivers for 30 states. These waivers, in effect, give states additional time to address data collection challenges. However, getting training providers, particularly community colleges, to participate in the ETPL remains a concern for some local boards. For example, local boards in California, Indiana, Massachusetts, and Michigan noted that some providers, community colleges in particular, are reluctant to participate in the ETPL. A local board that we visited in California elaborated on this point, stating that community colleges in its area are operating at full capacity and do not need WIA dollars or participants and, therefore, are not interested in getting on the ETPL. Some local boards are finding ways to encourage providers to participate. For example, one local board in Massachusetts has been working collaboratively with other boards throughout the state to meet with key figures in the community college system to provide information, consultation, and feedback. Two local boards we visited in Iowa and Maryland have developed strong relationships with the 16 community colleges in their areas, each of which is on its state’s list. Another local board we visited in California conducts regional ETPL workshops with training providers and shares ideas with other local boards in the surrounding areas.

More than half of the local boards found linking ITAs with local economic and business development strategies to be a challenge. Several local boards provided different examples of why they found it difficult to provide participants with training in high-demand occupations in their area.
The area around one local board we visited in California faced a nursing shortage, but nearby training for nurses was difficult to obtain. Some area community colleges were opting not to provide nursing training because they could not recoup the costs of operating such programs. A local board we visited in Georgia also said that keeping up with businesses’ needs is a challenge and noted that some information technology sector-based training courses are not always available.

Other local boards identified some initiatives they are pursuing to strengthen links with economic development. For example:

A local board in Michigan partners with a local technical training center to develop intensive, short-term certificate programs in high-skill, high-wage, and high-demand fields. The technical center is operated by a local community college but offers non-credit certificate programs responsive to business and community needs.

A local board in Ohio works closely with the local Chambers of Commerce and economic development partners to formulate training programs that are based on employer demands. Specifically, if the board hears that a group of employers has a skill need, then the board will develop the appropriate training program with a service provider.

A local board in Texas uses an industry cluster analysis report to focus attention on specific industries and then targets funds for training in these sectors. Training institutions must prepare industry-approved training curriculums in order to have programs approved by the local board and to have ITAs issued to participants for training.

A local board in Washington partnered with representatives from local education, economic development, and government to develop a shared blueprint for economic development and training around eight high-demand industry clusters.

A number of local boards representing rural locations mentioned that the requirement to use ITAs to purchase training from providers on the ETPL presented them with a problem different from those faced by their counterparts in urban areas. Local boards in Arizona, Colorado, Kansas, Montana, North Carolina, and Utah all mentioned that participants in rural areas have few nearby training providers from which to choose. Additionally, local boards in Kansas and Montana noted that being located in a rural area with limited providers makes dealing with ITAs burdensome. A local board that we visited in rural California applied for and received a waiver from using ITAs. The board directly contracts with a community college for a 2-year program to train participants to become registered nurses and an 18-month program to train participants to become licensed vocational nurses. Through April 2004, 73 participants had completed the nursing programs, and according to the board, all were employed in their respective fields.

Little is known on a national level about the outcomes of those being trained. Certain aspects of the Workforce Investment Act Standardized Record Data (WIASRD) have been found to be incomplete and unverified. Additionally, data generally cannot be compared across states or local areas because of differences in data definitions. Labor is taking some steps that may address these concerns and plans to complete an evaluation that will measure the overall impact of the WIA program. Our analysis of program year 2003 WIASRD has shown that the database does not contain information for a large number of data elements.
It is unclear whether these values are missing because Unemployment Insurance wage records are not available or because they simply were not entered into the database by officials. This finding reaffirms issues that have been raised previously about the quality of data that Labor uses to assess program performance. Our 2004 report found that performance data submitted by states in quarterly and annual reports were not sufficiently reliable to determine outcomes for the WIA programs. Labor’s Office of Inspector General (OIG) raised the same concerns in 2002 by noting that because of inadequate oversight of data collection and management, little assurance exists that the states’ performance data for all WIA programs are either accurate or complete. OIG’s report found that of the 12 local areas examined, none had adequately documented procedures for validating participant performance data. Similarly, none of the four states examined had sufficient procedures to ensure the accuracy of their reported performance data. At the time, OIG recommended that states use a statistical sampling method for validating reported data.

Because of questions about the comparability of data elements, states’ performance data are of limited value for national comparisons, or even comparisons within a single state. Labor allows local areas to exercise some flexibility in determining how to collect and report certain performance data on participants. For example, while local areas collect data on a participant who leaves the program, they use different methods to determine when a person has officially exited from the program. WIASRD guidelines define participants as having exited the program on the last date they received WIA services. Labor allows two ways to define exit: (1) at the point of case closure or (2) when the participant has not received any services or training for more than 90 days and is not scheduled for future services. We found that local area definitions differ from this and from each other. For example, one local board we visited defines exit as occurring when participants are finished with their WIA services; another board defines exit as occurring when participants have found a new job and the wages for their new job are considered acceptable (regardless of the number of days that have passed since their last service). Wage and employment outcomes under these different circumstances would vary greatly, making it difficult to compare outcome data across local areas. In a prior report, we also noted that the data are not comparable on what constitutes a credential—whether it is the attainment of a certified skill or of a degree. Labor allows states and local areas to determine what constitutes a credential.

Labor is taking some steps to address data quality concerns and improve the data used at the national level. This includes implementing a new project to validate the performance information collected and reported under WIA. The data validation initiative covers both the accuracy of reports submitted to Labor on program activity and performance outcomes and the accuracy of individual data elements. The report validation checks the accuracy of states’ software used to calculate the performance reports submitted to Labor. For example, if a state reports, for a particular time frame, that 100 adults found employment after they received services, the validation software searches through the state’s electronic records to ensure that 100 records are found that match these criteria.
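To make this report validation check concrete, the sketch below recomputes a reported figure from record-level data and compares it with the figure the state submitted. This is only a minimal Python illustration of the concept, not Labor's actual validation software; the file layout and the field names (exit_date, employed_after_exit) are hypothetical.

import csv

def validate_reported_count(records_file, reported_count,
                            period_start, period_end):
    # Recompute the number of exiters in the period who found
    # employment, then compare the result with the reported figure.
    # Field names are hypothetical; ISO dates compare correctly as strings.
    matches = 0
    with open(records_file, newline="") as f:
        for row in csv.DictReader(f):
            if (period_start <= row["exit_date"] <= period_end
                    and row["employed_after_exit"] == "Y"):
                matches += 1
    return matches == reported_count, matches

# Example: a state reported that 100 adults found employment after
# receiving services during program year 2003.
ok, recomputed = validate_reported_count(
    "exit_records.csv", 100, "2003-07-01", "2004-06-30")
print("report matches records" if ok
      else "mismatch: records support %d, not 100" % recomputed)

A check of this kind only confirms that the reporting software counted correctly; it says nothing about whether the underlying records themselves are accurate, which is the role of data element validation discussed next.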
Data element validation, on the other hand, evaluates the accuracy of the participant data used to generate reports submitted to Labor. The process compares selected information from a sample of participant exit records with the original paperwork that contained this information. Data element validation results in an estimate of the error rate for each data element that has been selected for validation. While Labor provides guidelines for the data validation, states are responsible for executing the initiative. Program year 2003 is the first year states have completed the process, and Labor plans to review state results and start setting acceptable standards for error rates. Eventually, Labor plans to hold states accountable for meeting accuracy standards. Once these accuracy standards are in place, states failing to meet the standards may lose eligibility for incentive awards or, in cases with significant deviations from the standards, may be sanctioned. Because this initiative is relatively new, it is too soon to tell whether it will satisfactorily resolve all data quality problems associated with WIASRD outcome measures—but it is a step toward addressing these concerns.

Labor is also in the initial stages of developing a single, streamlined reporting and record-keeping system that will replace a series of databases, including WIASRD, and may address some concerns about data quality. The system, formally called the ETA Management Information and Longitudinal Evaluation (EMILE), would replace the current data collection and reporting requirements for 12 employment and training programs. The goal of the new system is to allow for consistent, comparable analysis across Labor’s employment and training programs, using the same definitions for specific measures. These definitions, known as common measures, are to take effect on July 1, 2005, well before EMILE is implemented. A feasibility analysis on the EMILE proposal, which will provide information on next steps, is in the planning stages and should be completed by December 2005, but it is unclear how soon EMILE will be implemented.

Labor plans to meet the requirement that it conduct at least one multisite study to determine the general effectiveness of the WIA program and the specific impacts of WIA services on the community and participants involved. WIA requires Labor to conduct at least one multisite evaluation using a control group by the end of fiscal year 2005; however, as noted in a prior report, Labor will not meet this deadline. Labor is waiting for WIA reauthorization to begin the evaluation, and will likely wait until the second year after reauthorization to commission the study. Labor anticipates it will take at least 5 years to complete the first evaluation.

WIA represented a fundamental shift for workforce development because it attempted to significantly change how employment and training services were provided and because it provided considerable latitude to those implementing WIA at the state and local levels, yet little is known about the impacts of these changes. In program year 2003, local workforce boards used a substantial amount of WIA funds to train a large number of individuals. During these times of increasingly tight federal budgets, it is important to know what types of training are the most successful and how the outcomes from training compare with outcomes from other services. We previously recommended that Labor take actions to improve data quality and to conduct an impact evaluation of WIA services.
While Labor is taking steps to address the accuracy of data contained in WIASRD, reliable information is not yet available on a national level as to the outcomes of those being trained. In addition, Labor will likely wait until 2007 to conduct the multisite impact evaluation required by WIA. Our findings reaffirm the need for a continued focus on resolving reported data quality issues and determining what services are the most successful.

We provided a draft of this report to Labor for review and comment. In its comments, Labor acknowledged that the WIA reporting system currently has limited information on training expenditures and training outcomes, but noted that some of our information conflicts with its estimates of these activities. In particular, Labor states that the Administration’s estimate of the amount of WIA funds spent on training is lower than the 40 percent that we estimate was used for training in program year 2003. In addition, Labor estimates adult training enrollments to be roughly 200,000.

We agree with Labor that the WIA reporting system has limited reliable information. As a result, we went directly to the local workforce boards to obtain as complete a picture as possible on the extent to which WIA funds were used for training and how many adults were trained in program year 2003. Our information differs from Labor’s estimates in two ways. First, our report indicates that 40 percent of WIA funds were used for training in program year 2003. We define funds used for training to include funds spent as well as funds obligated. As shown in the report, of the approximately $929 million used for training in program year 2003, about $724 million was actually spent and another $205 million was obligated. Second, Labor’s estimate of 200,000 adults enrolled in training includes only those adults reported in WIASRD who exited from the program. Our estimate of 416,000 comes directly from the local workforce areas that provided the training and includes the total number of adults who received training in program year 2003, including those who had not exited from the program. We believe our estimates of the amount of WIA funds used for training and the number of adults trained represent a more complete and accurate picture than Labor’s estimates because we included all funds used for training in program year 2003, whether they were spent or obligated, and counted all adults who received training in program year 2003, not just those who exited the program.

Regarding our discussion on data quality, Labor stated that it appreciates that we recognize the steps taken to improve data quality through data validation and noted that it has provided states with the software, handbooks, user guides, and technical assistance necessary to develop reports and document validation. Labor commented that it plans to continue supporting data validation during the transition to common measures and is currently revising the software and related materials to match the new reporting requirements. In addition, Labor stated that in the coming months it will be publishing proposed revisions to the WIASRD in the Federal Register. Labor has also issued guidance standardizing the definition of “exit” for purposes of assessing program performance across all programs implementing common measures. Labor said that states will begin implementing the change on July 1, 2005, and believes that this will improve the comparability of WIA outcome data.
We commend Labor’s efforts to improve data quality and acknowledge that these actions are a step in the right direction to having a reporting system that contains complete and accurate information. Labor also provided technical comments that we incorporated where appropriate. Labor’s entire comments are reproduced in appendix III.

We will send copies of this report to the Secretary of Labor, relevant congressional committees, and other interested parties and will make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. A list of related GAO products is included at the end of this report. If you or your staffs have any questions about this report, please contact me at (202) 512-7215 or at nilsens@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Other contacts and staff acknowledgments are listed in appendix IV.

We were asked to determine (1) to what extent Workforce Investment Act (WIA) funds have been used for training, (2) how local workforce investment boards have managed the use of Individual Training Accounts (ITA) and what challenges they have encountered, and (3) what is known at the national level about the outcomes of those receiving training. To address these issues, we conducted a Web-based survey of all local workforce investment boards in the 50 states, the District of Columbia, and Puerto Rico, and visited eight local boards located in four states. We conducted our work from June 2004 to May 2005 in accordance with generally accepted government auditing standards.

To determine the extent to which program year 2003 WIA funds were used for training, how local workforce boards manage ITAs, and what challenges they have encountered, we conducted a Web-based survey of the local workforce investment boards for the 590 local workforce investment areas in existence in program year 2003. Program year 2003 was the most recent year for which complete data were available. The basis for our list of local workforce investment boards was the directory from the National Association of Workforce Boards (NAWB). To view the survey, go to www.gao.gov/cgi-bin/getrpt?GAO-05-807SP. Prior to administering the survey, we pretested the content and format of the questionnaire with a number of local workforce investment boards to determine whether (1) the survey questions were clear, (2) the terms used were precise, (3) respondents were able to provide the financial and client data we were seeking, and (4) the questions were unbiased. We made changes to the content and format of the final questionnaire based on pretest results. The surveys were conducted using self-administered electronic questionnaires posted on the Web. We received completed surveys from 428 boards (a 73 percent response rate).

We attempted to assess the reliability of the responses to survey questions that asked for quantitative data relating to WIA funds used for training and numbers of clients served. We included questions in the survey that asked whether or not the local board carried out certain practices or procedures to ensure that databases or data systems used to produce the financial and client information we asked for were in fact reliable.
These questions asked (1) if there were written procedures that defined data elements or specified how data were collected; (2) if routine internal reviews of data were conducted to check for errors in completeness, accuracy, or reasonableness; (3) if periodic monitoring or audits of the data were conducted to check for errors in completeness, accuracy, or reasonableness; and (4) whether or not routine quality control procedures were in place such as data verification to source documents or computer edit checks. We asked these four questions separately for both the financial data and the client data obtained in the survey. If the local board responded that it had used at least two of the four practices or procedures to monitor the quality of the financial and participant data provided in its survey, we either accepted the responses or called for clarification, depending on which procedures were used; this screening rule is sketched below. If the local board responded that three or four of these practices or procedures were not used for either financial or client data, we did not use those data. In cases where the local board responded that it was not sure whether these practices or procedures had been carried out or did not answer three or more of the four questions, we telephoned the board to try to determine whether or not it had actually carried out these practices or procedures. In cases where we determined that the data actually did meet our data reliability criteria, based on these telephone calls, we accepted the responses. On the basis of the criteria, we accepted the financial data for all 428 of the completed surveys and we accepted the participant data for 425 of the completed surveys. In the three cases where we did not accept the participant data, we retained and used responses from other sections of the survey. In addition to including the questions on data reliability in the survey, we also checked the consistency of responses on a number of survey questions that asked for numeric data. On the basis of these consistency checks, some responses were dropped from our analysis.

We generated estimates for the population of 590 boards by treating the 428 responding boards as a simple random sample. We chose to treat the 428 responding boards as a simple random sample from the population of 590 based on an analysis of the differences between the responding boards and the nonresponding boards. To determine if there were any significant differences between the responding boards and the nonresponding boards, we contacted officials in the 50 states, the District of Columbia, and Puerto Rico to obtain the number of unemployed individuals for each local area. We obtained this number for all 590 boards in the population. We used this information to determine whether sample-based estimates of this characteristic generated from the responding boards compared favorably with the known population values. The known population value of the number of unemployed individuals fell within the 95 percent confidence interval surrounding the sample-based estimate. On the basis of these results, and our assessment that the number of unemployed individuals is correlated with key items we were estimating, we concluded that treating the 428 responding boards as a simple random sample is not likely to introduce significant bias into estimates. Some of the 428 responding boards did not provide responses to all of the items in the survey.
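The screening rule referenced above is essentially a small decision procedure. The following Python sketch is one way it could be expressed; the response codes are hypothetical, and the actual review also weighed which specific procedures were in place, a nuance this simplification omits.

def screen_board_data(answers):
    # answers: the board's responses to the four data quality
    # questions, each "yes", "no", or "unsure" (unanswered questions
    # treated as "unsure").
    if answers.count("no") >= 3:
        return "do not use data"          # three or four practices not used
    if answers.count("unsure") >= 3:
        return "telephone for follow-up"  # too little information to decide
    if answers.count("yes") >= 2:
        return "accept or seek clarification"
    return "telephone for follow-up"

# Example: written procedures and routine internal reviews in place,
# unsure about periodic audits, no routine quality control checks.
print(screen_board_data(["yes", "yes", "unsure", "no"]))

Separately from this screening of whole responses, some boards that passed the screen still left individual items blank, which was addressed through imputation, as described next.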
To improve our estimates, we employed a nearest neighbor hot deck imputation methodology to account for nonresponse on numerical items. Because we decided to treat the respondents as a simple random sample of boards, our results are estimates of the population of 590 boards and thus are subject to sampling errors that are associated with samples of this size and type. Our confidence in the precision of the results from this sample is expressed in 95 percent confidence intervals, which are expected to include the actual results in 95 percent of samples of this type. We calculated confidence intervals based on methods that are appropriate for a simple random sample. All percentage estimates have margins of error of plus or minus 5 percentage points or less. All numerical estimates other than percentages have relative margins of error of plus or minus 15 percent or less, except for those shown in table 4. For example, an estimate of $1,000,000 with a relative margin of error of plus or minus 15 percent would have a 95 percent confidence interval of $850,000 to $1,150,000. (A sketch of these estimation steps follows this discussion.)

The practical difficulties of conducting any survey may introduce other kinds of errors, commonly referred to as nonsampling errors. For example, difficulties in how a particular question is interpreted, in the sources of information that are available to respondents, or in how the data are entered into a database or analyzed can introduce unwanted variability into the survey results. We took steps in the development of the questionnaire, the data collection, and the data analysis to minimize these nonsampling errors. For example, social science survey specialists designed the questionnaire in collaboration with GAO staff with subject matter expertise. Then, the draft questionnaire was pretested with a number of local boards to ensure that the questions were relevant, clearly stated, and easy to comprehend. As mentioned above, we included additional questions to determine whether certain practices or procedures had been carried out on the financial and client data. When the data were analyzed, a second, independent analyst checked all computer programs. Since this was a Web-based survey, respondents entered their answers directly into the electronic questionnaire. This eliminated the need to have the data keyed into a database, thus removing an additional source of error.

We used two criteria in selecting site visit locations. First, we stratified the 48 continental states and the District of Columbia into four categories according to the amount of program year 2003 adult and dislocated worker formula funds to reflect states with varying funding sizes. The four categories were less than $10 million, $10 million to $25 million, over $25 million to $50 million, and over $50 million. We selected one state from each category. Second, we chose geographically dispersed states to help illuminate regional differences in implementing ITAs. From within each state, we judgmentally selected two local boards to provide a mix of urban and rural areas (see table 5). At each location visited, we obtained general information about the local workforce area and additional information on WIA funding, local training policies implemented, challenges encountered, innovative practices, and reliability of data systems.
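To illustrate the estimation steps referenced above, the sketch below pairs nearest neighbor hot deck imputation for item nonresponse with a simple-random-sample estimate of a population total and its 95 percent confidence interval. The board records, variable names, and the use of a finite population correction are illustrative assumptions, not details drawn from the actual analysis.

import math

def hot_deck_impute(boards, item, aux):
    # Nearest neighbor hot deck: fill a missing numeric item with the
    # value from the responding board closest on an auxiliary variable
    # (here, the number of unemployed individuals in the local area).
    donors = [b for b in boards if b[item] is not None]
    for b in boards:
        if b[item] is None:
            nearest = min(donors, key=lambda d: abs(d[aux] - b[aux]))
            b[item] = nearest[item]

def srs_total_estimate(values, pop_size):
    # Estimate a population total from a simple random sample and
    # attach a 95 percent confidence interval; the finite population
    # correction is an assumption about the exact formula used.
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / (n - 1)
    se_total = pop_size * math.sqrt((variance / n) * (1 - n / pop_size))
    total = pop_size * mean
    return total, (total - 1.96 * se_total, total + 1.96 * se_total)

# Hypothetical board records; one board did not report training funds.
boards = [
    {"unemployed": 5200, "training_funds": 1400000},
    {"unemployed": 4800, "training_funds": 1100000},
    {"unemployed": 5100, "training_funds": None},
    {"unemployed": 7500, "training_funds": 2300000},
]
hot_deck_impute(boards, "training_funds", "unemployed")
total, interval = srs_total_estimate(
    [b["training_funds"] for b in boards], pop_size=590)
print("estimated total: %.0f (95%% CI %.0f to %.0f)"
      % (total, interval[0], interval[1]))

Nearest neighbor donors keep imputed values plausible because the auxiliary variable is correlated with the items being estimated, the same rationale given above for the nonresponse bias analysis.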
To increase our confidence in the reliability of the data we gathered from our survey, we interviewed each local board we visited about data monitoring and quality control procedures and policies with respect to their financial and participant databases. We also asked the local boards we visited to show us samples of records in their databases and to trace them to source documents. We generally found their data quality processes and procedures to be sufficiently reliable for the purposes of our report.

To determine whether the Workforce Investment Act Standardized Record Data (WIASRD) might be a viable source of data for outcomes of training participants, we reviewed our prior reports about the reliability of the WIASRD data and a report by Labor’s Office of the Inspector General on WIA performance outcomes. We also performed electronic tests of the program year 2003 WIASRD data to check for missing values. The variables analyzed included employment in the first, third, and fifth quarters after the exit quarter, and the type of credential attained. Missing data due to unemployment insurance wage data lags were taken into consideration. On the basis of our analysis, we determined that the WIASRD data elements pertinent to this report were not sufficiently reliable for our purposes. We have discussed the data reliability issues throughout the body of the report.

Joan Mahagan, Assistant Director, and Wayne Sylvia, Analyst-in-Charge, managed all aspects of the assignment. Rebecca Woiwode made significant contributions to this report in all aspects of the work. In addition, Amanda Mackison and Matthew Saradjian assisted in the survey design, data collection, and report writing; Stuart Kaufman and Shana Wallace assisted in the design of the national survey and the assessment of the survey’s data reliability; James Ashley and Sidney Schwartz assisted in data projections of the national survey; Grant Mallie and Stefanie Bdzusek assisted in analyzing the national survey responses; Joan Vogel assisted in assessing the data reliability of the WIASRD data; Jessica Botsford and Richard Burkard provided legal support; and Corinna Nicolaou provided writing assistance.

Workforce Investment Act: Labor Should Consider Alternative Approaches to Implement New Performance and Reporting Requirements. GAO-05-539. Washington, D.C.: May 27, 2005.
Workforce Investment Act: Employers Are Aware of, Using, and Satisfied with One-Stop Services, but More Data Could Help Labor Better Address Employers’ Needs. GAO-05-259. Washington, D.C.: February 18, 2005.
Public Community Colleges and Technical Schools: Most Schools Use Both Credit and Noncredit Programs for Workforce Development. GAO-05-04. Washington, D.C.: October 18, 2004.
Trade Adjustment Assistance: Reforms Have Accelerated Training Enrollment, but Implementation Challenges Remain. GAO-04-1012. Washington, D.C.: September 22, 2004.
Workforce Investment Act: States and Local Areas Have Developed Strategies to Assess Performance, but Labor Could Do More to Help. GAO-04-657. Washington, D.C.: June 1, 2004.
National Emergency Grants: Labor Is Instituting Changes to Improve Award Process, but Further Actions Are Required to Expedite Grant Awards and Improve Data. GAO-04-496. Washington, D.C.: April 16, 2004.
Workforce Investment Act: Labor Actions Can Help States Improve Quality of Performance Outcome Data and Delivery of Youth Services. GAO-04-308. Washington, D.C.: February 23, 2004.
National Emergency Grants: Services to Dislocated Workers Hampered by Delays in Grant Awards, but Labor Is Initiating Actions to Improve Grant Award Process. GAO-04-222. Washington, D.C.: November 14, 2003.
Workforce Investment Act: Exemplary One-Stops Devised Strategies to Strengthen Services, but Challenges Remain for Reauthorization. GAO-03-884T. Washington, D.C.: June 18, 2003.
Workforce Investment Act: One-Stop Centers Implemented Strategies to Strengthen Services and Partnerships, but More Research and Information Sharing Is Needed. GAO-03-725. Washington, D.C.: June 18, 2003.
Workforce Investment Act: Issues Related to Allocation Formulas for Youth, Adults, and Dislocated Workers. GAO-03-636. Washington, D.C.: April 25, 2003.
Workforce Investment Act: Interim Report on Status of Spending and States’ Available Funds. GAO-02-1074. Washington, D.C.: September 5, 2002.
Workforce Investment Act: Better Guidance and Revised Funding Formula Would Enhance Dislocated Worker Program. GAO-02-274. Washington, D.C.: February 11, 2002.
Workforce Investment Act: Better Guidance Needed to Address Concerns over New Requirements. GAO-02-72. Washington, D.C.: October 4, 2001.
The Congress passed the Workforce Investment Act (WIA) in 1998 seeking to create a system connecting employment, education, and training services to better match job seekers to labor market needs. However, questions have been raised about how WIA funds are being used and, in particular, how much is being spent on training. Contributing to the concern about the use of WIA funds is the lack of accurate information about the extent to which WIA participants are enrolled in training activities. GAO was asked to determine (1) the extent to which WIA funds are used for training, (2) how local workforce boards manage the use of Individual Training Accounts (ITA) and what challenges they have encountered, and (3) what is known at the national level about outcomes of those being trained.

In its comments, the Department of Labor (Labor) noted that some of GAO's estimates on training conflict with its estimates. Labor's estimate of the number of adults trained comes from its database and includes only those who had exited from the program. GAO's estimates represent a more complete and accurate picture than Labor's because they are based on information obtained directly from the local workforce areas, include all funds spent or obligated for training, and count all adults who received training in program year 2003, not just those who exited the program.

Local workforce boards used an estimated 40 percent of the WIA funds they had available in program year 2003 to obtain training services for WIA participants. Nationally, local boards had approximately $2.4 billion in WIA funds that were available to serve adults and dislocated workers during program year 2003 and used about $929 million for training activities. The remaining funds paid for other program costs as well as administrative costs. GAO estimates that 416,000 WIA participants received training during the year. However, because some individuals may have received more than one type of training, this count may include some individuals more than once. Most of the participants received occupational classroom training purchased with ITAs, which are established on behalf of an eligible participant to finance training services.

Most local workforce boards have developed policies to manage the use of ITAs, but many boards have encountered challenges in trying to implement their use. Local boards often require participants to complete specified tasks prior to entering training, such as gathering additional information on their desired occupation. In addition, they generally limit the amount of money participants can spend on training using ITAs and how long the training can last. Among the challenges encountered by local boards was the lack of good performance data on training providers, making it difficult to determine which providers were most effective. Local boards in rural areas faced a different challenge: a lack of nearby training providers.

Little is known on a national level about the outcomes of those being trained. Certain aspects of Labor's national participant database have been found to be incomplete and unverified. Additionally, data generally cannot be compared across states or local areas because of variations in data definitions. Labor is taking some steps to address these concerns, but the findings from this study reaffirm the need for a continued focus on resolving reported data quality issues.
The majority of major acquisition programs in DOD’s space portfolio have experienced problems during the past two decades that have driven up cost and schedules and increased technical risks. Several programs have been restructured by DOD in the face of delays and cost growth. At times, cost growth has come close to or exceeded 100 percent, causing DOD to nearly double its investment in the face of technical and other problems without realizing a better return on investment. Along with the increases, many programs are experiencing significant schedule delays—as much as 7 years—postponing delivery of promised capabilities to the warfighter. Outcomes have been so disappointing in some cases that DOD has had to go back to the drawing board to consider new ways to achieve the same, or less, capability.

As figures 1 and 2 below indicate, five programs that were begun in the late 1990s and early 2000s to replenish aging constellations of satellites have incurred substantial cost growth and schedule delays, including (1) the Advanced Extremely High Frequency (AEHF) communications satellite program, (2) the National Polar-orbiting Operational Environmental Satellite System (NPOESS), which DOD is jointly developing with the National Oceanic and Atmospheric Administration, (3) the Space Based Infrared System (SBIRS), which detects missile launches, (4) the Wideband Global SATCOM (WGS), another communications satellite, and (5) the Global Positioning System (GPS) IIF program. Last year we reported that AEHF and WGS had worked through the bulk of their technical problems. Since that testimony, the first WGS satellite was launched, but the AEHF program experienced technical problems with hardware components that have pushed back its first launch date by 6 months. Also, this year, as described below, we found that NPOESS and SBIRS still face very high risks, even after recent acquisition replanning efforts. Further, GPS IIF has experienced additional technical problems.

SBIRS continues to face cost and schedule setbacks. Software problems have recently delayed the first satellite launch by about a year, which will likely increase the program’s overall delay to roughly 7 years. Correcting the problems may necessitate hardware and software changes that could, according to the Air Force, also drive cost increases up to $1 billion, which would be in addition to the $6 billion cost growth already incurred. The program also continues to spend management reserves at an unsustainable rate. Program officials acknowledge that management reserves set aside to fix unexpected problems will likely be depleted in early 2009, even though the reserves were intended to last through 2012. Given the complexity of the SBIRS satellites, it is possible that further design flaws may be discovered, leading to more cost and schedule increases. If management reserves are depleted and not replenished, the program will likely experience further cost and schedule problems.

In July 2007, the NPOESS program finalized its restructure in response to a Nunn-McCurdy (10 U.S.C. § 2433) breach of the critical cost growth threshold for program acquisition unit cost. The restructure included a life-cycle cost increase of about $4.1 billion, or about 49 percent, with fewer satellites to be acquired, delays in satellite launches, and deletions or replacements of satellite sensors. The restructure also included removing 7 of the original 14 critical technologies from the program.
Furthermore, 3 of the remaining technologies remain immature and the program continues to experience development problems, increasing risks of further problems. At this point, the program has seen a 153 percent unit cost increase. The GPS IIF program has faced technical challenges in completing development and production, causing another schedule delay in the launch of the first IIF satellite—over a 2-year slip from the original launch date.

Not all of DOD’s space programs are facing the problems being experienced by GPS, NPOESS, and SBIRS. For example, the Navy’s Mobile User Objective System (MUOS), another communications satellite program, is meeting cost and schedule goals. Further, as discussed later in this testimony, newer Air Force acquisition efforts such as the Transformational Satellite Communications System (TSAT) and Space Radar have been taking actions to ensure they can meet their cost and schedule goals, though their funding has been reduced in light of overall affordability of space acquisitions. These two efforts were highly complex and ambitious and were predicted to be the most expensive military satellite developments ever. In addition, in December 2005, the Air Force was directed to begin efforts to develop a competing capability in parallel with the SBIRS program; this effort was previously known as the Alternative Infrared Satellite System (AIRSS). We reported in September 2007 that DOD had not positioned the AIRSS effort for success. DOD agreed, and revised the effort’s development strategy to reflect best practices. The effort has a new name, the Third Generation Infrared Surveillance (Third Gen), and is now a follow-on to the SBIRS program. The first sensor prototypes are expected later this month.

Lastly, our annual weapons system assessment this year will be reporting on challenges faced by the Evolved Expendable Launch Vehicle (EELV) program, as the two providers—Boeing and Lockheed Martin—undertake a joint venture that will provide U.S. government launches of medium- to heavy-lift rockets. The consolidation of production, engineering, test, and launch operations under the joint venture, called the United Launch Alliance or ULA, is expected to yield cost savings in the future, but when and how much remains unknown. ULA expects the consolidation to be nearly complete by the end of 2010, but there are preliminary indications that some elements of the consolidation are falling behind schedule. Furthermore, the Air Force revised its acquisition and contracting strategy for EELV in 2005, which among other things increased program office oversight responsibilities. The change in contracting strategy created new data analysis activities for the program and expanded the types of expertise needed by the program office to utilize the new information provided by contractors. Despite its increased responsibilities, the program office is experiencing staff reductions and expects staffing vacancies to continue in the near term. The current military staff lacks some of the technical expertise needed to fully analyze contractor performance data now being collected under the new contracting strategy.

Our work has identified a variety of reasons for this cost growth, most notably that weapons programs are incentivized to produce and use optimistic cost and schedule estimates in order to successfully compete for funding.
We have also found that DOD starts its space programs too early, that is, before it has assurance that the capabilities it is pursuing can be achieved within available resources and time constraints. We have also tied acquisition problems in space to inadequate contracting strategies; contract and program management weaknesses; the loss of technical expertise; capability gaps in the industrial base; tensions between labs that develop technologies for the future and current acquisition programs; divergent needs among users of space systems; diffuse leadership; and other issues that have been well documented in DOD and GAO studies. Many of these underlying issues affect the broader weapons portfolio as well, though we have reported that space programs are particularly affected by the wide disparity of users, who include DOD, the intelligence community, other federal agencies, and in some cases, other countries and U.S. business and citizens. Moreover, problematic implementation of an acquisition strategy for space systems in the 1990s, known as Total System Performance Responsibility, resulted in losses of technical expertise and weaknesses in contracting strategies whose effects space programs are still dealing with.

Over the past decade, we have identified best practices that DOD space programs can benefit from. DOD has taken a number of actions to address the problems that we have reported on. These include initiatives at the department level that will affect its major weapons programs, as well as changes in course within specific Air Force programs. Although these actions are a step in the right direction, additional leadership and support are still needed to ensure that reforms that DOD has begun will take hold.

Our work—which is largely based on best practices in the commercial sector—has recommended numerous actions that can be taken to address the problems we identified. Generally, we have recommended that DOD separate technology discovery from acquisition, follow an incremental path toward meeting user needs, match resources and requirements at program start, and use quantifiable data and demonstrable knowledge to make decisions to move to next phases. We have also identified practices related to cost estimating, program manager tenure, quality assurance, technology transition, and an array of other aspects of acquisition program management that space programs could benefit from. Table 1 highlights these practices; appendix II provides more detail.

DOD is attempting to implement some of these practices for its major weapons programs. For example, we recently reported that DOD released a strategy to enhance the role of program managers in carrying out its major weapon system acquisitions. As part of this strategy, DOD established a policy that requires formal agreements among program managers, their acquisition executives, and the user community intended to set forth common program goals. In addition, DOD plans a variety of actions to enhance development opportunities, provide more incentives, and arrange knowledge-sharing opportunities for its program managers. Within this strategy, the department also acknowledged that any actions taken to improve accountability must be based on a foundation from which program managers can launch and manage programs toward greater performance, and must include an overarching strategy and decision-making processes that prioritize programs based on a match between customer needs and available resources.
DOD highlighted several initiatives that, if adopted and implemented properly, could provide such a foundation. Some of these include establishing an early decision gate to review proposed programs at the concept stage, testing portfolio management approaches in selected capability areas, and using capital budgeting accounts for programs in development.

Additionally, as we reported previously, the Air Force adopted a “back to basics” approach for space designed to reduce technology risk and ensure programs were more executable. Specifically, for its TSAT and Space Radar acquisition efforts, the Air Force committed to delaying product development until critical technologies could be demonstrated to work in a relevant environment. This stood in sharp contrast to previous programs, such as NPOESS and SBIRS, which started with immature technologies. The Air Force also committed to deferring more ambitious technology efforts associated with these efforts to science and technology organizations until they are ready to be added to future increments. TSAT, for example, deferred the wide-field-of-view, multi-access laser communication technology, and contributed about $16.7 million for “off-line” maturation of this technology that could be inserted into future increments. The program laid out incremental advances in other capabilities over two increments. Space Radar has deferred lithium-ion batteries, more efficient solar cells, and onboard processing for its first increment, and like TSAT, contributed toward their development by space and technology organizations. Further, both efforts have used systems engineers to help determine achievability of requirements.

In our experience, the Navy has tended to follow good acquisition practices for its space programs, especially in relation to keeping technology risks out of programs. The Navy’s Mobile User Objective System (MUOS) is an example. Specifically, the MUOS acquisition effort began development with almost all of its critical technologies mature. Additionally, about 95 percent of design drawings had been completed at the critical design review milestone in March 2007. Since MUOS’s development start in September 2004, the program has been meeting its overall cost and schedule goals, with the first satellite expected to become operational in March 2010.

Furthermore, the Air Force, U.S. Strategic Command, and other key organizations have made progress in implementing the Operationally Responsive Space (ORS) initiative. This initiative encompasses several separate endeavors with a goal to provide short-term tactical capabilities as well as identifying and implementing long-term technology and design solutions to reduce the cost and time of developing and delivering simpler satellites in greater numbers. ORS provides DOD with an opportunity to work outside the typical acquisition channels to more quickly and less expensively deliver these capabilities. In performing a review of ORS for this committee, we found that DOD has made progress in putting a program management structure in place for ORS as well as executing ORS-related research and development efforts, which include development of low-cost small satellites, common design techniques, and common interfaces. Other parts of DOD are also moving towards space programs with less risk and that have a greater chance of being more successful.
The Missile Defense Agency’s Space Tracking and Surveillance System (STSS) program office is seeking an operational constellation that would be easier to produce than originally envisioned for the constellation. The new development approach for the constellation would involve no technology breakthroughs or scientific discovery, and the program office wants to scale the system design so that it will require only a 5- to 6-year build cycle.

DOD has also pushed back the decisions to start the TSAT and Space Radar acquisitions so it could reformulate their acquisition schedules and approaches to make them more affordable within DOD’s overall space portfolio. For example, TSAT is currently being assessed by the Office of the Secretary of Defense (OSD) to better ensure that proposed future funding levels for TSAT are affordable in the near term. In the meantime, the program office is continuing to fund risk-reduction efforts between two separate contractors to further reduce overall risk in TSAT. Similarly, the Space Radar program office told us that it is adjusting its acquisition approach to better balance affordability through incremental evolution of the Space Radar capability. In both of these cases, DOD will likely be better positioned with acquisition programs that are more affordable and executable in terms of meeting cost, schedule, and performance goals.

The actions that the Air Force and OSD have been taking to address acquisition problems are good first steps. The back to basics policy and ORS, in particular, represent significant shifts in thinking about how space systems should be developed as well as commitment from senior leadership. But, there are still more, significant changes to processes, policies, and support needed to ensure reforms can take hold.

First, while DOD pilot initiatives related to portfolio management are targeted at addressing funding pressures, there has not been a real commitment to prioritizing investments across DOD. For the past several years, we have emphasized that DOD starts more space and weapon programs than it can afford, creating a competition for funding that encourages unrealistically low cost estimates, optimistic scheduling, overpromising, suppression of bad news, and, for space programs, forsaking the opportunity to identify and assess potentially better alternatives. Programs focus on advocacy at the expense of realism and sound management. Invariably, with too many programs in its portfolio, DOD is forced to continually shift funds to and from programs—particularly as programs experience problems that require additional time and money to address. Such shifts, in turn, have had costly, reverberating effects. This year, significant cuts were made to several major space programs, including TSAT, Space Radar, and STSS, largely in light of the realization that new, expensive programs were not affordable at a time when DOD was attempting to upgrade other capabilities and still contending with problematic programs like SBIRS. In the case of TSAT, resulting delays in capability could have a dramatic effect on other new programs, such as the Army’s Future Combat System, which were counting on TSAT-like capabilities to enhance their performance.

Second, as we have testified before, space programs are facing capacity shortfalls. These include shortages of staff with science and engineering backgrounds as well as staff with program-management and cost-estimating experience.
Several of our reviews of major space programs have cited shortages of personnel as a key challenge that increases risk for the program, specifically in technical areas. In addition, during our review of DOD’s space cost estimating function, Air Force space cost-estimating organizations and program offices said that they believed their cost-estimating resources were inadequate to do a good job of accurately predicting costs. Because of the decline in in-house cost-estimating resources, space program offices and Air Force cost-estimating organizations are now more dependent on support contractors. We recognize that there are actions being taken to strengthen the space acquisition workforce, but we have not yet seen the condition get much better at the individual program office level.

Our past work has also pointed to capacity shortfalls that go beyond workforce. For example, in 2006, we reported that cost-estimation data and databases are incomplete, insufficient, and outdated. And in previous testimonies, we pointed to limited opportunities and funding for space technologies, and the lack of low-cost launch vehicles. The ORS initiative is designed to help alleviate shortfalls in launch and testing resources, but one concern raised in interviews with launch providers was that there was still not enough investment being directed toward low-cost launch.

Furthermore, policies that surround space acquisition need to be further revised to ensure best practices are instilled and sustained. For example, DOD’s space acquisition policy does not require that acquisition efforts such as TSAT and Space Radar achieve a technology readiness level (TRL) 6 (that is, testing in a relevant environment) or higher for key technologies before being formally started—key decision point B (KDP B). Instead, the policy suggests that TRL 6 be achieved later—at preliminary design review (KDP C) or soon after. In fact, the back to basics approach that was adopted by the Air Force has not been incorporated into DOD’s space acquisition policy. Given the many pressures and incentives that drive space and other weapon programs to begin too early and to aim for dramatic rather than incremental leaps in capability, DOD needs acquisition policies that ensure programs have the knowledge they need to make investment decisions and that DOD and Congress have a more accurate picture of how long and how much it will take to get the capability that is being promised. In addition, although the policy requires that independent cost estimates be prepared by bodies outside the acquisition chain of command, it does not require that they be relied upon to develop program budgets. Officials within the space cost-estimating community also believed that the policy was unclear in defining roles and responsibilities for cost estimators. We continue to recommend changes be made to the policy—not only to further ingrain the shift in thinking about how space systems should be developed, but to ensure that the changes current leaders are trying to make can be extended beyond their tenure.

Last, while DOD is planning many new practices that will provide program managers with more incentives, support, and stability, the overall environment within which program managers perform their work is very difficult to change simply with policy initiatives.
Policies similar to the one DOD issued in 2007 to increase accountability of program managers have existed for some time, but according to DOD and Air Force officials, they have not always been practiced. For example, while DOD policy provides for program managers of major defense acquisition programs to serve as close to a 4-year tenure as practicable, many serve for only 2 years. One example is the SBIRS program, which has had six program managers in 12 years. In fact, our work has shown that rather than lengthy assignment periods between key milestones as suggested by best practices, many of the programs we have reviewed had multiple program managers within the same milestone.

In conclusion, senior leaders managing DOD’s space portfolio are clearly working in a challenging environment. There are pressures to deliver new, transformational capabilities, but problematic older satellite programs continue to cost more than expected, constrain investment dollars, pose risks of capability gaps, and thus require more time and attention from senior leaders than well-performing efforts. To best mitigate these circumstances and put future programs on a better path, DOD needs to continue the actions it has begun. However, these measures should be complemented by realistic estimating of what it will take to complete space programs, prioritizing programs for investment, and strengthening DOD acquisition policy for space. At the same time, DOD should ensure its ORS program is well-supported and focused on alleviating capability gaps as well as developing longer-term solutions for space programs. Taken together, such actions, with the support of Congress, should help senior leaders negotiate acquisitions in a challenging environment and ensure their commitments to reform can be sustained into the next administration.

Mr. Chairman, this concludes my statement. I will be happy to answer any questions that you have.

In preparing this testimony, we relied on our body of work in space programs, including previously issued GAO reports on assessments of individual space programs, common problems affecting space system acquisitions, and the Department of Defense’s (DOD) space acquisition policy. We relied on our best practices studies, which comment on the persistent problems affecting space acquisitions, the actions DOD has been taking to address these problems, and what remains to be done. We also relied on work performed in support of our 2008 annual weapons system assessment. The individual reviews were conducted in accordance with generally accepted government auditing standards. We conducted this performance audit from February 26 to March 4, 2008, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Because there are more product ideas than there is funding to pursue them, successful organizations we have studied ensure that decisions to start new product developments fit within an investment strategy. The investment strategy determines project priority and provides a basis for trade-off decisions among competing projects.
Program managers find their company's use of investment strategies helpful because it gives them confidence that their project has commitment from their organization and from their top leaders and managers, and it clearly identifies where their project stands within the company's overall investment portfolio and funding priorities. Organizations we have studied generally follow an evolutionary path toward meeting market needs rather than attempting to satisfy all needs in a single step. In effect, the companies evolve products, continuously improving their performance as new technologies and methods allow. These evolutionary improvements eventually result in full desired capability, but in multiple steps, delivering enhanced capability to the customer more quickly through a series of interim products. The approach permits program managers to focus more on design and manufacturing with a limited array of new content and technologies in a program. The organizations we have studied are able to achieve their overall investment goals by matching requirements to resources—that is, time, money, technology, and people—before undertaking a new development effort. Any gaps that exist are relatively small, and it is the program manager's job to quickly close them as development begins. As part of the effort to build a business case, requirements are researched and defined before programs start to ensure that they are achievable given available resources. Successful organizations ensure cost estimates are complete and accurate. They hold program managers accountable for their estimates. They also develop common templates and tools to support data gathering and analysis and maintain databases of historical cost, schedule, quality, test, and performance data. Cost estimates themselves are continually monitored and regularly updated through a series of gates or milestone decisions that require programs to assess readiness and remaining risk within key sectors of the program as well as overall cost and schedule issues. Once cost estimates are complete, the organization commits to fully funding projects before they begin. As part of the effort to build a business case, critical technologies are matured by the start of a program, that is, proven to work as intended. More ambitious technology development efforts are assigned to research departments until they are ready to be added to future generations (increments) of a product. In rare instances when less mature technologies are being pursued, the organization accepts and plans for the additional risk. Systems engineering is used to close gaps between resources and requirements before launching the development process. As our previous work has shown, requirements analysis, the first phase of any robust systems engineering regimen, is a process that enables the product developer to translate customer wants into specific product features for which requisite technological, software, engineering, and production capabilities can be identified. Once a new product development begins, program managers and senior leaders use quantifiable data and demonstrable knowledge to make go/no-go decisions. These cover critical facets of the program such as cost, schedule, technology readiness, design readiness, production readiness, and relationships with suppliers. Development is not allowed to proceed until certain thresholds are met, for example, a high proportion of engineering drawings completed or production processes under statistical control.
Program managers themselves place high value on these requirements because meeting them ensures they are well positioned to move into subsequent phases and less likely to encounter disruptive problems. The organizations we have studied empower program managers to make decisions on the direction of the program and to resolve problems and implement solutions. The program managers can make trade-offs among schedule, cost, and performance features, as long as they stay within the confines of the original business case. When the business case changes, senior leaders are brought in for consultation—at this point, they could become responsible for trade-off decisions. Program managers are held accountable for their choices. Sometimes this accountability is shared with the program team or senior leaders, or both. Sometimes, it resides solely with the program manager on the belief that the company provides the necessary levels of support. In all cases, the process itself clearly spells out what the program manager is accountable for—the specific cost, performance, schedule, and other goals that need to be achieved. In a recent study, we also noted that successful organizations hold their suppliers accountable for delivering high-quality parts through such activities as regular supplier audits and performance evaluations of quality and delivery, among other things. To further ensure accountability, program managers are also required to stay with a project to its end. Sometimes senior leaders are also required to stay. At the same time, program managers are given incentives to succeed. If they meet or exceed their goals, they receive substantial bonuses or salary increases, or both. Awards can also be obtained if the company as a whole meets larger objectives. In all cases, companies refrain from removing a program manager in the midst of a program. Instead, they choose first to assess whether more support is needed in terms of resources for the program or support and training for the program manager. Successful organizations also make use of common tools and templates to support data gathering and analysis, and they implement and adhere to formal lessons-learned processes. Senior leaders stay committed to projects, mentor program managers, instill trust with their program managers, encourage program managers to share bad news, and encourage collaboration and communication. For further information, please contact Cristina Chaplain at 202-512-4841 or chaplainc@gao.gov. Individuals making contributions to this testimony include Art Gallegos, Greg Campbell, Claire Cyrnak, Anne Hobson, Rich Horiuchi, Sigrid McGinty, Angela Pleasants, Josie Sigl, and Alyssa Weir. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Each year, the Department of Defense (DOD) spends billions of dollars to acquire space-based capabilities to support current military and other government operations as well as to enable DOD to transform the way it collects and disseminates information, gathers data on adversaries, and attacks targets. In fiscal year 2009 alone, DOD expects to spend over $10 billion to develop and procure satellites and other space systems. At the same time, however, DOD's space system acquisitions have experienced problems over the past several decades that have driven up costs by hundreds of millions, even billions, of dollars; stretched schedules by years; and increased performance risks. In some cases, capabilities have not been delivered to the warfighter after decades of development. This testimony relies on the extensive body of work GAO has produced reviewing DOD space acquisitions. It comments on the persistent problems affecting space acquisitions, the actions DOD has been taking to address these problems, and what remains to be done. The majority of major acquisition programs in DOD's space portfolio have experienced problems during the past two decades that have driven up costs, stretched schedules, and increased technical risks. At times, cost growth has come close to or exceeded 100 percent, causing DOD to nearly double its investment in the face of technical and other problems without realizing a better return. Along with the increases, many programs are experiencing significant schedule delays--as much as 7 years--postponing delivery of promised capabilities to the warfighter. Outcomes have been so disappointing in some cases that DOD has had to go back to the drawing board to consider new ways to achieve the same, or less, capability. Our past work has identified a number of causes behind the cost growth and related problems. These include: optimistic cost and schedule estimating; the tendency to start programs with too many unknowns about technology; inadequate contracting strategies; contract and program management weaknesses; the loss of technical expertise; capability gaps in the industrial base; tensions between labs that develop technologies for the future and acquisition programs; divergent needs among users of space systems; and diffuse leadership. DOD has taken a number of actions to address the problems that GAO has reported on. These include initiatives at the department level that will affect all major weapons programs, as well as changes in course within specific Air Force programs. Most notably, the Air Force has sustained its commitment to reduce technology risks in programs and acted to restructure new programs so that its space portfolio can be more affordable. These actions are a step in the right direction and will be more effective if they are complemented by more accurate cost estimating; continued prioritization of investments; actions to address capacity shortfalls, such as low-cost launch and shortages of staff in program offices; and changes to acquisition policies to reflect the best practices the Air Force is committing to.
With the completion of the Uruguay Round of multilateral trade negotiations in 1994, member countries of the General Agreement on Tariffs and Trade (GATT) agreed to a variety of disciplines for international trade in agricultural products. Nonetheless, according to GATT/World Trade Organization (WTO) and member country officials, state trading enterprises (STE) were not a major issue during the Uruguay Round. Since the start of GATT, member countries have noted STEs' unique role and potential to distort world trade and have thus required them to operate in accordance with commercial considerations. STEs have been important players in the world agriculture market, particularly in wheat and dairy products. Since 1980, 16 GATT member countries have reported to the GATT secretariat that they operate STEs in their wheat sector, while 14 countries have reported STEs in their dairy sector. With the volume of trade in agricultural goods expected to expand, understanding the role and operations of STEs is likely to be an important component in understanding the nature of international trade. The Final Act resulting from the GATT Uruguay Round negotiations was signed by more than 117 countries on April 15, 1994. The intent of the Uruguay Round was to further open markets among GATT countries. Under the Uruguay Round agreement, member countries committed to reduce tariffs worldwide by one-third; strengthened GATT through the creation of WTO and a revised multilateral dispute settlement mechanism; improved disciplines over unfair trade practices; broadened GATT coverage by including areas of trade in services, intellectual property rights, and trade-related investment that previously were not covered; and provided increased coverage to the areas of agriculture, textiles and clothing, government procurement, and trade and the environment. Since GATT was first drafted in 1947, STEs have been recognized as legitimate participants in world markets. However, the original drafters of GATT also understood how governments with a dual role as market regulator and market participant can engage in activities that protect domestic producers and place foreign producers at a disadvantage. A separate GATT article was established to monitor STEs and ensure they operate within GATT disciplines. Article XVII establishes a number of guidelines and requirements with respect to the activities of STEs and the obligations of member countries. In addition to holding STEs to the same disciplines as other trading entities, such as making purchases or sales in accordance with commercial considerations and allowing enterprises from other countries the opportunity to compete, the article requires periodic reporting by member countries to the GATT/WTO secretariat. In an August 1995 report, we commented on the disciplines placed on STEs by both article XVII and other GATT provisions. Among other things, our report noted that GATT member countries' compliance with the article XVII reporting requirement between 1980 and 1994 had been poor. In addition, although state trading was not a major issue during the Uruguay Round negotiations, members established a definition of STEs and new measures to improve reporting compliance. Our report also highlighted the Uruguay Round's Agreement on Agriculture, which requires all countries trading in agricultural goods, including those with STEs, to observe new trade-liberalizing disciplines (the agreement is defined in the next section).
Finally, our report emphasized that the effectiveness of article XVII is especially important given the potential for increases in STEs if countries such as the People's Republic of China (China), Russia, and Ukraine join GATT/WTO. Attempts to understand the role of STEs are complicated by the various measures that STEs use to control a country's production, imports, or exports. As we reported in August 1995, STEs' practices to control commodities have included placing levies on production and/or imports, requiring licenses for exports, giving government guarantees, and providing export subsidies. Some STEs have justified their controls by emphasizing needs such as protection against low-priced imports and the safeguarding of national security. The Uruguay Round agreement defines STEs as "governmental and nongovernmental enterprises, including marketing boards, which have been granted exclusive or special rights or privileges, including statutory or constitutional powers, in the exercise of which they influence through their purchases or sales the level or direction of imports or exports." As we stated in our 1995 report, it is still too early to determine the impact of the STE definition and additional measures to improve the reporting compliance of member countries. These new measures include the creation of a working party to review STE notifications. Although some GATT/WTO member countries have stated that article XVII should require that STEs report more information, such as detailed data about transaction prices, other member countries consider this information to be confidential and related to an STE's commercial interests. The absence of this information is expected to hinder those member countries concerned about the role of STEs from obtaining the type of information they say is needed to fully determine whether STEs are adhering to GATT disciplines. According to U.S. Department of Agriculture (USDA) officials, the working group on STEs has met twice since August 1995. Members of this working group are reviewing each other's notifications for completeness. Additionally, the United States has proposed improvements to the existing questionnaire on state trading and is seeking disciplines on STE activities through a working group on credit guarantee disciplines. The Agreement on Agriculture, resulting from the Uruguay Round, requires member countries to make specific reductions in three areas—market access restrictions, export subsidies, and internal support—over a 6-year period beginning in 1995. Under the market access commitment, countries are required to convert all nontariff barriers, such as quotas, to tariff equivalents and reduce the resulting tariff equivalents (as well as old tariffs) during the implementation period. Under the export subsidy commitment, countries are required to reduce their budgetary expenditures on export subsidies and their quantity of subsidized exports. Member countries are also expected to reduce their aggregate measure of selected internal support policies. These internal support policies include budgetary expenditures and revenue forgone by governments or their agents. These reductions are expected to have the effect of liberalizing trade in agricultural products, thereby increasing the flow of these products between GATT/WTO member countries. STEs are subject to these reductions. The United States is expected to experience economic benefits as a result of the new trade discipline in agriculture.
As we reported in 1994, USDA estimated that as a result of the Uruguay Round, U.S. annual agricultural exports are likely to increase between $1.6 billion and $4.7 billion by 2000, and between $4.7 billion and $8.7 billion by 2005. Higher world income, as well as reduced tariffs and export subsidies among U.S. trade partners, is also expected to raise U.S. exports of coarse grains, cotton, dairy, meat, oilseeds and oilseed products, rice, specialty crops such as fruits and nuts, and wheat. U.S. subsidies on some agricultural products will also be reduced, most likely shrinking government support for dairy, coarse grains, meat, oilseed products, and wheat. Nonetheless, even with projected gains for U.S. agriculture, some U.S. producers are concerned that countries with STEs have not taken the same steps to reduce trade-distorting activities. For example, the United States developed its agricultural export subsidies to counteract those of other countries, such as members of the European Union (EU). These export subsidies were subsequently used to counteract STE practices as well. U.S. producers are now concerned that under the Uruguay Round the United States has committed to reduce those subsidies without a corresponding reduction in other countries' state trading activities. The majority of STEs reported to the GATT secretariat between 1980 and 1995 involved trade in agricultural products. Although the reporting represented only a portion of GATT member countries, the largest number of STEs were found to be trading in either grains and cereals or dairy products. As shown in table 1.1, 16 member countries have reported state trading in their grain and cereals sector, while 14 have reported state trading in their dairy sector. Countries support their agricultural producers through both direct and indirect assistance. One way of measuring the flow of direct and indirect government assistance to producers is by using the "producer subsidy equivalent" (PSE). The Organization for Economic Cooperation and Development (OECD) uses PSEs to compare levels of assistance among countries. PSE is an internationally recognized measure of government assistance. It represents the value of the monetary transfers to agricultural production from consumers of agricultural products and from taxpayers resulting from a given set of agricultural policies in a given year. A relatively high PSE means that the government provides a larger amount of production assistance than do governments in countries with a lower PSE. (A simplified illustration of the PSE calculation appears below.) Table 1.2 presents the PSEs for wheat in Australia, Canada, the EU, and the United States from 1979 to 1994. Table 1.3 presents the PSEs for milk in Australia, the EU, New Zealand, and the United States during the same period. As indicated in both tables, in recent years both the EU and the United States have subsidized their wheat and milk production to a greater extent than Australia, Canada, or New Zealand. Members of Congress' concerns about STEs, further informed by reports by USDA's Foreign Agricultural Service (FAS) and the International Trade Commission highlighting the operations and trading practices of STEs in the world dairy and wheat markets, have led to the issuance of three GAO reports on the subject of STEs.
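The sketch below illustrates the PSE concept referred to above. It is an illustration only, not OECD's exact methodology, and all figures in it are hypothetical; it simply expresses transfers to producers from consumers and taxpayers as a share of the value of production.

```python
# Simplified sketch of a percentage PSE. This is an illustration only --
# OECD's actual methodology is more detailed -- and all figures below
# are hypothetical.

def percentage_pse(market_price_support, budgetary_transfers, production_value):
    """Transfers from consumers (market price support) and taxpayers
    (budgetary transfers), as a percentage of the value of production."""
    return 100.0 * (market_price_support + budgetary_transfers) / production_value

# Hypothetical country: $2.0 billion of wheat production at farm prices,
# supported by $0.3 billion in market price support and $0.2 billion in
# direct government payments.
print(f"{percentage_pse(0.3, 0.2, 2.0):.0f} percent")  # prints "25 percent"
```

On this simplified reading, a higher percentage means that a larger share of producer receipts comes from government-driven transfers rather than from the market.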
We have already published reports on state trading, including (1) a July 1995 report that provides a summary of trade remedy laws available to investigate and respond to activities of entities trading with the United States, including STEs; (2) a report on the GATT/WTO disciplines that apply to STEs and the effectiveness of those disciplines to date; and (3) a correspondence report describing the impact of the Uruguay Round on U.S. cheese quotas and the importer licensing process, as well as the operations of the New Zealand Dairy Board (NZDB) in the United States. Eighteen Members of the House of Representatives and the Senate have also asked us to provide more information on how STEs operate in an open, competitive marketplace. Members noted the role of state trading in the wheat and dairy sectors, saying that any trade problems in these sectors could be representative of potential problems that may affect U.S. producers, processors, exporters, and importers. We were asked to describe (1) the potential capability of export-oriented agricultural STEs to distort trade and (2) the specific potential of the Canadian Wheat Board (CWB), the Australian Wheat Board (AWB), and NZDB to engage in trade-distorting activities, based on their status as STEs. We agreed to review the three export STEs based upon their considerable role in international trade and not due to any assumption of trade distortion. To create a framework for understanding export STEs, we reviewed various trade practices and trade agreements; reviewed related literature; interviewed U.S., GATT/WTO member country, and GATT/WTO secretariat officials; and used information from STE and national government officials in Canada, Australia, and New Zealand. Our purpose in establishing a framework was to (1) facilitate data collection in the three countries, (2) allow various STE characteristics to be reported in a consistent and organized manner, and (3) determine the relevant relationships maintained by STEs and thereby come to some conclusion about whether an STE is able to distort international trade. We used the framework as a tool to partially overcome the transparency (openness) problem found in international trade in both the dairy and wheat sectors. Transaction-level data are protected as commercially confidential by both STE and private sector traders, and foreign countries' STEs and private firms are under no obligation to provide them to us, since we have no audit authority over them. Even with such data, a definitive analysis would also require additional information on production costs. As a result, definitive conclusions regarding STEs' trade-distorting activities cannot be reached; the framework instead offered another way of evaluating STEs' potential influence on the market. The framework underwent a peer review by economists at WTO, USDA's FAS and Economic Research Service (ERS), the Congressional Research Service, and a private sector agricultural organization. We made changes to the framework where appropriate. To obtain information about STE operations in Canada, we interviewed officials from CWB, Agriculture Canada, the Canadian Grain Commission, the Canadian International Grains Institute, the Department of Foreign Affairs and International Trade, private sector grain traders, and provincial grain associations.
In Australia, we interviewed officials with AWB, the Department of Primary Industries and Energy (DPIE), the Department of Foreign Affairs and Trade, the Grains Council of Australia, and the Industry Commission. To obtain information about the operations of NZDB, we interviewed officials from NZDB, the Ministry of Agriculture and Fisheries (MAF), the Ministry of Foreign Affairs and Trade, and the Federated Farmers of New Zealand. We also spoke with or reviewed materials from assorted industry and academic groups. In the United States, we interviewed officials at the Office of the U.S. Trade Representative (USTR) and USDA's FAS. We met with Canadian, Australian, and New Zealand embassy representatives located in Washington, D.C. We also conducted interviews with officials representing both U.S. dairy and wheat interests. In the case of the wheat industry, we spoke to officials from U.S. wheat, miller, and grain trading associations. We also reviewed background documents and reports on wheat and dairy trade provided by the officials mentioned previously, as well as reports from other government, industry, and academic organizations. Information on foreign law in this report does not reflect our independent legal analysis but is based on interviews and secondary sources. We did our review from April 1995 to October 1995 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the USTR and the Secretary of Agriculture or their designees. On March 26, 1996, we received oral comments from USTR. The agency was generally pleased with the report and declined to offer written comments. On May 2, 1996, the FAS Administrator provided us with written comments on the draft. In general, FAS agreed with the conclusions in our report. FAS was concerned that we had not fully explored certain market power issues as they relate to STEs, such as a guaranteed product supply and pricing flexibility. Although we generally addressed these issues in our draft report, we have expanded the discussion to better reflect the importance of these market power issues. Additionally, specific comments regarding clarifying language or updated information have been incorporated as appropriate. We also discussed the factual content of the report as it relates to the STE in each country with embassy representatives from Canada, Australia, and New Zealand. Their comments have been incorporated in the report where appropriate. Various types of STEs operate in the world market, with differences in such aspects as export or import orientation, industry, size of operations, and level of government involvement. This diversity makes it hard to generalize about the effects of STE operations on particular markets or on the world trading system. This is true even among CWB, AWB, and NZDB, which are the subject of this report. As a result, it is necessary to consider STEs on a case-by-case basis to understand their potential effects. We developed our framework to incorporate information from a variety of sources that we believe should be considered in an analysis of the potential effects of individual export STEs. In subsequent chapters, we use this framework in reviewing three specific export STEs.
Using this framework, we divided the relationships of export STEs into three groups: the relationship of the STE to domestic producers, the relationship of the STE to the government, and the relationship of the STE to foreign buyers. (See fig. 2.1.) Using such a framework provided advantages in the collection, reporting, and interpretation of information on STE operations. First, it aided in collecting a consistent and comprehensive set of information about the operations of these entities. Second, it facilitated reporting on STE characteristics, since the material could be described in an organized manner. Finally, the framework helped in interpreting the information about STEs, since it distinguished between important characteristics and those that are less important. In each of the relationships, we considered the advantages and disadvantages the STE might have in relation to its private sector counterparts. In particular, we highlighted those characteristics that might provide a unique advantage in international markets, especially those that have the potential to distort trade. We also included material on a number of practices that are common to both private firms and STEs, even if they are not trade distorting. Some of these practices have been the cause of concern among industry observers. We developed this framework based on our own expertise in reviewing various trade practices and trade agreements, our review of related literature, and our discussions with STEs and officials of national governments and international organizations that deal with STE issues. We circulated a draft of this framework among agency officials and solicited their comments. The discussion of the framework in this chapter draws upon examples from CWB, AWB, and NZDB. Chapters 3, 4, and 5 provide more detailed descriptions of those boards using the framework set forth in this chapter. One of the relationships that is central to the discussion of export STEs is the relationship between the STE and the domestic producers. Two aspects of this relationship are important: (1) the ownership and management of the STE by the domestic producers and (2) the requirement that domestic producers sell to the STE. The ownership and management structure can vary significantly across STEs; these characteristics may provide insights into the goals of the enterprises. For example, STEs can be owned and managed entirely by producers, with all of the returns from sales given back to the producers in the form of profits. In these cases, we might expect the organization to try to maximize its own returns by selling at the highest prices possible. The stated objectives of the three STEs we assessed suggest that they are all producer oriented (see app. I for the three STEs' objectives and other information). In each case, these enterprises seem to be operated on behalf of farmers. For example, the CWB annual report for 1993-94 states: "The CWB focuses on maximizing performance for prairie farmers," while AWB literature says that its mission is to "maximize long-term returns to Australian grain growers." Alternatively, if the STE is owned or managed by some group other than the producers, it is possible that it might have a different goal, such as maximizing domestic political benefits. In these cases, the STE might choose to sell the commodity at prices that are advantageous to certain domestic groups.
In this situation, the STE might be able to use its monopoly authority to lower the returns to producers. This would allow the STE to sell at a lower price in either the domestic or the international market. However, if the STE is successful in lowering returns to producers, this will make those sales less attractive, eventually drive marginal producers from the market, and decrease supply. The management of the STE could also make other changes in the terms of sale, such as pooling the returns of producers. For example, the STE may choose to pay the producers the same return regardless of the time of delivery during the marketing year. CWB, for instance, describes price pooling efforts as "something which smooths out the seasonal fluctuations in prices and reflects the values that are achieved over the course of a marketing year." This might make it easier for some producers to secure commercial financing by reducing the volatility of the returns to producers. However, these practices may also have disadvantages, since pooling would remove some of the incentive for individual producers to try to be responsive to world markets. The second key element of the relationship between an export STE and domestic producers is the requirement that producers sell to the STE. As part of their status as government-related entities, STEs often have some control over the sales of particular commodities. However, the extent of this control can vary. In some cases, the STE may have exclusive rights to acquire a commodity destined for export from producers in designated regions, as in the case of CWB, or from the entire nation, as in the case of NZDB. Under this authority, NZDB typically handles the export transactions and sometimes licenses private firms to do the exporting. This type of authority might provide certain advantages in terms of size over individual producers or groups of producers who attempt to export on their own behalf. Exclusive purchasing authority can provide the STE with a more secure source of supply than would be the case for a private exporter. Depending upon the size of the domestic market and the extent of the purchasing authority, an STE can count on a certain level of supply for its export sales. This may increase its willingness to enter into long-term supply relationships. However, the success of the STE over a period of years depends more upon its ability to charge high prices and generate high returns for producers. These high returns keep the marginal suppliers in business and induce others to increase their production for the STE. These pressures are similar to those facing private exporters. A somewhat different situation exists when the STE has exclusive authority to purchase all production of a particular commodity, whether destined for domestic or export markets. Although none of the three enterprises we reviewed has control over all exports and all domestic sales, CWB does have control over all wheat and barley sales for human consumption from the western provinces. This additional authority over domestic sales could provide the STE with the ability to charge different prices in the domestic and export markets. For example, if the STE's goal were to increase consumption in the domestic market, it could charge higher prices abroad in order to subsidize the domestic price. On the other hand, if the STE's goal were to maximize exports, it might charge higher prices to domestic consumers and use the profits to lower the export price.
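The arithmetic behind this kind of price differentiation can be sketched simply. The example below is illustrative only; the quantities and prices are hypothetical and are not drawn from any of the three boards we reviewed.

```python
# Illustrative sketch of cross-subsidy arithmetic: profit from a domestic
# markup is spread across exported units. All figures are hypothetical.

def export_price_cut(domestic_units, export_units, domestic_premium):
    """Per-unit export price reduction that a domestic markup could fund."""
    return (domestic_units * domestic_premium) / export_units

# An STE selling 15 units at home and exporting 85 (a mostly-exporting
# board): a $10-per-tonne domestic premium funds only a small discount.
print(export_price_cut(15, 85, 10.0))   # about 1.76 per tonne

# The reverse case, 85 units at home and 15 exported: the same premium
# funds a much deeper export discount.
print(export_price_cut(85, 15, 10.0))   # about 56.67 per tonne
```

This is why, as discussed below, an STE that exports most of what it controls has limited room to cross-subsidize, while one with a large protected domestic market has considerably more.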
When the export prices are below the cost of production, these actions are referred to as "cross-subsidization" and are potentially trade distorting. Two factors are important in considering the ability of an STE to engage in this type of cross-subsidy. One is the openness of the STE's domestic market to imports. If the domestic market is open to shipments from abroad, the ability of the STE to raise the domestic price would be limited by the availability of imports from the world market. On the other hand, if the market is closed to imports, this would create at least the potential for the STE to raise prices above the level of the world price. For example, AWB must compete with both domestic wheat sellers and foreign wheat sellers for a share of the Australian domestic wheat market. As a result, an STE with both domestic market and export authority that operates from a closed market has more potential for trade distortion than an STE lacking one or more of those characteristics. None of the STEs that we reviewed has all of these capabilities. The second factor is the relative size of the domestic and the export markets. For example, the fact that CWB exports more than 85 percent of the wheat under its control limits its ability to cross-subsidize: domestic prices would have to be raised significantly in order to collect enough funds to lower the export price in any meaningful way. Certain types of relationships between the STE and the government could provide financial benefits to the STE that would not be available to private firms. For example, direct subsidies could provide advantages for the STE over its competitors in the international market. Other government actions may also provide benefits for the STE, but these may or may not be related to the fact that the exporter in a particular industry happens to be organized as an STE. The most obvious type of advantage a government can provide is direct subsidies paid out of general revenues to STEs. These funds could be used to reduce the prices of exports to gain an advantage in the international market. If these subsidies are used in an isolated case, they could have the effect of protecting producers from unusually low prices. For example, the Canadian government provided financial assistance to CWB in the 1990-91 marketing year, when market prices were low, thus diminishing the impact of the low prices on producers. If these subsidies were provided on a regular basis, the higher returns to subsidized producers would likely lead to an increase in the supply of the commodity and reduce the sales and profits of other producers in the world market. These kinds of changes are generally considered trade distorting in international markets. There are other ways in which a government could provide financial assistance to an STE. For example, a special tax advantage could reduce the amount of taxes paid by the STE or its domestic suppliers. Alternatively, if the STE is allocated tariff revenues on imports of the commodities it handles, these revenues could be used to lower the price of its exports. In each of these cases, the potential exists for government assistance to be used to lower the STE's prices and increase sales without lowering the prices received by producers. These actions have the potential to distort trade. In addition, there are a number of ways in which a government might provide indirect advantages to an STE.
One of these indirect benefits is the interest rate advantage that might accrue to firms associated with the government. Because the perceived risk of lending to governments is usually lower than the risk of lending to private entities, the costs of borrowing money are typically lower for governments than for private organizations. Because of their association with the government, STEs might thus have a lower cost of borrowing than a private organization with the same characteristics. The extent of this advantage would be difficult to estimate but would depend upon the amount of borrowing and the difference between the borrowing rate of the STE and the rate of a private entity with similar characteristics but without the government association. In cases where the government has actually stepped in to provide funds when the STE was in danger of default, the difference would tend to be the highest. On the other hand, in situations where the government has not provided funds since the inception of the STE, the difference would tend to be the lowest. There are other situations in which the STE may not pay the full cost of services provided. One example is a transportation subsidy, where the stated cost of transporting commodities has been held significantly below the true cost. These subsidies may or may not be related to the STE's status; the STE may simply operate in an industry where this type of subsidy exists. While the STE might benefit from the subsidy, the potential for trade distortion comes from the indirect subsidy itself, whether it is a transportation subsidy or some other type of subsidy. It is also useful to examine the relationship between the STE and foreign buyers to determine whether there is any unique advantage to operating as an STE in foreign markets. For example, as single sellers from export markets, STEs may have certain advantages in terms of spreading costs and achieving unity among producers. Some of these STE characteristics appear to be especially important in selling to foreign markets restricted by import quotas. In other situations, however, STEs appear to rely chiefly on practices that are also available to commercial exporters. An STE might provide certain advantages in terms of size and experience over individual producers acting on their own behalf. The costs of operating an office with specialized expertise in exports are likely to be considerable, and the larger scale of operations of an STE might enable these costs to be spread over a much greater volume of sales. NZDB officials noted that individual farmers or cooperatives would have a difficult time marketing dairy products on their own; thus, NZDB provides a mechanism through which the New Zealand dairy farmer can compete in a global marketplace. Multinational firms may not have a captive source of supply, but they can achieve economies of scale through efficient operations and establishing relationships with producers in various countries and in various commodities. The establishment of an STE can also lead to a reduction in the number of exporters and an increase in the market power of the remaining participants. This might allow the STE to be more effective in certain situations in acting as a cartel to maintain higher prices than could a collection of private firms.
The distinction between the STE and the producers who sell to an exporter or participate in a cooperative is that the STE can prevent its producers from selling at a discount. Private firms and cooperatives would generally rely on voluntary cooperation, and their members would therefore retain the ability to offer discounts from the prices set by the cooperative. To the extent that STEs can extend their control over supply through collusion with other exporters, their ability to influence the market would increase. However, the exercise of market power over more than one year depends on the response of other suppliers to those higher prices. If higher prices result in greater production by other nations, the STE may face additional competition in the market in following years. There are other ways in which STEs might have an advantage in exporting to controlled import markets. One is that the importing nation may be more responsive to export promotion efforts when the exporter is government affiliated, as an STE is. STEs with control over the exports of a commodity may also have an advantage in selling into a market that is protected by a quota. In this case, the STE is better able to capture the full difference between the lower world price and the higher price in the protected market through the establishment of a subsidiary in the importing nation. For example, NZDB has set up a wholly owned subsidiary for importing quota cheese products into the United States. As a sole exporter selling to a subsidiary in the protected market, NZDB has been able to capture more of the return than would have been possible in selling to an independent agent. There may also be differences in the way that STEs deal with foreign buyers. However, our ability to analyze sales practices is somewhat limited by the lack of transaction-level price information for either STEs or private firms. Recognizing this limitation, it is nevertheless useful to identify certain practices of firms and STEs in international markets and ask whether the status of the STE offers any particular advantages. Price discrimination is the practice of distinguishing between buyers of a particular good or service in order to charge a higher price to some buyers and a lower price to others. With the right combination of market characteristics, some sellers may be able to increase their profits because the lower-priced sales do not affect their sales to premium customers. STEs may be able to lower the price to certain importing countries without affecting the prices to their other customers. However, the key to price discrimination is the ability to charge a higher price to premium customers. If there are other sellers willing to sell at a known world price, as there are in many commodity markets, it is not obvious why any buyer would ever be willing to pay a higher price to the STE. As a result, the success of price discrimination would depend upon whether other producers are willing to sell at the world price, rather than upon the fact that one seller happens to be an STE. One particular type of price discrimination is "predatory pricing," where a seller or group of sellers lowers prices in one market for the purpose of driving other sellers from that market, sometimes using higher prices in a second market to finance the lower prices. If successful, the remaining seller(s) can raise prices once the competition has been eliminated.
However, we did not examine data to determine whether STEs practice predatory pricing or whether STE status might provide any unique advantage in this area. Successful predatory pricing would depend upon the existence of barriers to entry in the agricultural commodity markets, which would prevent new competitors from taking the place of those eliminated from the market. Predatory pricing also implies a certain size in relation to the available market. In these cases, the important issue is whether the STE or the multinational firms have that type of market power. STEs might use other practices, such as engaging in long-term supply arrangements or emphasizing quality, to differentiate their products and services from those of other sellers. For example, STEs might be able to set uniform grading standards for producers; in fact, AWB sets standards for its wheat for export and further classifies the wheat based on quality and variety. Similarly, CWB has emphasized the high quality of its grain as a marketing strategy, but in some cases may have provided a higher quality than the customer required, potentially reducing the returns to Canadian farmers. The success of these efforts in raising returns to producers would depend upon whether the STE is more responsive to the demands of world markets than a private firm. In these cases, it is useful to ask whether the practices are somehow unique to STEs or could be equally—or perhaps more effectively—practiced by any seller in the market. The actions of CWB in using private firms to export commodities rather than exporting the commodities itself may be evidence that the private sector is more effective at some of these commercial practices. By volume, CWB is the world's largest grain-marketing board. As an STE, CWB has certain characteristics that provide it with the potential to distort international trade in wheat and barley. The CWB's control over both domestic human consumption and exports of wheat creates the potential for cross-subsidization, though the risk of such practices is reduced by Canada's dependence on the export market. However, CWB could potentially cross-subsidize between the domestic and foreign markets in its barley trade. Canadian government payments to CWB to cover the CWB's periodic wheat and barley pool deficits have at times represented a significant subsidy. Finally, the margin between initial payments and final payments to Canadian producers allows for greater flexibility in pricing than is the case with private sector grain traders. Nonetheless, some changes in subsidies and CWB control, as well as ongoing reviews of the CWB's monopoly status, may have the effect of reducing the CWB's potential to distort trade. In addition, a joint commission established by the United States and Canada has made suggestions for restructuring both U.S. and Canadian trade practices. CWB operates as a government-backed, centralized marketer of wheat and barley. It remains the world's largest grain-marketing board and Canada's single largest net exporter. According to USDA figures, Canada's 19-percent share of world exports of wheat and wheat products in 1994 was expected to increase to 22 percent in 1995. Figure 3.1 shows the six largest wheat-exporting nations over the last 3 years, including a forecast for 1995. In the previous 6 crop years, Canadian exports have averaged about 75 percent of total wheat production, making wheat growers dependent on export sales. Canadian barley growers are less dependent on foreign markets.
Exports of barley over the past 6 years have averaged about 32 percent of Canada's total barley production (see table 3.1). The first attempt to organize the Canadian prairie farmers began with the Manitoba Grain Act of 1900. This act provided farmers with the right to ship their own grain and to load from their own wagons or warehouses, rather than having to sell to the grain elevators. The first cooperatives soon followed in 1906, and the first Wheat Board was established in 1919. Although the Wheat Board lasted for only one year's crop, it incorporated the concepts of initial and final payments, pricing to maximize producer (pool) returns, and centralized marketing. Prairie provincial wheat pools were successfully formed in 1924 but went into temporary receivership after the stock market crash of 1929. Following the financial hardship faced by farmers during the Depression, the government of Canada passed the Canadian Wheat Board Act of 1935, establishing CWB. CWB was also given control of marketing oats and barley, although oats have since been removed from the CWB's control. CWB is administered by three to five commissioners, who are appointed by the government of Canada. A producers' advisory committee, composed of 11 farmer-elected representatives from the prairie provinces, provides CWB with producer advice on matters related to its operation. As of July 1994, CWB employed 464 permanent employees and 58 temporary employees. The Canadian government has limited oversight of CWB operations. Officials from Canada's Department of Foreign Affairs and International Trade told us that the Canadian government takes a "hands-off" approach to CWB. The CWB's day-to-day operations are free from government monitoring and supervision. The CWB's only formal reporting requirement to the government of Canada is an annual report to Parliament under the authority of the Minister of Agriculture. Western Canadian farmers are required to pool their wheat and barley production destined for domestic human consumption and export under CWB, which then markets these commodities in both domestic and foreign markets. The CWB's control of domestic sales for human consumption and its monopoly over export sales of wheat and barley provide it with the potential ability to charge a higher domestic price for these commodities and use the proceeds to lower export prices, particularly in the case of barley exports. Though pooling diminishes the uncertainty producers face in marketing their product, it may also lower the returns to some Canadian producers. The limited transparency of CWB operations reduces the ability of Canadian farmers to determine the success of the CWB's services. Some Canadian farmers have questioned the CWB's role and are requesting the chance to market their wheat and barley outside the CWB system. The CWB's 1993-94 annual report states that "the CWB's monopoly is its single greatest asset" and concludes that "the economic benefits that accrue to Prairie farmers from this marketing strength would be greatly diminished were the CWB to operate in tandem with a private system." CWB has the sole authority to market for export and for domestic human consumption wheat and barley grown in the western prairie provinces of Manitoba, Saskatchewan, Alberta, and British Columbia. The small quantities of wheat and barley grown outside of this area are not handled by CWB. In addition, feed wheat and feed barley grown throughout Canada can be sold by the producer domestically.
CWB controls all exports of wheat and barley products through an export licensing process. Even producers who do not operate under CWB, such as producers with the Ontario Wheat Producers’ Marketing Board, are still required to obtain an export license from CWB. Canadian producers can buy back their own grain in order to export it themselves, but they have to purchase it back at the price that CWB sets. CWB also allows accredited exporters, both Canadian and foreign grain companies, to buy grain from CWB and sell it on their own. Until recently, CWB also controlled imports of wheat into Canada. On August 1, 1995, Canada replaced the CWB’s wheat import-licensing procedure with a tariff-rate quota. The change from licenses to a tariff-rate quota was part of the alterations agreed to under the Uruguay Round. Canada’s Department of Foreign Affairs and International Trade administers the new system. Canada’s barley import-licensing procedure, already administered by the federal government, was also replaced with a tariff-rate quota. Canada has established industry advisory committees for each commodity subject to a tariff-rate quota. The advisory committees are open to national industry representatives, producers, and consumers. CWB and others participated in the wheat advisory committee meeting held before implementation of the tariff-rate quotas on August 1, 1995. As the sole marketing agent for western prairie wheat and barley destined for domestic human consumption or export trade, CWB has the ability to offer differentiated prices. Under the framework discussed in chapter 2, an STE with both domestic and export authority might charge higher prices to domestic consumers and use the profits to lower the export price. The market-distorting potential of CWB in domestic and export sales depends on whether CWB is selling wheat or barley. In the case of wheat, Canada’s small domestic consumption of wheat compared to its large export sales limits the CWB’s ability to cross-subsidize between these two markets: charging a higher domestic price would generate limited profits and therefore have a small impact on the export price (see table 3.1 for a comparison of domestic consumption of wheat versus exported wheat). As shown in table 3.2, the majority of CWB wheat sales have been to foreign markets. In addition, the CWB’s ability to raise the domestic price of wheat is also limited by the availability of imports of wheat from the United States. In the case of barley, CWB has a greater ability to use domestic prices to lower the price of barley exports since only about one-third of Canada’s barley production is exported (as shown earlier in table 3.1). However, the CWB’s domestic control is limited to barley sold for human consumption. CWB does not have control over Canadian feed barley sold domestically. In fact, CWB does not attempt to sell feed barley domestically, as shown in table 3.2, though CWB does sell about half of its human consumption barley to the domestic market. Another factor that strengthens the CWB’s position with regard to Canada’s domestic barley market is the tariff Canada places on U.S.-designated barley imports, limiting the ability of Canadians to substitute U.S. barley for Canadian barley. A USDA official said these high barley tariffs have been a point of contention between the two countries. The intent of pooling farmer wheat and barley production is to maximize the returns of Canadian farmers while minimizing the risk inherent in marketing their grain. 
Pooling effects include (1) removing the timing of sales as a factor for farmers and (2) distributing market risk while also sharing resources. Approximately 50 different grades of wheat and barley are delivered by farmers in a crop year. Also, the wheat and barley are sold in different quantities at different prices at different times of the year. In July, the farmers indicate the number of acres seeded to various crops. CWB then signs a contract with the farmers, committing itself to purchase a certain percentage of each farmer's offer. The contract indicates the quantity and quality of the wheat and barley that each farmer intends to deliver to CWB in four contract series over the crop year. The marketing year for wheat and barley lasts from August to July of the following year. According to CWB officials, the grain delivery period is longer than usual because Canada's internal transportation infrastructure limits the amount of wheat CWB can market at any one time. Farmers deliver their grain to country elevators, where it is graded and binned with similar grades entering the marketing system for exporting grain. At that time, the elevator companies make initial partial payments to the farmers. In turn, CWB reimburses the elevator companies once the grain is delivered to a shipping port. The initial payments are set by the government of Canada in consultation with CWB and are intended to cover approximately 70 to 85 percent of the anticipated price of the grain. At the end of the marketing year, CWB tallies its total revenues from marketing sales, deducts the appropriate operational and marketing costs from each pool account according to pool sales and expenses, and returns the difference to Canadian producers. Each producer's payment is based on the type of grain provided, less transportation, handling, and cleaning costs. If revenues are lower than the initial payments to the farmers, the Canadian government covers the CWB's price pooling deficit. (Pooling deficits are discussed in greater detail on p. 41.) As we reported in 1992, pooling by itself does not guarantee higher prices for farmers. The very nature of distributing the production and marketing risk means that some Canadian farmers benefit more than others in a given year. For example, a farmer who gets his or her crop into the distribution system when the international price for the commodity is at a high point will still receive no more for the grain than the average pool price. Distributed costs, such as some farmers incurring greater transportation costs to get their product to market, may also benefit some farmers at the expense of others. Some Canadian producers have questioned the underlying premise of pooling. For example, grain farmers in the province of Alberta have expressed concerns that the CWB's operations are not transparent enough to determine whether CWB is maximizing returns to farmers. In November 1995, the government of Alberta held a referendum to determine whether the provincial farmers should have the freedom to sell their grain outside CWB. The majority of Albertans voted for voluntary participation in CWB, though the result of the vote is not binding on the federal government. Other Canadian grain producers and grain traders have also questioned CWB operations, with some voicing concern that CWB inefficiencies can be hidden through the pooling process.
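The pooling and payment mechanics described above can be expressed as a simple calculation. The sketch below is illustrative only; the tonnage, prices, and costs are hypothetical, and the actual CWB accounting is more detailed (for example, it pools by grade and deducts transportation, handling, and cleaning costs per producer).

```python
# Illustrative sketch of CWB-style price pooling. All figures are
# hypothetical. Farmers receive an initial payment on delivery; at
# year-end the pool's net revenue is settled against those payments,
# with the government covering any shortfall.

def settle_pool(tonnes, initial_per_tonne, sales_revenue, costs):
    """Return (final payment per tonne, deficit covered by government)."""
    initial_total = tonnes * initial_per_tonne
    net_revenue = sales_revenue - costs
    if net_revenue >= initial_total:
        return (net_revenue - initial_total) / tonnes, 0.0
    return 0.0, initial_total - net_revenue

# Normal year: net revenue exceeds the initial payments, so farmers
# receive a final payment and the government pays nothing.
print(settle_pool(1_000_000, 120.0, 160_000_000, 20_000_000))  # (20.0, 0.0)

# Low-price year: net revenue falls short of the initial payments
# already made, and the government covers the pool deficit.
print(settle_pool(1_000_000, 120.0, 110_000_000, 20_000_000))  # (0.0, 30000000.0)
```

The second case mirrors the pool deficits discussed next: because the initial payments are guaranteed, a year of unexpectedly low prices becomes a charge to the federal government rather than to the pool's farmers.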
Some Canadian producers have questioned the underlying premise of pooling. For example, grain farmers in the province of Alberta have expressed concerns that the CWB’s operations are not transparent enough to determine whether CWB is maximizing returns to the farmers. During November 1995, the government of Alberta held a referendum to determine whether the provincial farmers should have the freedom to sell their grain outside CWB. The majority of Albertans voted for voluntary participation in CWB, though the result of the vote is not binding on the federal government. Other Canadian grain producers and grain traders have also questioned CWB operations, with some of them voicing concern that CWB inefficiencies can be hidden through the pooling process. Although the Canadian groups we questioned seek an opportunity to market their grain outside of CWB, they are not calling for the CWB’s elimination, but rather for a voluntary relationship with CWB. The Canadian government has already attempted to respond to some of the farmers’ concerns. In July 1995, the Canadian Minister of Agriculture announced the formation of a nine-member panel to review western grain marketing issues. This panel, in consultation with the Canadian public, farmers, and farm organizations, is to look at “all available facts and background information about our existing and potential markets, the commodities and products we sell into those markets, and the marketing systems we have or could have to maximize our sales volume and returns.” The panel was expected to hold local town hall meetings throughout western Canada in late 1995, followed by formal hearings in early 1996 where farmers and farmer organizations could put forward their own arguments for alternative marketing methods. A concluding report is expected to be released in 1996. CWB benefits both from federal direct subsidies and from government guarantees. As a quasi-governmental entity, CWB has its periodic operational losses covered by the federal government, which has provided CWB with almost $1 billion in government assistance over the last 10 years. Canadian wheat producers have also benefited from government transportation subsidies, though these subsidies were eliminated in 1995. In addition, CWB receives indirect subsidies, such as a lower interest rate on commercial loans as a result of its quasi-governmental status. CWB officials told us the only direct revenue CWB receives from the federal government is for the purpose of covering operational deficits. As a crown corporation, CWB can charge its unliquidated financial obligations directly to the Canadian government; this status has protected CWB from price pooling losses. Since 1943, CWB has experienced 3 crop years with wheat pool deficits and 7 crop years with barley pool deficits (see table 3.3). Pool deficits have also increased in recent years. The wheat pool deficit in the 1990-91 crop year, by far the largest of the pool deficits, cost the federal government over $695 million. The losses in the 1990-91 crop year accounted for approximately 57 percent of the total pooling deficits recorded since the establishment of CWB. CWB attributed the 1990-91 pooling loss to a price collapse in wheat and barley markets caused by a “trade war” between the United States and the EU, in which both sides heavily subsidized their wheat and barley exports. CWB added that a record world cereal crop in 1990 also contributed to the decline in the prices received for these commodities. CWB has also benefited from indirect subsidies. One such subsidy, indirect for CWB and Canadian wheat and barley producers though a direct subsidy to the Canadian railroad, took the form of transportation subsidies. The 1983 Western Grain Transportation Act, which modified the Crow’s Nest Pass Agreement, was enacted to subsidize Canadian rail transportation. This subsidy amounted to approximately $410 million during the 1994-95 crop year. According to a USDA official, this transportation subsidy encouraged farmers to grow primarily those crops covered under the program, such as wheat and barley.
Because of internal budget constraints and Canada’s obligations to reduce subsidies under the Uruguay Round, on August 1, 1995, the Canadian government eliminated the transportation subsidy provided under the Western Grains Transportation Act. To offset the impact of this change, the government intends to compensate Canadian farmers by (1) providing about $1.2 billion as a lump sum payment to the farmers, (2) establishing a $220-million Adjustment Assistance Fund, and (3) offering about $732 million in new export credit guarantees for Canadian agricultural products. Transportation pricing will also change in the 1995-96 crop year due to the elimination of deductions on transportation costs for wheat and barley traveling to eastern Canada. In the past, all grain producers had to support the additional costs for grain going eastward, even though the majority of grain was shipped from western ports. During the 3-year phaseout period of this subsidy and afterward, producers shipping their grain east will begin to bear the full cost of the transportation. USDA officials said the elimination of the transportation subsidies may affect what Canadian farmers grow and where they sell their goods. Since the government will no longer subsidize the transportation costs of crops being exported, Canadian farmers are expected to diversify out of grain crops, plant more high-value crops, and expand livestock production. Nonetheless, the effect of the eliminated subsidies on U.S.-Canadian trade is still uncertain. According to USDA’s November 1995 Agricultural Outlook, “more Canadian grains could eventually move south because of the lower transportation costs.” However, the report noted that increased crop diversification and livestock production in Canada could increase the demand for U.S. grain. The CWB’s 1993-94 annual report states that a partnership of farmers and government creates a link between farmers and the federal government that offers “distinct economic advantages.” The report goes on to cite the benefits of this relationship, including government backing of CWB borrowing, which “translates into lower interest costs.” CWB officials told us that although the government does not provide CWB with any loans or preferential treatment, it guarantees CWB an excellent credit rating by virtue of its status as a crown corporation. This credit rating assists CWB in obtaining the loans it needs at favorable rates on commercial markets. According to CWB officials, CWB does not benefit from special tax treatment or the ability to levy assessments on Canadian wheat and barley producers. Although CWB does not pay taxes to the federal government, the returns paid to the farmers are taxed as regular income to the farmers. In addition, CWB officials told us that a voluntary levy was initiated in the 1994-95 marketing year to help fund grain research at the Western Canadian Grain Research Institute. A CWB official said that 30 percent of the wheat and barley producers have declined to participate in the voluntary levy. The United States, as well as other grain-trading countries, has questioned the CWB’s monopoly authority over Canadian wheat and barley as well as the lack of transparency in the CWB’s marketing system. This lack of transparency in the CWB’s pricing methods may provide CWB with greater pricing flexibility than is found among private sector traders. CWB has attempted to address some of these transparency concerns.
However, a recent U.S.-Canadian joint commission has questioned both CWB and U.S. trade practices. Canada is the third-largest export market for U.S. agricultural commodities. USDA’s ERS preliminary figures for 1995 showed the United States exporting $5.8 billion in agricultural products to Canada. Earlier ERS forecasts showed Canada as the second-largest source for U.S. agricultural imports, with the United States importing $5.2 billion in agricultural products from Canada in 1995. With respect to grain, the United States has run a trade deficit with Canada. In 1994, the United States had a $500-million trade deficit with Canada in the grains, grain products, and animal feeds sector. The U.S. government, as well as other grain-exporting countries, has expressed concerns about the CWB’s monopoly power and the limited transparency of its operations. U.S. critics of CWB contend that CWB has an unfair pricing advantage due to its status as the single selling authority. According to one USDA official, the day-to-day “replacement cost” for wheat is more readily apparent in the United States, with its commodity markets, than it is for CWB. In such a case, the grain traders in the United States are “price takers,” that is, they are required to buy their grain at the given market price without being able to affect that price. The CWB’s exclusive purchasing authority over wheat and barley for human consumption provides CWB with a more secure source of supply, as well as more control, than would be the case for a private exporter. USDA officials expressed concern that the CWB’s margin between the initial price and the final price paid to the Canadian wheat and barley producers allows CWB to adjust transaction prices at will, even if it is to the detriment of Canadian producers. As stated earlier, some Canadian producers are also concerned that such detrimental pricing policies could occur without greater transparency over CWB operations. Some U.S. officials are also concerned about CWB using its grain quality standards to undercut U.S. producers. According to USDA officials, CWB has used high quality as a marketing strategy, often providing higher protein content in its wheat than the customer requests and thus developing an expectation that CWB’s wheat is a better value for the money. In comparison, the U.S. wheat industry tends to blend wheat to the specifications of the buyer. Although the CWB’s approach may be a useful marketing strategy, it also has the effect of providing a benefit to a buyer without CWB getting the full value of the higher-quality wheat. The uniform grading standards that CWB uses, although often cited as a benefit of CWB because they provide consistency across sales, may at times be a liability to Canadian producers. A USDA official told us about U.S. concerns that CWB has downgraded wheat to “feed quality” using these standards, even though the “feed” grain is later milled in the United States. In such a situation, Canadian producers would be deprived of the full value of their wheat. Two recent Canadian reports indicate continued attempts to understand the benefits and costs of the CWB’s single selling authority status. The first report, authored by three Canadian agricultural economists with the assistance of CWB, estimated that CWB has provided greater revenues and lower management costs to Canadian wheat producers than would have been the case had the producers sold their grain through a multiple-seller system.
The report estimates that from 1980 to 1994, Canadian wheat producers received additional revenues ranging from a low of $18.88 to a high of $34.47 per ton of wheat due to the single selling authority marketing system. A second report, prepared by two Canadian agricultural economists with the assistance of the Alberta Department of Agriculture, found no evidence of CWB price premiums for wheat and barley when prices were examined at the farm level. The report also found that the hidden costs of the single selling authority marketing system to producers could be as high as $20 per ton for wheat and more than $20 per ton for barley. Moreover, the report indicated that hidden costs to Canadian taxpayers for having a single selling authority could be another $5.50 per ton for wheat and about $9 per ton for barley. CWB has attempted to provide greater transparency in its operations and final price forecasts. CWB has started to provide more detail on expected returns for CWB grains as well as daily price quotes. In 1993, CWB introduced the Pooled Return Outlook/Expected Pool Return to forecast pool returns for each crop year in order to assist producers with seeding, marketing, and financial planning decisions. A truck-offer program was also initiated to provide daily price quotes, based directly on the Minneapolis futures and cash wheat markets, to Canadian farmers wishing to buy back their grain. Finally, CWB started a weekly South East Asian News Flash publication showing the CWB’s offer/tradable prices for grain at West Coast ports. Even so, without additional transaction price information, there is little likelihood that the transparency issue between Canada and other grain-trading nations will be resolved. CWB, like private sector grain traders, continues to protect this information as commercially sensitive data. One USDA official said U.S. grain traders are just as likely as CWB to treat this information as proprietary. In addition, CWB does not always have access to end-user transaction prices. According to CWB officials, accredited exporters are not, in all cases, required to provide CWB with the final transaction price or even the identity of the customer. These accredited exporters purchase wheat from CWB and then resell it to U.S. customers, as well as other customers throughout the world. Canadian government officials have stated that the transparency issue has already been resolved, claiming that previous U.S. reports have exonerated CWB from charges of violating applicable trade agreements and U.S. law. USDA officials we interviewed early in our review acknowledged that they did not have any evidence that CWB was violating existing trade agreements. Nonetheless, trade differences between the United States and Canada have led to curbs on Canadian wheat imports into the United States as well as the establishment of a joint commission to look at all aspects of the two countries’ respective marketing and support systems for grain. In response to growing U.S. criticism of Canadian exports of durum wheat to the United States, on November 17, 1993, the President requested that the U.S. International Trade Commission begin a section 22 investigation. The investigation began on January 18, 1994, with the purpose of reviewing U.S. imports of wheat, wheat flour, and semolina from all countries, including Canada. The International Trade Commission issued its final report in July 1994.
As a result of the investigation and negotiations, a memorandum of understanding between the United States and Canada with respect to cross-border wheat trade was made effective on September 12, 1994. The memorandum called for (1) a Joint Commission on Grains to be established to further examine the grain problems between the two countries; (2) a 12-month period, beginning September 12, 1994, during which the United States would apply a new schedule of tariffs on the importation of wheat into the United States; and (3) a 12-month hold on all countermeasures under the North American Free Trade Agreement (NAFTA) or GATT, as well as a hold on all countermeasures inconsistent with either NAFTA or GATT provisions. The Canada-U.S. Joint Commission on Grains released its preliminary report in June 1995 and its final report in January 1996. The final report made recommendations in a number of areas, including (1) policy coordination, (2) cross-border trade, (3) grain grading and regulatory issues, (4) infrastructure, and (5) domestic and export programs and institutions. In relation to domestic and export programs, the final report noted that “the use of discretionary pricing by governments, directly through their programs or entities, had led to trade distortions.” As a result, the report recommended that both the United States and Canada reduce and remove these trade distortions by (1) the United States eliminating, or significantly reducing with a view to eliminating, its export subsidy programs such as EEP for all cereals and their products and (2) CWB being “placed at risk of profit or loss in the marketplace” or conducting itself in an equivalent manner. In this section, the report also recommended removing trade-distorting effects in each country’s domestic agricultural policies. Finally, the Joint Commission noted that the implementation of these recommendations will depend heavily on other grain-exporting countries, such as the EU and Australia, undertaking comparable actions. Among other things, the final report also recommended that (1) Canada and the United States undertake a regular and structured consultative process concerning grain policy issues with the goal of reducing trade-distorting policies and (2) a bilateral producer/industry-based Consultative Committee be established to handle short-term cross-border issues as an “early warning system for trade difficulties.” AWB has limited capability to distort international wheat markets. It has monopoly power over wheat exports but does not routinely receive direct subsidies from the Australian government. The AWB’s initial payments to farmers are underwritten by a government guarantee. Because of this guarantee, it most likely receives favorable interest rates on its loans. Moreover, its access to additional funds allows it to diversify risk by investing in other projects. AWB has the capability to be flexible in its pricing; this flexibility could lead to either lower or higher returns for producers. Although Australia is a country of fewer than 18 million people, its agricultural exports totaled $12.2 billion in 1992-93. This equates to about one-quarter of Australia’s total export income. Wheat ranks as Australia’s fourth-largest export; Australian wheat accounted for 12.9 percent of total world wheat exports in 1994, with exports representing about 80 percent of all wheat grown in Australia. Australia ranks as the world’s fourth-largest wheat exporter.
AWB is a statutory marketing authority with federal and enabling state government legislation providing it with the sole license to export Australian wheat. AWB was established in 1939 to “acquire, with certain exceptions, all wheat held in Australia and to arrange for its disposal in view of low world prices prevailing and the marketing and transport difficulties created by the wartime conditions.” However, when World War II ended and the justification no longer existed, AWB was not disbanded. It was reconstituted in 1948 to establish it as the central marketing authority for wheat and to enable it to administer various wheat stabilization and marketing arrangements. New legislation in 1989 modified the AWB’s role by deregulating the domestic market and expanding the AWB’s operating domain to include other grains produced in Australia as well as wheat from other countries. The legislation also removed price supports and established the Wheat Industry Fund (WIF), which is discussed in more detail in the following section. AWB is a national and international grain marketer, financing and marketing wheat and other grains for growers. AWB also spends a portion of its budget on market development and promotion, especially in the Asia/Pacific region. All profits are distributed to growers, even though AWB is not officially a grower-owned organization. Australia’s grains industry is not governed by a single entity. Some grains are freely traded in all states, while others are governed by state boards in certain states. AWB is the only organization in Australia that has acquisition authority for a particular grain (wheat) across all states, and thus the only organization with the potential to affect the national market. See table 4.1 for an overview of the various boards’ acquisition authorities. Wheat growers may deliver their wheat to AWB, which operates a number of pools each year. The wheat is segregated by class and variety, and growers receive initial payments upon delivery. AWB deducts storage, transport, operating, and marketing costs from sales and returns the remainder to the farmers once all the wheat has been sold. The AWB’s major markets lie in Asia and the Middle East, as shown in figure 4.1. AWB is responsible to both producers and the government, and its managing board includes representatives from both groups. Producers sell their wheat to AWB through a pooling system that averages individual producer returns, thus dispersing the producers’ financial risk. AWB payments to producers are not immediate, though, since product pools may take years to close. Since AWB must compete with other suppliers in the domestic market, it does not have the capability to cross-subsidize between its domestic and foreign market sales. AWB’s managing board consists of a nonexecutive chairperson, the Managing Director, eight members with special expertise in wheat production or other specified fields, and a government representative. The eight members with special expertise include growers, as evidenced by the current board composition. DPIE, a government agency, loosely oversees the AWB’s activities by appointing a government representative to sit on the AWB’s managing board; requiring an annual report, which is submitted to Parliament; requiring a 3- to 5-year corporate plan, which is approved by DPIE’s Minister; and requiring an annual operating plan, which is not subject to ministerial approval. AWB must also consult with the Grains Council of Australia annually.
For the past 57 years, AWB has had the statutory authority to be the primary buyer and seller of Australian wheat. It is the only entity licensed by the Australian government to export Australian wheat to the global marketplace. Thus, growers who wish to take advantage of the export market are forced to sell their product to AWB. AWB purchases all Australian wheat bound for export and combines it into a number of pools based on quality and variety. AWB then sells the wheat on the international market and returns the proceeds, minus expenses, to growers. Through the pooling system, all growers of a similar quality and variety of wheat generally receive the same price. This means that bad luck in delivering their products during a part of the marketing year when prices are low will have less impact on individual producers. However, it also means that those who were able to deliver the product at a time of higher-than-average prices receive a lower return. Actual net pool revenues may vary based on individual grower transportation and storage costs. The majority of wheat produced in Australia is delivered for marketing and payment within the AWB’s pooling system for export. Various criteria govern the pools, including quality, time of delivery, location, and category of wheat. Storage, handling, and transport costs are disaggregated and charged to growers, and marketing costs and borrowings to fund payments are pooled. AWB makes a net payment to growers at each stage of the process, mostly in advance of receipts from sale of the delivered wheat. Typically, the first payment is made within 3 weeks of delivery, sometime between November and January. It amounts to about 80 percent of the estimated total payment. The second payment is made during March, once AWB receives the entire harvest. Other payments may take place before a final payment is made. The final payment for a particular pool may not take place for years, since some of the wheat may be sold on credit terms that are not finalized for several years. AWB conducts its domestic market dealings somewhat differently. AWB offers cash on delivery to a designated silo for wheat destined for the domestic market. According to AWB, this is a relatively small quantity of wheat compared to the amount handled in the pooling system. AWB does not have the capability to cross-subsidize its sales between its domestic and overseas markets. AWB sells wheat on the domestic market, but it must compete with other sellers. According to AWB officials, AWB accounts for only 30 to 40 percent of the domestic wheat market; however, other sources claim that this figure is as high as 80 percent. Additionally, AWB does not have any control over the import of wheat to Australia, so it cannot control the entrance of other sellers to the domestic Australian wheat market. Besides its monopoly on Australian wheat exports, AWB benefits from several forms of assistance. This assistance has changed over time. The AWB’s current benefits include a government guarantee on borrowings, which most likely results in lower interest rates. WIF funds also allow AWB to maintain a strong capital base and invest in outside interests. Additionally, a number of indirect subsidies benefit AWB, including government matching research funds. Before 1989, the Australian government provided economic support to wheat farmers through a number of mechanisms, including guaranteed minimum prices and an artificial premium on domestic wheat prices.
Unit-pooled returns to growers were guaranteed at a certain level by the federal government, at least for a limited volume of exports, with the guaranteed price based on cost of production estimates by the Australian Bureau of Agricultural Resource Economics. Economic assistance to AWB also included a home consumption price that was set in line with the guaranteed price, based on the assessed cost of production. Generally, domestic end-users paid a higher price for wheat than foreign buyers under this scenario. The guaranteed minimum price for wheat was ensured through a stabilization fund. If export prices exceeded a trigger price, an effective export tax was imposed. If prices fell below the guaranteed price, a deficiency payment to growers made up the difference between the actual price and the guaranteed price, up to a specified limit. If the stabilization fund was depleted, the federal government made up the difference. When the fund was first established, growers paid into it. However, from 1958 to 1974, the federal government was forced to heavily subsidize the fund. After other changes in the 1970s and 1980s, including altering the baseline of stabilized prices, establishing guaranteed minimum delivery prices, and allowing AWB to borrow on the commercial money market, major reforms were introduced in 1989. The most important of these included deregulation of the domestic market, establishment of WIF, and abolition of the government’s guaranteed minimum pricing scheme. AWB was given some flexibility in the commercial market; among other activities, AWB may buy and sell a variety of grains. Since 1989, the Australian government has guaranteed a portion of the AWB’s borrowings to pay farmers at harvest time. This guarantee covers between 80 and 90 percent of the aggregate estimated net pool return. According to the Industry Commission, this guarantee has risen in value from $21.5 million in 1989-90 to $26.4 million in 1992-93. The government guarantee was initially established to last until June 1994, but was extended at that time to continue until June 1999 at a maximum of 85 percent. Both AWB officials and the Industry Commission agree that the government’s guarantee translates into real savings on interest rates, since the guarantee transfers the risk from AWB to the taxpayers. This guarantee results in increased net returns to the wheat industry because of the lower interest charges. WIF, a nonsales source of AWB revenue, is supported by a 2-percent levy on wheat growers. WIF serves as the AWB’s capital base and underwrites the AWB’s domestic trading operations, as well as strategic investments that support outside business activities. Growers hold equity in WIF and may transfer that equity. AWB manages the fund in conjunction with the Grains Council of Australia. As noted previously, AWB may use this fund to diversify its holdings. For example, it has used WIF funds to invest in flour mills in China and Vietnam. Thus, farmers are not completely reliant on the international wheat market for income; outside investments may help soften the blow of declining wheat prices. WIF also provides a capital base for the AWB’s domestic market activities. This practice is questionable because of its implications for other domestic sellers. That is, farmers who sell their wheat abroad through AWB may also choose to sell their wheat on the domestic market through another company.
However, the farmers then must pay a 2-percent WIF levy on their exported wheat to AWB; in effect, they could be funding the efforts of a competitor. Wheat research and development are partially funded by the government. The government matches industry research contributions dollar for dollar up to 0.5 percent of gross value of production. The Grains Research and Development Corporation manages about 22 percent of the wheat research funds for the industry. Growers funded only about 20 percent of wheat research and development in 1993-94; the remainder was provided by the Commonwealth government, state governments, and private sources. Before 1990, drought was regarded in Australia as a natural disaster, and automatic relief was available through direct subsidies. Direct subsidies were not available after 1992, when Australia instituted the National Drought Policy. This policy reclassified drought as “normal operating procedure” and removed the direct payments. Through this policy, Australia offers welfare assistance and interest rate subsidy support for exceptional drought circumstances, assistance with the creation of financial reserves, and research funds for drought impact and risk management practices. In 1992, through the Crop Planting Scheme, the government spent $2.2 million to cover 75 percent of the interest rates on commercial loans to farmers who, although considered to have viable farms in the long run, were financially unable to plant a crop. The Australian government has also compensated farmers directly for extreme circumstances that affected their incomes. For example, it made a single payment of $27 million to wheat farmers to compensate for losses due to the sanctions against Iraq. Growers are taxed on their returns under Australia’s income tax system. Primary sector producers receive tax breaks from the government under a number of measures, including two schemes that may reduce the producers’ taxes. The Income Averaging Scheme allows producers to average their income over 5 years to compute their tax rate, and the Income Equalization Deposits Scheme allows producers to make tax-deductible deposits into a fund that can be used in low-income periods. AWB does not pay a separate tax on any returns distributed to the growers, but instead pays corporate tax on holdings and investments, both domestic and abroad. The AWB’s monopoly over wheat exports allows it to set prices without fear of competition from other Australian wheat exporters. This allows for price flexibility; however, we were unable to determine whether AWB engaged in any form of price discrimination or cross-subsidization between foreign markets since data were not available from public or private sector wheat traders. Australian government-sponsored reports have suggested that the wheat industry would benefit from complete deregulation and should focus more on market-based activities. The Australian government has authorized AWB to be the sole exporter of Australian wheat, through legislation that is reviewed every 5 years. As mentioned earlier, AWB also has the right to buy and sell wheat on the domestic market, but it must compete with other firms. This differs from the pre-1989 arrangement, when AWB maintained monopolistic control over both the domestic and export markets in Australia. Since AWB is a monopoly, it may set prices for Australian wheat abroad without competition from other Australian exporters.
This status may provide some advantages of a cartel in that individual producers are unable to undercut a particular price that AWB sets. In situations where there are limited alternatives to Australian wheat, this might enable AWB to charge higher prices and capture higher returns for Australian producers. Additionally, AWB’s single-desk seller status gives it a sure source of supply for its export sales. Similarly, the averaging of all sales may allow some sales at prices lower than an individual seller would be willing to accept. This could be done either to match lower prices of a competitor or to ensure sales to a particular buyer for other reasons, and it could lead to lower returns to Australian producers. We were unable to determine whether AWB engaged in price discrimination because we did not have access to individual transaction data from AWB or private grain traders. Likewise, we were unable to determine whether AWB engaged in cross-subsidization between foreign markets. In 1993, the Australian government released a report on National Competition Policy, also known as the “Hilmer report.” This report clearly stated that STEs, known in Australia as “statutory marketing authorities,” should not exist except in certain situations based on public interest grounds. It stated that statutory marketing authorities’ anticompetitive practices, such as compulsory acquisition of product and monopoly marketing arrangements, are often grossly inefficient. Another report indicated that grain marketing boards cost more than private traders to perform similar services. In 1989, the Grains Council of Australia initiated the Grains 2000 Project. It identified a number of issues critical to the long-term profitability and sustainability of the Australian grains industry. Subsequently, the Grains Council of Australia established several strategic planning units to address these issues as they relate to specific grains. One of those units, the National Grain Marketing Strategic Planning Unit, commissioned a report on the Australian milling wheat industry. The study focused on issues that may affect the industry for the next 20 years and made recommendations that the authors believe would lead to greater efficiency. The Grains 2000 study concluded, among other things, that the benefits of single-desk selling currently outweighed the costs. However, it noted that this situation might change once the effects of GATT reforms take hold; that is, when effective subsidies are reduced and price differentials between subsidized and unsubsidized markets shrink, or if the market no longer supports the differentiation strategy. The study also reported that the net effect of AWB to the Australian grower in 1992-93 fell somewhere between a loss of $1.22 per ton and a gain of $4.93 per ton. Thus, it is unclear how much AWB benefits wheat farmers, if at all. The report made additional recommendations that, in its view, should result in greater efficiency and support for sustainable practices. These other recommendations included (1) protecting core markets and developing targeted defenses against Canada, (2) allowing AWB to trade all grains, (3) allowing AWB investment in wheat handling and elevation, and (4) making AWB a corporation with grower ownership. NZDB is a major player in the world dairy trade. Individual domestic producers have some involvement in NZDB activities and participate in a pooling process, thus dispersing risk across the entire New Zealand dairy industry.
NZDB has successfully weathered the removal of government subsidies in 1984 and maintains about a 25-percent share of the world dairy market. The NZDB’s statutory authority allows it to maintain a monopoly over dairy exports, but does not allow it to maintain control over the domestic market or collect tariffs on imports of dairy products. The NZDB’s network of subsidiaries allows it to sell a greater amount of its goods at the best possible price in other countries’ markets, especially those with a controlled dairy import market. State trading also plays a role in international trade in dairy products. Although the United States is among the world’s largest milk producers (see fig. 5.1), the country is not a substantial exporter of dairy products because the great majority of U.S. dairy production is sold to U.S. consumers. Among the top exporters of skimmed milk powder shown in figure 5.2, three of the five countries listed maintained STEs in their dairy sector: Australia, New Zealand, and Poland. In the case of the major cheese-exporting countries, shown in figure 5.3, Australia and New Zealand again appear among the listed countries. (Figures 5.2 and 5.3 report export volumes in millions of metric tons; the EU estimate excludes trade between EU countries, and “Other” is not specified in the source.) NZDB operates within the terms of the Dairy Board Act of 1961, as amended. Its group mission is to maximize the sustainable income of New Zealand dairy farmers through excellence in the global marketing of New Zealand origin dairy products. According to NZDB officials, NZDB helps New Zealand farmers obtain the maximum return possible by acting as a single agent on the export market for New Zealand dairy products. This eliminates possible “undercutting” by individual dairy cooperatives and reduces the number of global players in the competition for dairy sales. Dairy exports constitute a significant portion of New Zealand’s overall export trade; approximately 90 percent of New Zealand’s dairy products are exported. According to MAF, in the year ending December 1993, dairy product exports totaled $1.9 billion. This figure represents 33 percent of New Zealand’s agricultural exports and 18 percent of New Zealand’s total merchandise exports. (See fig. 5.4.) In recent years, New Zealand has been a leading supplier of most dairy produce to world markets; in particular, it supplies a significant percentage of the world’s exports of milk powder, butter, and cheese. Conversely, New Zealand’s dairy imports are minimal. No dairy products are subject to import-licensing requirements, and no quantitative restrictions apply to dairy products entering New Zealand. New Zealand domestic producers participate, through elected representatives, in NZDB policy direction. NZDB is accountable to its producers and reports back through a number of vehicles, including annual reports and efficiency audits. Individual producer risk is dispersed across the entire industry through the pooling process. The small New Zealand domestic market is deregulated, so any company may sell dairy products domestically. NZDB cannot engage in cross-subsidization between domestic and foreign market sales because it neither controls imports nor sells its dairy products in the domestic market. NZDB is owned by the New Zealand dairy industry. Policy direction is determined by a managing board of 13 directors, 11 of whom are elected by the cooperative dairy companies and are themselves both directors and shareholders of their own companies.
The other two directors are appointed by the New Zealand government, on the recommendation of NZDB, on the basis of their commercial expertise. NZDB reports back to the dairy industry through a number of vehicles. Its publications include an annual report, which has financial and marketing information, and newsletters. It also conducts annual general meetings for farmers and meets with producer organizations, such as the Federated Farmers of New Zealand. MAF acknowledged that producer boards were not subject to the same competitive disciplines as commercial marketing organizations because of their statutory position and powers. Therefore, the government decided to subject these boards to performance and efficiency audits every 5 years, with the requirements for the audits specified in each board’s legislation. According to MAF, these audits would help to give producer boards more financial autonomy and make them more responsible for their actions, improve their performance, and make them more accountable to farmers for their commercial performance and to the parliament for the exercise of their statutory powers. The NZDB’s first performance and efficiency audit was published in October 1993. It was conducted by the Boston Consulting Group on behalf of the New Zealand government. The audit’s overall purpose was to “assess the effectiveness and efficiency of the NZDB’s activities in achieving its mission.” It covered 11 major topics, ranging from personnel to communication. The overall assessment was “seven out of ten.” Recommendations included a need to develop an industry vision, to conduct a review of the payments system, and to improve the key processes through which NZDB creates value for shareholders. Individual producer risk is dispersed through the pooling system. According to its enabling legislation, NZDB has the statutory power to purchase and market all dairy products intended for export. NZDB acquires these products from approximately 14,000 milk producers through a series of 15 dairy cooperatives. The processed milk is sold on the export market, and returns (minus marketing and operating costs) are distributed to the cooperatives, which in turn distribute them to the individual farmers. Cooperatives pool the milk separately, so producers are paid according to the quantity of milk provided to the individual cooperative. More efficient cooperatives will have lower operating costs and will thus provide higher payments to the producers. NZDB sells its products in export markets through a worldwide network of holding companies and subsidiaries. The number of cooperatives has decreased over time, but the volume of dairy products has generally increased. The number of dairy cooperatives in New Zealand has fallen from 95 in June 1970 to 15 companies as of May 1995. However, the volume of dairy products manufactured, in actual tons, has grown in several sectors. NZDB officials credit this phenomenon to the increased efficiency of the cooperatives and their operations and the effect of a free market system. New Zealand’s domestic dairy market has been deregulated since the late 1980s. Thus, NZDB does not have the same control over the domestic market as it does the export market. In fact, NZDB stated that it is not involved directly in the marketing of dairy products in New Zealand, but that it does have a role in coordinating market promotion and other activities on behalf of the wider industry. 
Only 10 percent of New Zealand milk remains in New Zealand, either as raw milk or as processed dairy products. The domestic dairy market is very small compared to the export market. For example, NZDB exported 205,000 tons of butter in 1993-94, while local market sales in 1993 totaled 32,000 tons. Similarly, typical cheese exports in 1993 were 124,000 tons, while local market sales in 1993 were 29,000 tons. Since NZDB does not have control over the domestic dairy market, cross-subsidization between domestic and foreign sales is not possible. The NZDB’s control over imports of dairy products was removed in the mid-1980s, and it does not receive any tariffs from imported dairy products. Moreover, NZDB does not even sell dairy products in New Zealand; individual dairy cooperatives may compete for shares of the domestic market. Even though NZDB has sole export authority for New Zealand dairy products, it has not received direct government subsidies since 1984, when a governmentwide reform removed most agricultural subsidies. The New Zealand government continues to support NZDB indirectly through a research grant scheme, which benefits the dairy industry as a whole. However, the New Zealand government has removed some of the NZDB’s advantages, including its access to New Zealand’s Reserve Bank credit. New Zealand has instituted a number of reforms that directly affected NZDB. The first and most important reform was put in place in 1984; it removed direct government subsidies to farmers. This reform was instituted virtually overnight, abolishing more than 30 agricultural production and export subsidy programs. As a result, New Zealand farmers lost nearly 40 percent of their gross income, and producer boards were forced to reevaluate their operations and marketing strategies and to implement new initiatives. Other terminated programs included the Export Programme Suspensory Loan Scheme and portions of the Export Market Development Tax Incentive Scheme. The latter scheme was available to taxpayers who incurred expenditures for the purpose of seeking markets, retaining existing markets, obtaining market information, doing market research, creating or increasing demand for the export of goods and services, or attracting tourists to New Zealand. The Supplementary Minimum Price program applied to the dairy industry, but only one payment, of $37.8 million, was made in 1978-79. The producer subsidy equivalent (PSE) on milk fell from a peak of 67 percent in 1983 to an average of 15 percent in 1985-87 and an estimated 1.7 percent in 1990. The NZDB’s mission is further supported by enforcement provisions written into its enabling legislation. The New Zealand government, through the Dairy Board Act of 1961, may impose fines on persons or companies that circumvent NZDB and try to export dairy goods without a license. This fine may not exceed $1,187. According to NZDB officials, such fines have not been imposed at any time. NZDB does not receive any direct grants or concessionary loans from the government for research, but its research affiliate may compete for government research grants. Through its Public Good Science Fund, the government sponsors a variety of projects; funds are bid upon by various research institutions. NZDB sponsors the New Zealand Dairy Research Institute, which focuses on fundamental, long-term dairy research; NZDB also maintains research centers in Singapore, the United Kingdom, Japan, and the United States. Other research on dairy issues takes place in New Zealand universities and at Crown Research Institutes.
In the 1980s, NZDB lost access to Reserve Bank of New Zealand credit and has since been forced to turn to the commercial lending market to obtain loans; it can no longer obtain cheap loans through the Reserve Bank. In 1983, the outstanding deficit in the Dairy Industry (Loans) Account, an account that served as an overdraft facility for NZDB, was converted to a long-term loan. This $725-million loan, considered a substantial subsidy by the New Zealand government, was repayable over 40 years. In 1986, the New Zealand government conceded part of this loan and allowed NZDB to pay off the balance for $102 million as part of its transition in dissolving the NZDB’s financial arrangements with the Reserve Bank. NZDB benefits from a good credit rating, which may be related to its status as a government-established STE. However, NZDB no longer benefits from tax concessions; it is taxed on its retained earnings the same as any other enterprise. Producers pay individual income tax on their returns. NZDB’s sole export authority affords it the opportunity to achieve economies of scale and provides other benefits. By using its statutory authority to export dairy products to the United States and other countries, NZDB benefits from its extensive subsidiary network and from higher U.S. prices, since the U.S. price for dairy goods exceeds the standard world price. The NZDB’s ability to invest in outside companies also allows it to diversify its economic interests. While price discrimination is possible and not prohibited under GATT, we were unable to analyze the extent to which NZDB or other exporters engage in this practice because we did not have access to public or private companies’ transaction-level data. Likewise, we were unable to determine whether NZDB engaged in cross-subsidization between its higher- and lower-priced foreign market sales. The New Zealand Dairy Board Act of 1961 granted NZDB the sole authority to purchase and market all export dairy products from New Zealand. That is, all New Zealand dairy products destined for export are under the NZDB’s jurisdiction; thus, NZDB is assured a certain level of product supply, and the NZDB buying price is the prevailing level of compensation available to producers. To achieve this, NZDB purchases dairy produce from the cooperative manufacturing dairy companies and sells it through a worldwide marketing network of subsidiary and associate companies. NZDB is also responsible for packaging, transporting, storing, and making shipping arrangements for its exports. NZDB has the authority to grant export licenses to other companies that want to export dairy products on their own. It may choose to grant such licenses if it is not interested in exporting a particular dairy commodity. For example, companies have successfully obtained licenses to export ice cream and certain specialty cheese products, as NZDB does not market these products. This export authority provides NZDB with the opportunity to achieve economies of scale in its operations, which translates into the ability to spread the cost of its international operations across a large volume of sales. NZDB officials noted that individual farmers or cooperatives would have a difficult time marketing dairy products on their own; thus, NZDB provides a mechanism through which the New Zealand dairy farmer can compete in a global marketplace. NZDB’s exclusive authority and size translate into market power for NZDB in certain world dairy markets.
Situations in which STEs or private firms supply a large share of world markets increase concerns about suppliers working together to exercise their market power. As an example of the possible exercise of this market power, U.S. dairy industry sources provided us with a June 1995 proposal from NZDB addressed to the Australian Dairy Industry Council. This proposal suggested that the two industries coordinate their supply of dairy products to satisfy new EU quotas. We spoke with industry officials from both New Zealand and Australia, who were unable to pinpoint the exact origin of the proposal. According to NZDB officials, this was an effort to respond to the two governments’ agreement to maintain closer economic relations. Officials from both countries’ dairy industries affirmed that the proposal was dropped. NZDB benefits from the U.S. market, as well as other restricted markets around the world, because of its subsidiaries and those markets’ domestic dairy price support programs. NZDB sells its products through 88 subsidiary companies in more than 60 countries around the world, including each of New Zealand’s largest trading partners. These companies are managed by geographically oriented holding companies. NZDB believes that this subsidiary framework allows it better access to markets, and these subsidiaries appear to offer particular advantages in markets restricted by quotas, such as the United States. The NZDB’s subsidiaries, such as Western Dairy Products, Inc., can import New Zealand cheese that is subject to quota and help NZDB realize profits that would otherwise go to unaffiliated U.S. importers. For example, under the U.S. quota system, New Zealand’s allocation of cheese can be assigned to any licensed U.S. importer. The New Zealand government has the authority to choose the U.S. importers of New Zealand cheese. NZDB may encourage the New Zealand government to select the NZDB’s own subsidiaries to import the cheese, thus keeping the cash flow within the organization. NZDB can take advantage of the difference between world and U.S. prices by selling its goods through wholly owned subsidiaries in the United States. U.S. prices are significantly higher than world prices because (1) the U.S. dairy program keeps domestic prices higher than they would otherwise be and (2) the U.S. cheese import quota system restricts the supply of generally lower-priced imports. Thus, NZDB can get the greatest advantage for its sales by working through subsidiaries. In 1988, the New Zealand government granted NZDB the “powers of a natural person.” This allowed NZDB to, among other commercial practices, enter into contracts and invest in other businesses. NZDB has taken advantage of this privilege by investing in businesses in other countries and thus diversifying its economic interests. For example, during the 1993-94 season, NZDB formed New Zealand Milk Products (Egypt) Ltd., a 100-percent subsidiary to manufacture and market ghee, and New Zealand Milk Products Treasury (S) Pte Limited in Singapore as a treasury and reinvoicing center for the South East Asia region. We could not determine whether NZDB engaged in price discrimination because we did not have access to public or private firm transaction data. Similarly, we were unable to ascertain whether NZDB subsidized its sales in one foreign market with higher-priced sales in another foreign market. Some U.S.
dairy industry sources expressed concerns regarding the NZDB’s potential to cross-subsidize its sales between foreign markets, but we had insufficient data to make a judgment about this practice.
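To illustrate the quota-rent mechanics described above, the sketch below uses invented prices and a hypothetical quota volume (not NZDB or USDA figures): when the in-quota U.S. price exceeds the world price, whoever holds the import license captures the difference, and an affiliated importer keeps that margin within the organization.

    # Hypothetical illustration of quota rent capture through an affiliated
    # importer (invented prices and volume, not NZDB data).
    us_price = 3000.0      # $/ton inside the quota-protected U.S. market
    world_price = 2000.0   # $/ton on the open world market
    quota_tons = 10_000

    quota_rent = (us_price - world_price) * quota_tons
    # With an NZDB subsidiary holding the import license, this margin stays
    # within the organization rather than going to an unaffiliated importer.
    print(quota_rent / 1e6)   # 10.0 ($ millions)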
Pursuant to a congressional request, GAO reviewed three state trading enterprises (STE), the Canadian Wheat Board (CWB), the Australian Wheat Board (AWB), and the New Zealand Dairy Board (NZDB), focusing on: (1) the potential capability of export-oriented agricultural STEs to distort trade; and (2) the specific potential capability of CWB, AWB, and NZDB to engage in trade-distorting activities based on their status as STEs. GAO found that: (1) it is necessary to consider STEs on a case-by-case basis to understand their potential to distort trade; and (2) the three STEs reviewed have varying capabilities to potentially distort trade in their respective commodities, although in each case these capabilities have generally been reduced over recent years due to lower levels of government assistance. CWB benefits from: (1) the Canadian government's subsidies to cover CWB's periodic operational deficits; (2) a monopoly over both the domestic human consumption and export wheat and barley markets, which may allow for cross-subsidization; and (3) pricing flexibility through delayed producer payments. Canada's elimination of transportation subsidies in 1995 has reduced some of the indirect government support going to its wheat and barley producers, and ongoing Canadian reviews of its agricultural policies may reduce the control of CWB in the future. AWB has not received direct government subsidies in several years but enjoys a government guarantee on its payments to producers. It also enjoys indirect subsidies in the form of favorable interest rates and the authority to collect levies from producers for investment. The deregulation of Australia's domestic grain trade and the decline of direct government assistance have lessened the potential for trade-distorting practices by AWB. Recent studies have challenged the premise behind a single selling authority, but AWB's monopoly over wheat exports still provides it with a sure source of supply. NZDB is relatively subsidy free but benefits from its monopoly over New Zealand dairy exports and its extensive subsidiary structure worldwide. NZDB's size and exclusive purchasing authority for export also translate into market power for NZDB in certain world dairy markets. Its subsidiaries allow it to keep profits from foreign sales within the organization and take advantage of the difference between world prices and those of the country in which it is selling the goods, such as the United States. NZDB's potential to distort trade due to direct government subsidies was eliminated during the 1980s when New Zealand deregulated the domestic dairy market and stopped offering dairy farmers direct government subsidies.
The Decennial Census is at a critical stage: the 2008 Dress Rehearsal, in which the Bureau has its last opportunity to test its plans for 2010 under census-like conditions. The dress rehearsal features a mock Census Day, now set for May 1, 2008. Last year at this time, the Bureau carried out a major dress rehearsal operation—address canvassing—in which the Bureau updated address lists and collected global positioning coordinates for mapspots. The largest field operation of the dress rehearsal was to have begun this month. In this operation (nonresponse follow-up), field staff were to conduct face-to-face interviews with households that did not mail back their questionnaires. Prior to the redesign effort, the Bureau had already changed its plans for the dress rehearsal, in part, to focus greater attention on the testing of technology. In a November 20, 2007, decision memo, the Bureau announced that it would delay Census Day for the dress rehearsal by 1 month, to May 1, 2008. The Bureau also listed a number of operations it no longer planned to rehearse, including group quarters enumeration and census coverage measurement. In February 2008, the Bureau announced that it would remove from the scope of the FDCA program contract the development of all systems and software associated with the census coverage measurement operation. The redesign approach selected by the Secretary will require that the Bureau quickly develop and test a paper-based nonresponse follow-up operation. Any paper-based option has its own set of unique issues, such as setting up operations to support paper field data collection centers and seeking printing solutions for enumerator forms. Among other issues, decisions on a printing solution will need to be made soon. Although the Bureau has carried out paper-based operations before, in some cases they now involve new procedures and system interfaces that, as a result of their exclusion from the dress rehearsal, will not be tested under census-like conditions. For nonresponse follow-up in 2010, the Bureau will be using newly developed systems for integrating responses and controlling workload. For example, the Bureau will need to rely on a newly developed system called the Decennial Response Integration System to identify households that have not returned census forms and to collect the results of enumerators’ in-person nonresponse follow-up interviews. Dropping the use of the HHCs for nonresponse follow-up and reverting to paper for that operation this late in the decade also precludes nonresponse follow-up from being fully tested in the dress rehearsal. Under the delayed dress rehearsal, this operation was to begin next month, soon after households in dress rehearsal locations were to return their census forms. A paper operation requires different training, maps, and other material to be prepared prior to the operation. The Bureau has announced no specific plans for conducting field testing of certain key operations, such as nonresponse follow-up. Without sufficient testing, operational problems can go undiscovered and the opportunity to improve operations will be lost. The redesign’s move from the use of HHCs to a paper-based nonresponse follow-up operation may limit the Bureau’s ability to reduce follow-up with persons who are late in returning their census questionnaires. One of the primary advantages the Bureau cited for using HHCs was the ability, as late mail returns came in, to remove those addresses from enumerators’ assignments—preventing enumerators from doing unnecessary work. According to the Bureau, in 2000 enumerators visited over 4 million households that had returned their census form late. In 2004, the Bureau tested the capability of an earlier prototype of the HHC to adjust workloads by identifying late mail returns. We reported in 2007 that, based on these tests, it appears that if the Bureau had possessed this capability during the 2000 Census, it could have eliminated the need to visit nearly 773,000 late-responding households and saved an estimated $22 million (based on our estimate that a 1 percentage point increase in workload could add at least $34 million in direct salary, benefits, and travel costs to the price tag of nonresponse follow-up).
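The arithmetic behind these workload estimates can be reconstructed as follows. The roughly 120 million-address base used below is our illustrative assumption (it is not stated in this testimony); it approximates the 2000 Census mailback universe and reproduces the reported figures.

    # Back-of-the-envelope check on the $22 million estimate (the ~120 million
    # address base is an assumption for illustration, not a figure from the
    # testimony).
    TOTAL_ADDRESSES = 120_000_000
    COST_PER_POINT = 34_000_000     # at least $34M per 1-point workload increase
    late_households = 773_000

    points_saved = 100.0 * late_households / TOTAL_ADDRESSES   # ~0.64 points
    savings = points_saved * COST_PER_POINT
    print(round(savings / 1e6, 1))   # ~21.9, i.e., about $22 million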
One of the primary advantages the Bureau cited for using HHCs was the ability, as late mail returns came in, to remove those addresses from enumerators' assignments—preventing enumerators from doing unnecessary work. According to the Bureau, in 2000 enumerators visited over 4 million households that had returned their census form late. In 2004, the Bureau tested the capability of an earlier prototype of the HHC to adjust workloads by identifying late mail returns. We reported in 2007 that, based on these tests, if the Bureau had possessed this capability during the 2000 Census, it could have eliminated the need to visit nearly 773,000 late-responding households and saved an estimated $22 million (based on our estimate that a 1 percentage point increase in workload could add at least $34 million in direct salary, benefits, and travel costs to the price tag of nonresponse follow-up). The Director of the Census Bureau stated that he believes the Bureau can still partially adjust enumerator workload to recognize late mail returns without the use of HHCs. To achieve this objective, the Bureau will need to specify the process it will use and conduct appropriate tests. The redesign will also affect the 2010 Census address canvassing operation. The Secretary's decision to use the HHCs for the 2010 address canvassing operation means that certain performance issues with the handheld technology must be addressed promptly. Field staff experienced difficulties using the technology during the address canvassing dress rehearsal. For example, workers reported problems with HHCs when working in large assignment areas during address canvassing. The devices could not accommodate more than 720 addresses—3 percent of dress rehearsal assignment areas were larger than that. The volume of data transmitted and used slowed down the HHCs significantly. Identification of these problems led the contractor to create a task team to examine the issues, and the team recommended improving the end-to-end performance of the mobile solution by controlling the size of assignment area data delivered to the HHC for both address canvassing and nonresponse follow-up operations. One specific recommendation was limiting the size of assignment areas to 200 total addresses. However, the redesign effort took another approach that uses laptops and the software application used for the American Community Survey to collect information in large assignment areas. It is not yet clear how this work-around will be carried out. Furthermore, the Bureau will need to define specific and measurable performance requirements for the HHCs, as we recommended in January 2005. Another operational issue is whether the contractor can accept changes to its address files after it completes address canvassing updates. If it cannot, the Bureau could be precluded from conducting "restart/redo" operations for an area where the address file is discovered to be incorrect. This function is critical in developing an accurate and complete address list. Without the ability to update the mailing list for "restart/redo" operations, the Bureau would have to consider not mailing census questionnaires to addresses in that area and instead delivering census forms by hand. This has the potential to significantly increase costs. The Bureau still needs to agree upon and finalize requirements for the FDCA program.
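The late mail return figures cited above lend themselves to a quick back-of-the-envelope check. The short Python sketch below is purely illustrative, not the Bureau's or GAO's cost model; it uses only the two numbers quoted in this testimony and simply divides the estimated savings by the number of avoidable visits.

```python
# Back-of-the-envelope check using only figures cited in this testimony.
# Illustrative only; this is not the Bureau's or GAO's cost model.
late_return_households = 773_000   # visits that could have been avoided in 2000
estimated_savings = 22_000_000     # dollars, per GAO's 2007 estimate

cost_per_avoided_visit = estimated_savings / late_return_households
print(f"~${cost_per_avoided_visit:.2f} saved per avoided follow-up visit")
# prints: ~$28.46 saved per avoided follow-up visit
```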
In March 2006, we reported that the FDCA project office had not implemented the full set of acquisition management capabilities (such as project and acquisition planning and requirements development and management) that were needed to effectively manage the program. For example, although the project office had developed baseline functional requirements for the acquisition, the Bureau had not yet validated and approved them. Subsequently, in October 2007, we reported that changes to requirements had been a contributing factor in both the cost increases and schedule delays experienced by the FDCA program. In June 2007, an independent contractor's assessment of the FDCA program reported requirements management problems much like those we reported in March 2006. Similar to our recommendation, the independent assessors recommended that the Bureau immediately stabilize requirements by defining and refining them. The Bureau has recently made efforts to further define requirements for the FDCA program, and it has estimated that the revised requirements will result in significant cost increases. On January 16, 2008, the Bureau provided the FDCA contractor with a list of over 400 requirements for the FDCA program to reconcile. Although some of these new requirements will be dropped based on the Secretary's recent decision, many will still need to be addressed to ensure that FDCA will perform as needed. Commerce and Bureau officials need to address critical weaknesses in risk management practices. In October 2007, we reported that the FDCA project had weaknesses in identifying risks, establishing adequate mitigation plans, and reporting risk status to executive-level officials. For example, the FDCA project team had not developed mitigation plans that were timely or complete, nor did it provide regular briefings on risks to senior executives. The failure to report a project's risks to executive-level officials reduces the visibility of those risks to the executives who should be playing a role in mitigating them. As of October 2007, in response to the cost and schedule changes, the Bureau had decided to delay certain system functionality for FDCA. As a result, the operational testing that was to occur during the dress rehearsal period around May 1, 2008, would not include tests of the full complement of Decennial Census systems and their functionality. Operational testing helps verify that systems function as intended in an operational environment. In late 2007, according to Bureau officials, testing plans for IT systems were to be finalized in February 2008. We therefore recommended that the Bureau plan and conduct critical testing, including end-to-end testing of the Decennial Census systems. As of March 2008, the Bureau still had not developed these test plans. The recent program redesign includes conducting end-to-end testing. The inability to perform comprehensive operational testing of all interrelated systems increases the risk that further cost overruns will occur, that decennial systems will experience performance shortfalls, or both. Given the redesign effort, implementing our recommendations associated with managing the IT acquisitions is as critical as ever. Specifically, the Bureau needs to strengthen its acquisition management capabilities, including finalizing FDCA requirements.
Further, it also needs to strengthen its risk management activities, including developing adequate risk mitigation plans for significant risks and improving its executive-level governance of these acquisitions. The Bureau also needs to plan and conduct key tests, including end-to-end testing, to help ensure that decennial systems perform as expected. Even without considering the recent expected cost increases announced by the Bureau to accommodate the redesign of the FDCA program, the Bureau's cost projections for the 2010 Census revealed an escalating trend since the 1970 Census. As shown in figure 1, the estimated $11.8 billion cost (expressed in constant 2010 dollars) of the 2010 Census, before the FDCA program redesign, represented a more than tenfold increase over the $1 billion spent on the 1970 Census. The 1970 Census was the first census to rely on mailing census forms to households and asking for their return by mail as a major part of the data collection. Although some of the cost increase could be expected because the number of housing units—and hence the Bureau's workload—has grown, the cost growth has far exceeded the increase in the number of housing units. The Bureau estimated that the number of housing units for the 2010 Census would increase by almost 14 percent over Census 2000 levels. As figure 2 shows, before the FDCA program redesign, the Bureau estimated that the average cost per housing unit for the 2010 Census would increase by approximately 26 percent over 2000 levels, from $69.79 per housing unit to $88.19 per housing unit in constant 2010 dollars. When the projected cost increase that accompanies the FDCA program redesign is considered, the average cost per housing unit will increase by an even greater percentage. Given the projected increase in spending, it will be imperative that the Bureau effectively manage the 2010 Census, as the risk exists that the actual, final cost of the census could be considerably higher than anticipated. Indeed, this was the case for the 2000 Census, when the Bureau's initial cost projections proved to be too low because of such factors as unforeseen operational problems and changes to the fundamental design. The Bureau estimated that the 2000 Census would cost around $5 billion. However, the final price tag for the 2000 Census was more than $6.5 billion, a 30 percent increase in cost. Large federal deficits and other fiscal challenges underscore the importance of managing the cost of the census while promoting an accurate, timely census. We have repeatedly reported that the Bureau would be challenged to control the cost of the 2010 Census. In January 2004, we reported that under the Bureau's approach for reengineering the 2010 Census, the Bureau might find it difficult to reduce operational risk because reengineering introduces new risks. To manage the 2010 Census and contain costs, we recommended that the Bureau develop a comprehensive, integrated project plan for the 2010 Census that should include the itemized estimated costs of each component, including a sensitivity analysis and an explanation of significant changes in the assumptions on which these costs were based. In response, the Bureau provided us with the 2010 Census Operations and Systems Plan, dated August 2007. This plan represented an important step forward at the time. It included inputs and outputs and described linkages among operations and systems.
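The percentage changes quoted above can be verified directly from the dollar figures in this statement. The brief Python sketch below reproduces both calculations and introduces no numbers beyond those already cited.

```python
# Reproducing the percentage increases cited in the testimony
# (per-housing-unit figures are in constant 2010 dollars).
cost_per_unit_2000 = 69.79
cost_per_unit_2010 = 88.19
print(f"{(cost_per_unit_2010 / cost_per_unit_2000 - 1) * 100:.0f}%")  # ~26%

estimate_2000 = 5.0e9   # initial projection for the 2000 Census
final_2000 = 6.5e9      # final price tag
print(f"{(final_2000 / estimate_2000 - 1) * 100:.0f}%")  # 30%
```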
However, the plan did not yet include a sensitivity analysis, risk mitigation plans, a detailed 2010 Census timeline, or itemized estimated costs of each component. Going forward, it will be important for the Bureau to update its operations plan. The assumptions in the fiscal year 2009 President's Budget life cycle cost estimate of $11.5 billion may not have included recent productivity data from last year's address canvassing dress rehearsal. According to the Bureau, the cost model initially assumed address canvassing productivity of 25.6 addresses per hour in urban/suburban areas. However, results from the address canvassing dress rehearsal showed productivity of 13.4 addresses per hour in urban/suburban areas. While the life cycle cost estimate increased slightly to $11.5 billion in the fiscal year 2009 President's Budget, these increases were attributed to other factors and not to lower-than-expected canvassing productivity. Best practices call for cost model assumptions to be updated as new information becomes available. We previously reported that the life cycle cost estimate had not been updated to reflect changes in assumptions. In July 2006, we testified that the estimate had not been updated to reflect the results of testing conducted in 2004. As the Bureau updates its estimate of the life cycle cost annually and as part of the redesign effort, it will be important that the estimate reflect changing assumptions for productivity and hours worked. Given its size and complexity, carrying out the Decennial Census presents significant challenges under any circumstances. Late changes in census plans and operations, long-standing weaknesses in IT acquisition and contract management, limited capacity for undertaking these critical management functions, the scaling back of dress rehearsal activities, and uncertainty as to the ultimate cost of the 2010 Census put the success of this effort in jeopardy. Managing these risks is critical to the timely completion of a reliable and cost-effective census. Implementing our recommendations would help the Bureau effectively manage the myriad interrelated operations needed to ensure an accurate and complete count in 2010 (Bureau officials have agreed with many of our recommendations but have not fully implemented them). The dress rehearsal represents a critical stage in preparing for the 2010 Census. This is the time when the Congress and others should have the information they need to know how well the design for 2010 is likely to work, what risks remain, and how those risks will be mitigated. We have highlighted some of the risks today. Going forward, it will be important for the Bureau to specify how it will ensure that planned dress rehearsal operations will be successfully carried out, and how it will provide assurance that the largest operation—nonresponse follow-up—will be tested in the absence of a full dress rehearsal. Likewise, the Bureau will need to establish plans for working around limitations in the technology to be used in address canvassing operations. It is critical that the Bureau ensure that the technology for conducting address canvassing is a success. The Bureau should implement prior recommendations in moving forward. Contractor-developed IT systems and deliverables need to be closely monitored to ensure that contractors are performing within budget.
As we have stressed throughout this testimony and in our prior recommendations, the Bureau needs to practice aggressive project management and governance over both the IT and non-IT components. Further, it is essential that the Bureau implement our recommendations related to information technology. The Bureau must solidify the FDCA program requirements, strengthen risk management activities, and plan and conduct critical testing of the Decennial Census systems. Mr. Chairmen, Census Day is less than 2 years away and address canvassing is 1 year away. The challenges we highlighted today call for effective risk mitigation by the U.S. Census Bureau, and careful monitoring and oversight by the Department of Commerce, the Office of Management and Budget, the Congress, GAO, and other key stakeholders. As in the past, we look forward to supporting the committee and subcommittee’s oversight efforts to promote an accurate and cost-effective census. Mr. Chairmen, this concludes our statement. We would be glad to answer any questions you and the committee and subcommittee members may have. If you have any questions on matters discussed in this testimony, please contact Mathew Scirè at (202) 512-6806 or sciremj@gao.gov or David A. Powner at (202) 512-9286 or pownerd@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Other key contributors to this testimony include Signora May, Assistant Director; Mathew Bader; Thomas Beall; Jeffrey DeMarco; Elizabeth Hosler; Richard Hung; Anne Inserra; Andrea Levine; Lisa Pearson; Sonya Phillips; Cynthia Scott; Niti Tandon; Jonathan Ticehurst; Timothy Wexler; and Katherine Wulff. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In 2007, the U.S. Census Bureau (Bureau) estimated that the 2010 Census would cost $11.5 billion, including $3 billion for automation and technology. At a March hearing, the Department of Commerce (Commerce) stated that the Field Data Collection Automation (FDCA) program was likely to incur significant cost overruns, and it announced a redesign effort. At that time, GAO designated the 2010 Decennial Census as high risk, citing long-standing concerns in managing information technology (IT) investments and uncertain costs and operations. This testimony is based on past work and work nearing completion, including GAO's observation of the address canvassing dress rehearsal. For IT acquisitions, GAO analyzed system documentation, including deliverables, cost estimates, and other acquisition-related documents, and interviewed Bureau officials and contractors. This testimony describes the implications of the redesign for (1) dress rehearsal and decennial operations, (2) IT acquisitions management, and (3) Decennial Census costs. The Decennial Census is at a critical stage in the 2008 Dress Rehearsal, in which the Bureau has its last opportunity to test its plans for 2010 under census-like conditions. Last week Commerce announced significant changes to the FDCA program. It also announced that it expected the cost of the decennial census to be up to $3 billion greater than previously estimated. The redesign will have fundamental impacts on the dress rehearsal as well as on 2010 Census operations. Changes this late in the decade introduce additional risks, making the steps the Bureau can take to manage those risks all the more important. The content and timing of dress rehearsal operations must be altered to accommodate the Bureau's redesign. For example, Commerce has selected an option that calls for the Bureau to drop the use of handheld computers (HHCs) during the nonresponse follow-up operation, and the Bureau may now be unable to fully rehearse a paper-based operation. Additionally, reverting to a paper-based nonresponse follow-up operation presents the Bureau with a wide range of additional challenges, such as arranging for the printing of enumerator forms and testing the systems that will read the data from these forms once completed by enumerators. Given the redesign effort, implementing GAO's recommendations associated with managing the IT acquisitions is as critical as ever. Specifically, the Bureau needs to strengthen its acquisition management capabilities, including finalizing FDCA requirements. Further, it also needs to strengthen its risk management activities, including developing risk mitigation plans for significant risks and improving its executive-level governance of these acquisitions. The Bureau also needs to plan and conduct key tests, including end-to-end testing, to help ensure that decennial systems perform as expected. According to the Bureau, the redesign and related revision of the FDCA program are expected to result in significant increases to the life cycle cost estimate for the 2010 Census. Even without considering the recent expected cost increases announced by the Bureau to accompany the redesign of the FDCA program, the Bureau's cost projections for the 2010 Census revealed an escalating trend from previous censuses. Previously, GAO recommended that the Bureau develop an integrated and comprehensive plan to manage operations.
Specifically, to understand and manage the assumptions that drive the cost of the decennial census, GAO recommended, among other actions, that the Bureau annually update the cost of the 2010 Census and conduct sensitivity analysis on the $11.5 billion estimate. However, while the Bureau understands the utility of sensitivity analysis, it has not conducted such an analysis.
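A sensitivity analysis of this kind need not be elaborate. The Python sketch below is illustrative only and is not a reconstruction of the Bureau's cost model: it varies a single assumption, address canvassing productivity, using the 25.6 and 13.4 addresses-per-hour figures reported before and during the dress rehearsal, while the workload and hourly cost inputs are hypothetical values chosen for illustration.

```python
# Illustrative single-assumption sensitivity check, not the Bureau's model.
# Productivity figures come from the dress rehearsal results discussed in
# the testimony; the workload and hourly cost below are hypothetical.
ASSUMED_ADDRESSES = 145_000_000   # hypothetical canvassing workload
HOURLY_COST = 20.0                # hypothetical loaded cost per staff hour

for label, rate in [("planning assumption", 25.6), ("dress rehearsal result", 13.4)]:
    hours = ASSUMED_ADDRESSES / rate  # field hours scale inversely with productivity
    print(f"{label}: {hours / 1e6:.1f}M hours, ~${hours * HOURLY_COST / 1e9:.2f}B")
```

With these assumed inputs, halving productivity roughly doubles field hours and cost, which is the kind of swing a sensitivity analysis is meant to surface before it appears in actual spending.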
Personnel security clearances are required for access to certain national security information, which may be classified at one of three levels: confidential, secret, or top secret. A top secret clearance is generally also required for access to Sensitive Compartmented Information or Special Access Programs. The level of classification denotes the degree of protection required for information and the amount of damage that unauthorized disclosure could reasonably be expected to cause to national security. Unauthorized disclosure could reasonably be expected to cause (1) "damage," in the case of confidential information; (2) "serious damage," in the case of secret information; and (3) "exceptionally grave damage," in the case of top secret information. To ensure the trustworthiness and reliability of personnel in positions with access to classified information, government agencies rely on a multiphased personnel security clearance process that includes the application submission phase, investigation phase, and adjudication phase.

The application submission phase. A security officer from an agency (1) requests an investigation of an individual requiring a clearance; (2) forwards a personnel security questionnaire (standard form 86) using OPM's e-QIP system or a paper copy of the standard form 86 to the individual to complete; (3) reviews the completed questionnaire; and (4) sends the questionnaire and supporting documentation, such as fingerprints, to OPM or the investigative service provider.

The investigation phase. Federal investigative standards and OPM's internal guidance are used to conduct and document the investigation of the applicant. The scope of information gathered in an investigation depends on the level of clearance needed and whether an investigation for an initial clearance or a reinvestigation for a clearance renewal is being conducted. For example, the federal standards require that investigators collect information from national agencies, such as the Federal Bureau of Investigation, for all initial and renewal clearances. For an investigation for a confidential or secret clearance, investigators gather much of the information electronically. For an investigation for a top secret clearance, investigators gather additional information through more time-consuming efforts, such as traveling to conduct in-person interviews to corroborate information about an applicant's employment and education. After the investigation is complete, the resulting investigative report is provided to the agency. According to the Performance Accountability Council's Strategic Framework, for the purposes of IRTPA timeliness reporting, investigative time is the time in days from the receipt date of the completed personnel security package by the investigative service provider to the date the final investigative file is forwarded (or sent electronically) to the adjudicative facility.

The adjudication phase. Adjudicators from an agency use the information from the investigative report to determine whether an applicant is eligible for a security clearance. To make clearance eligibility decisions, federal requirements specify that adjudicators consider guidelines in 13 specific areas that elicit information about (1) conduct that could raise security concerns and (2) factors that could allay those security concerns and permit granting a clearance.
According to the Performance Accountability Council's Strategic Framework, for the purposes of IRTPA timeliness reporting, adjudicative time is the time in days from the date the final investigative file is forwarded to (or received electronically by) the adjudicative unit to the date of the adjudicative decision. Separate from, but related to, security clearances are suitability determinations. Executive branch agencies conduct additional suitability investigations for individuals to ensure that they are suitable for employment in certain positions. For example, the Department of Justice conducts additional suitability checks to ensure applicants for jobs with the Drug Enforcement Administration have not used drugs. In addition, Health and Human Services conducts additional suitability investigations on applicants for jobs working with children. Similarly, the Intelligence Community requires a polygraph evaluation, among other things, to determine suitability for most positions. In light of long-standing delays and backlogs in processing security clearances, Congress set goals and established requirements for improving the clearance process in the Intelligence Reform and Terrorism Prevention Act of 2004. In 2005, GAO designated DOD's personnel security clearance program as a high-risk area. As can be seen in figure 1, a number of steps have been taken to reform the process, including the following:

Role of the Office of Management and Budget in the security clearance process. In June 2005, the President issued an executive order as part of the administration's efforts to improve the security clearance process and implement the statutory clearance requirements in IRTPA. This order tasked the Director of OMB with a variety of functions in order to ensure that agency processes relating to determining eligibility for access to classified national security information were appropriately uniform, centralized, efficient, effective, timely, and reciprocal. These actions included taking a lead role in preparing a November 2005 plan to improve the timeliness of personnel security clearance processes governmentwide.

Formation of the Joint Reform Team. The Joint Security Process Reform Team, also known as the Joint Security and Suitability Reform Team or Joint Reform Team, formed in June 2007, was established by the Director of National Intelligence and the Under Secretary of Defense for Intelligence through a memorandum of agreement to execute joint reform efforts to achieve IRTPA timeliness goals and improve the processes related to granting security clearances and determining suitability for government employment. Agencies included in this governmentwide reform effort are ODNI, DOD, OMB, and OPM. The Joint Reform Team continues to work on the reform effort under the Performance Accountability Council by providing progress reports, recommending research priorities, and overseeing the development and implementation of an information technology strategy, among other things. Since its formation, the Joint Reform Team under the Performance Accountability Council has: (1) submitted an initial reform plan to the President on April 30, 2008. The plan proposed a new process for determining clearance eligibility that departs from the current system in a number of ways, including the use of a more sophisticated electronic application, a more flexible investigation process, and the establishment of ongoing evaluation procedures between formal clearance investigations.
The report was updated in December 2008 to include an outline of reform progress and further plans; and (2) issued an Enterprise Information Technology Strategy in March 2009 to support the reformed security and suitability process. According to the report, the Joint Reform Team is pursuing an approach that leverages existing systems and capabilities, where applicable, and develops new tools where necessary.

Formation of the Performance Accountability Council. Executive Order 13467 established the leadership structure for security and suitability reform, headed by the Suitability and Security Clearance Performance Accountability Council as the entity responsible for aligning security and suitability processes, holding agencies accountable for implementation, and overseeing progress toward the reformed vision. This executive order directed, among other things, that executive branch policies and procedures be aligned and use consistent standards, to the extent possible, for investigating and adjudicating whether an individual is (1) suitable for government employment, (2) fit to be a contract employee, or (3) eligible for access to classified information. The Performance Accountability Council is accountable to the President for achieving reform goals and also oversees the newly designated Security and Suitability Executive Agents. The executive order designated the Director of National Intelligence as the Security Executive Agent, the Director of OPM as the Suitability Executive Agent, and the Deputy Director for Management at OMB as the chair of the council, with the authority to designate officials from additional agencies to serve as members. The council currently comprises representatives from 11 executive agencies. In May 2010, the Performance Accountability Council proposed quality measures to address the different phases of the application process, including validating need, e-application, investigation, and adjudication. The measures also identify the responsible organization, the population covered, the collection method, and whether each is a current or future measure.

Strategic Framework. The Performance Accountability Council issued a Strategic Framework in February 2010 to articulate the goals of the security and suitability process reform. The Strategic Framework sets forth a mission and strategic goals, performance measures, a communications strategy, roles and responsibilities, and metrics to measure the quality of security clearance investigations and adjudications.

The government has reported that significant overall progress has been made in improving the timely investigation and adjudication of personnel security clearance applications. This is largely attributable to DOD, whose clearances make up the vast majority of governmentwide initial clearances. IRTPA required the executive branch to develop a plan under which, to the extent practical, each authorized adjudicative agency would be required to make a determination on at least 90 percent of initial security clearances within an average of 60 days by December 17, 2009. Within this 60-day period, IRTPA also includes periods of 40 days for investigations and 20 days for adjudications.
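The convention behind these objectives, averaging the fastest 90 percent of cases and comparing the result with 60 days, lends itself to a simple computation. The Python sketch below is illustrative only; the case durations are hypothetical, and the 90 percent cutoff follows the averaging convention described in this report.

```python
# Illustrative check against the IRTPA timeliness objective: average the
# fastest 90 percent of initial clearance cases and compare with 60 days.
# The case durations below are hypothetical.
def fastest_90_percent_average(durations_days):
    """Average of the fastest 90 percent of case durations, in days."""
    ordered = sorted(durations_days)
    cutoff = max(1, int(len(ordered) * 0.9))  # drop the slowest 10 percent
    return sum(ordered[:cutoff]) / cutoff

cases = [31, 38, 42, 47, 52, 55, 58, 63, 70, 190]  # hypothetical, in days
average = fastest_90_percent_average(cases)
print(f"{average:.1f} days; meets 60-day objective: {average <= 60}")
# prints: 50.7 days; meets 60-day objective: True
```

Note how the convention excludes the slowest tail: the single 190-day case, which might reflect an unusually complex investigation, does not count against the average.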
As can be seen in figure 2, according to the Performance Accountability Council's February 2010 annual report to Congress, during the first quarter of fiscal year 2010, two agencies—the National Geospatial-Intelligence Agency and the National Reconnaissance Office—met the timeliness objective for investigations; seven agencies—the Departments of Defense, Energy, and Health and Human Services, the Defense Intelligence Agency, the National Security Agency, the Federal Bureau of Investigation, and the Department of State—met the timeliness objective for adjudications; and five agencies—the Department of Defense, the Department of Energy, the National Geospatial-Intelligence Agency, the National Reconnaissance Office, and the Department of State—met the 60-day IRTPA timeliness objective. Furthermore, we found that one of the agencies included in our review—the National Geospatial-Intelligence Agency—met all of the IRTPA timeliness objectives in the second and third quarters, the Defense Intelligence Agency met all of the objectives in the second quarter, and DOD, which accounts for the vast majority of clearances, and the Federal Bureau of Investigation met all of the objectives in the third quarter of fiscal year 2010. We also found that timeliness varied widely among executive branch agencies. During the first three quarters of fiscal year 2010, the average for the fastest 90 percent of cases processed by the 14 agencies included in our review ranged from 22 to 96 days for investigation timeliness, 2 to 59 days for adjudication timeliness, and 30 to 154 days for overall IRTPA timeliness. Agency officials we spoke with largely attributed the wide variation among these agencies to differences in the adoption of information technology. According to OPM officials, timeliness varies among the agencies that use OPM as the investigative service provider due, in part, to differences in agency adoption of information technology, such as e-QIP, which speeds up investigation timeliness by providing OPM with more complete and better-quality data earlier in the process. With regard to adjudication timeliness, five of the agencies included in our review have developed electronic delivery, electronic adjudication, or case management and workflow tools to improve timeliness, while others have not. The requirements established by IRTPA have resulted in a governmentwide focus on improving the timeliness of initial security clearances through the Performance Accountability Council. In addition, several of the agency officials with whom we spoke reported that their agencies prioritize timeliness for security clearances. For example, several agency officials noted that the passage of IRTPA was an important factor leading to continued senior-level leadership commitment, involvement, and oversight of timeliness reform through the Performance Accountability Council. Moreover, IRTPA's annual reporting requirements have provided more information about agency timeliness to Congress, allowing for more oversight and increasing transparency and accountability. However, IRTPA does not set timeliness requirements for suitability determinations, Homeland Security Presidential Directive-12 investigations, or security clearance renewals. According to agency officials we spoke to, their agencies often prioritize initial security clearances for processing. Governmentwide efforts to improve the timeliness of personnel security clearance processing have focused on technology solutions.
For example, the Joint Reform Team and the Performance Accountability Council have encouraged agencies to use information technology solutions, such as e-QIP, electronic delivery, and electronic adjudication capabilities, to enhance automation. E-QIP facilitates more complete collection of a subject's information up front in the process and reduces errors, which lowers, on average, the time it takes for investigators to clarify the information provided. Electronic delivery shortens the adjudicative phase by eliminating delays in receiving investigative reports related to mail and courier services, such as the need at certain agencies to irradiate all incoming mail. Electronic adjudication systems use automation to review investigative reports for missing information and adjudicate cases, under certain conditions, within seconds. Agency officials indicated that timeliness has improved as agencies have adopted these capabilities. In addition, OPM, which performs approximately 94 percent of initial clearance investigations for the federal government, has also taken steps to improve timeliness. For example, according to OPM officials, OPM has made information technology enhancements to improve its processes, such as enabling electronic delivery. Aside from these governmentwide efforts, some agency officials stated that their agencies have taken steps to improve adjudication timeliness. DOD, for example, developed and implemented an electronic case management system—the Clearance Adjudication Tracking System—within several of its Central Adjudication Facilities. In 2009, the Army began electronically adjudicating secret clearances through the Clearance Adjudication Tracking System, which was a significant factor in improving the Army's average adjudicative timeliness. For example, according to the Performance Accountability Council's Strategic Framework, the average of the fastest 90 percent of initial clearance adjudications for the Army fell from 187 days in the second quarter of fiscal year 2009 to 10 days in the first quarter of fiscal year 2010. The Department of Energy also took steps to enhance timeliness. For example, in addition to using electronic delivery and e-QIP, the Department of Energy developed corrective action plans, implemented case prioritization procedures, and created a tiered adjudication structure to clear easier cases quickly while using a complex-case review board to address complex cases in a timely manner. In addition, the Department of Energy built timeliness performance into adjudicator evaluations, reduced the number of people involved in deciding cases, and increased manpower to meet workload requirements. Despite the actions that have been taken to improve timeliness, agency officials with whom we spoke identified several remaining challenges to meeting IRTPA's timeliness objectives. These challenges include agency-specific issues, such as resource constraints and manpower limitations. For example:

Resource constraints are a limiting factor for several agencies in implementing certain information technology capabilities, such as electronic delivery, case management, and workflow tools. For example, Defense Intelligence Agency officials indicated that they are constrained in implementing a key information technology that provides electronic delivery, case management, and workflow tool capabilities.
Furthermore, some agencies, especially those with relatively small clearance caseloads, find it difficult to justify large investments to develop and implement information technology systems.

Personnel limitations, such as personnel shortages or increased workloads, were also identified by several agency officials as an ongoing challenge. Some agency officials stated that their agencies already have or expect to experience personnel shortages. For example, officials from the Department of Homeland Security stated that a lack of resources is the primary reason for not meeting the timeliness objectives, but that Department of Homeland Security headquarters is currently backfilling 10 vacant adjudicator positions, which should help to alleviate the problem. In addition, some agency officials stated that their agency staff is subject to workload increases, such as periods of increased agency hiring, spikes in security clearance renewal cases, or additional duties related to corollary requirements. For example, in 2004, Homeland Security Presidential Directive-12 directed the implementation of a common governmentwide identification standard for federal employees and contractors, under which federal agencies have been required to begin issuing common identification and access badges for all individuals, including contractors, who need access to government facilities or computer networks. These badge requirements, while different from those required for security clearances, expanded the pool of staff needing investigations and adjudicative determinations. These issues are particularly challenging in places where staff already perform clearance processes as a collateral duty. We found other challenges to meeting timeliness objectives that were the result of systemic issues involving interagency and intergovernmental activities. Agencies are often unable to control certain processes needed to meet timeliness objectives when they depend on information or action from other governmental entities. For example:

Information sharing between agencies is an ongoing challenge that can manifest itself in several ways. First, investigative agencies are not in direct control of the timeliness and completeness of Federal Bureau of Investigation, state, and local law enforcement fingerprint and criminal investigation checks. For example, the results of Federal Bureau of Investigation criminal investigation checks often are returned with a classification listed as "No Pertinent," which indicates that there is no pertinent information relevant to making a clearance eligibility determination. Some agency officials with whom we spoke indicated that this type of response leaves adjudicators with incomplete information, because the designation may be either the result of a subjective judgment by an outside party as to what is relevant information or a placeholder indicating that more information is potentially available but pending or not releasable. Second, officials said the lack of digitization of records at certain federal, state, and local agencies can be a challenge to gathering information for timely completion of investigations and adjudicative decisions. When personnel files, for example, are not stored, catalogued, and made searchable through electronic means, agencies are limited to manual checks. Finally, delays and incomplete information may occur in obtaining information from intelligence agencies.
For example, some agency officials with whom we spoke stated that since they do not have direct access to clearance-related information and databases for agencies in the Intelligence Community, they rely upon manual requests for information.

Investigation services quality and cost are an ongoing challenge to meeting timeliness objectives, according to agency officials. For example, officials representing the Departments of Homeland Security, Energy, the Treasury, and Justice and four DOD component agencies that use OPM as their investigative service provider cited deficient investigative reports as a factor that slows agencies' ability to make adjudicative decisions. The quality and completeness of investigative reports directly affect adjudicator workloads, including whether additional steps are required before adjudications can be made, as well as agency costs. For example, some agency officials we spoke with noted that OPM investigative reports do not include complete copies of associated police reports and criminal record checks. According to ODNI and OPM officials, OPM investigators provide a summary of police and criminal reports, and OPM asserts that there is no policy requiring inclusion of copies of the original records. However, ODNI officials also stated that adjudicators may want or need entire records because critical elements may be left out of summaries. For example, according to Defense Office of Hearings and Appeals officials, in one case an investigator's summary of a police report incorrectly identified the subject as a thief when the subject was actually the victim. If the Defense Office of Hearings and Appeals had access to actual police documents, officials believe the adjudication process would be more efficient. We noted in our prior work, based on an independent review of about 3,500 investigative reports, that documentation was incomplete for most OPM-provided investigative reports. We also noted in our previous work that incomplete investigative documentation may increase the time it takes to complete the clearance process and the overall costs of the process. Several agency officials stated that, in order to avoid further costs or delays, they often choose to perform additional steps internally to obtain missing information, clarify or explain issues identified in investigative reports, or gather evidence for issue resolution or mitigation.

Finally, a significant challenge to meeting timeliness objectives that is specific to Intelligence Community agencies involves addressing the requirements unique to these agencies. For example, since most positions in Intelligence Community agencies require top secret clearances with Sensitive Compartmented Information access, intelligence agencies rely almost exclusively on the Single Scope Background Investigations required for these types of clearances. These investigations have higher requirements for the types and numbers of information sources than investigations for secret or confidential clearances, and according to agency officials, they take longer, on average, to investigate and adjudicate. In addition, timeliness for intelligence agencies is often complicated by the unique issues presented by extensive suitability determination processes and precise conditions of employment that may include medical exams, psychological evaluations, drug testing, and polygraph exams. Polygraph exams, for example, may generate additional leads that require further investigative work.
Moreover, agency officials also stated that scheduling a polygraph with an individual, especially one who lives far from agency offices, may add months to investigation timelines. While the Performance Accountability Council, which is responsible for driving implementation of the reform effort and ensuring accountability, has taken steps to assist in implementation of reform efforts, it has not reported on the impediments to meeting timeliness objectives or on plans to address those impediments. IRTPA's security clearance reform provision requires annual reports to the appropriate congressional committees—through 2011—on the progress made during the preceding year toward meeting its requirements, including timeliness data and a discussion of any impediments to the smooth and timely functioning of its requirements. However, in its most recent report to Congress, the Performance Accountability Council did not provide information on the impediments agencies face in meeting timeliness objectives or on plans to address them. While the Office of the Director of National Intelligence, in its capacity as Security Executive Agent, has performed a limited number of oversight audits, officials at four agencies we met with said that the Performance Accountability Council has not met with them to identify the impediments to meeting the timeliness objectives. The Performance Accountability Council has focused its efforts on DOD, in part because of the designation of DOD's security clearance program as one of GAO's high-risk areas and in part because DOD clearances constitute the overwhelming majority of initial clearance cases processed annually. We found that, because of the relative size of DOD's clearance program, DOD's progress toward meeting IRTPA's timeliness objectives is a significant factor in reducing the average time required for initial security clearance processing for the government as a whole. Furthermore, Performance Accountability Council officials stated that they would begin conducting one-on-one meetings with individual agencies in September 2010 to enhance communication, assist in implementation planning, and provide a feedback mechanism for agency stakeholders to communicate information and needs to the Joint Reform Team. The Performance Accountability Council is in a position to identify certain trends and commonalities, such as the challenges related to resource constraints, manpower limitations, information sharing, investigation services quality and cost, and Intelligence Community-specific issues. However, absent complete reporting on the impediments to meeting timeliness objectives, Congress may not have visibility over agency compliance, and decision makers may not have a complete picture of the progress made or the impediments that remain. This lack of transparency and accountability may hamper continued efforts to improve timeliness and prevent scrutiny of agencies that are not meeting timeliness objectives. Officials representing executive branch agencies, including those within the Intelligence Community, stated that they routinely grant reciprocity (i.e., accept a background investigation or clearance determination completed by another authorized investigative or adjudicative agency). IRTPA generally requires that all security clearance investigations and determinations be accepted by all agencies, with limited exceptions when necessary for national security purposes.
We have reported in the past that, according to the government's plan for addressing problems in the personnel security clearance process, security clearances are not fully accepted governmentwide. A recent congressional committee report also suggests that even among the elements of the Intelligence Community, there are impediments and sometimes lengthy delays in granting clearances to employees detailed from one agency to another. However, in October 2008, ODNI issued guidance on the reciprocity of personnel security clearances. The guidance requires, except in limited circumstances, that all Intelligence Community elements "accept all in-scope security clearance or access determinations." Further, OMB guidance requires agencies to honor a clearance when (1) the prior clearance was not granted on an interim or temporary basis; (2) the prior clearance investigation is current and in-scope; (3) there is no new derogatory information; and (4) there are no conditions, deviations, waivers, or unsatisfied additional requirements (such as polygraphs) if the individual is being considered for access to highly sensitive programs. Moreover, officials representing two agencies in our review noted that it is in their best interest to accept a prior clearance because reciprocity saves time, money, or manpower. Although officials agreed that they routinely honor another agency's security clearance, we found that some agencies find it necessary to take additional steps to address limitations with available information. Officials representing 18 of the 21 organizations we met with to discuss reciprocity reported that they must address limitations, such as insufficient information in the databases or variances in the scope of investigations, before granting reciprocity. For example:

Insufficient information. Although there is no single, integrated database, security clearance information is shared among OPM, DOD, and, to some extent, Intelligence Community databases. OPM has taken steps to ensure that certain clearance data necessary for reciprocity are available to adjudicators. For example, in April 2010, OPM held an interagency meeting to determine new data fields to include in its shared database to more fully support reciprocity. However, we found that the shared information available to adjudicators contains summary-level detail that may not be complete. As a result, agencies may take steps to obtain additional information, which creates challenges to immediately granting reciprocity. For example, to accept a clearance granted by an intelligence agency, a non-intelligence agency must access information from the intelligence agencies' Scattered Castles database. However, according to officials representing the Department of the Treasury, the Department of Justice, and the Joint Chiefs of Staff, the Scattered Castles database does not always provide enough detail to immediately grant reciprocity. According to these officials, the Scattered Castles summary screen is not detailed enough or does not include key information, such as the steps taken to mitigate negative issues. As a result, additional information, such as copies of the original background investigation, must be sought directly from intelligence agencies to verify and supplement the information available in Scattered Castles. Similarly, to accept a clearance granted by a non-intelligence agency, an intelligence agency must access information from non-intelligence agency databases.
Officials representing Intelligence Community agencies with whom we spoke noted, for example, that they must contact DOD to determine whether an actual clearance was granted and to verify the current status of the applicant because such detail is not available in DOD's Joint Personnel Adjudication System. Similarly, officials representing the Department of Justice told us that while OPM's Central Verification System shows the existence of conditions, deviations, and waivers, Department of Justice officials follow up as appropriate with the agency that granted the clearance.

Variances in the scope of investigations. We found that the scope of background investigations varies by level of clearance, which may lead to duplicative work. For example, a person with a secret-level clearance may have had one of several types of background investigations, and the scope of the background investigation may vary depending on the type of clearance sought. Further, officials from two agencies we spoke with told us that they typically require a certain type of background investigation, and when a subject's clearance is based on a different type of investigation, they may take additional steps to fill in the gaps and ensure the scope is consistent with their expectations. Officials representing other agencies included in our review told us that when the subject's existing background investigation is different from the required investigation type, the agency will request a new background investigation. For example, officials at one agency stated that positions of public trust sometimes have higher suitability information requirements than the information available from confidential/secret background investigations. Similarly, officials at another agency stated that because there are two types of investigations for secret/confidential clearances, depending on whether the person is military, a contractor, or a government civilian, the agency may not be able to accept an investigation if it is the wrong one for that particular position. We found that when an entirely new investigation is performed, the current system may lead to duplicative work, limiting reciprocity. In a 2008 report to the President, the Joint Reform Team, under the Performance Accountability Council, proposed revised investigative standards to, among other things, reduce the types of initial investigations from 15 to 3. Although the revised Federal Investigative Standards were originally planned for release in December 2010, the Performance Accountability Council has extended the planned issuance to calendar year 2011. In addition to addressing limitations with available information, agency officials identified broader challenges to granting reciprocity. Officials representing 14 of the 21 agencies included in our review of reciprocity reported that challenges, such as the need to conduct suitability determinations or to determine whether a prior clearance investigation and adjudication meets their quality expectations, must be addressed before granting reciprocity. For example:

Conducting suitability determinations. All federal agencies may be required to conduct basic suitability determinations to ensure the applicant's character or conduct is appropriate for the position in question, but some agencies take additional actions to determine suitability before they reciprocate a security clearance.
For example, the Department of Justice must take steps to ensure that applicants for jobs with the Drug Enforcement Administration have not used drugs, according to agency officials. Similarly, the Intelligence Community requires a polygraph evaluation, among other things, to determine suitability for most positions, according to intelligence officials. We also found that agencies have varying standards for determining the suitability of applicants before reciprocating a security clearance. For example, Department of Health and Human Services officials said the department will not accept a prior security clearance until it makes a favorable determination of suitability. Similarly, the Department of Justice will only accept another agency's clearance and hire the applicant on a probationary period pending a favorable suitability determination. As a result of the variances in determining suitability, OPM, as the Suitability Executive Agent for all executive agencies, and the Joint Reform Team have issued guidance in line with Executive Order 13488, which mandates, to the extent practicable and with certain exceptions, reciprocal recognition of prior favorable suitability determinations. For example, OPM issued a memorandum for the Heads of Executive Departments and Agencies that explains how to implement the executive order.

Determining whether a prior clearance investigation and adjudication meets standards. Most agency officials we spoke with stated that since there is no governmentwide standardized training and certification process for investigators and adjudicators, a subject's prior clearance investigation and adjudication may not meet the standards of the inquiring agency. Although OPM has developed some training, security clearance investigators and adjudicators are not required to complete a certain type or number of classes. As a result, the extent to which investigators and adjudicators receive training varies by agency. For example, according to ODNI officials, all DOD adjudicators working at DOD Central Adjudication Facilities must take a basic 2-week adjudicator course and, after some time on the job, a 1-week advanced course. However, according to officials we spoke with, the Air Force has an additional requirement for adjudicators to attend a 3-week training course, while the Defense Industrial Security Clearance Office relies on on-the-job training. Other agencies have different requirements. For example, the Department of Energy relies on a mandatory annual security refresher. Consequently, as we have previously reported, agencies are reluctant to be accountable for investigations and/or adjudications conducted by other agencies or organizations. To achieve fuller reciprocity, clearance-granting agencies seek to have confidence in the quality of prior investigations and adjudications. The annual reports to Congress indicate that the Performance Accountability Council is taking steps to make investigations and adjudications more consistent across the government by standardizing the training of investigators and adjudicators. For example, the reports describe the development of core courses, as well as a formalized certification for investigators and adjudicators. According to senior leaders of the reform effort, these steps will facilitate reciprocal acceptance of clearance decisions governmentwide.
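At its core, the OMB reciprocity guidance described earlier reduces to a small set of yes/no checks. The Python sketch below encodes those four conditions; the record fields and example values are hypothetical, and real determinations involve adjudicator judgment and agency-specific suitability steps that no such checklist captures.

```python
# Sketch of the four OMB conditions for honoring a prior clearance:
# not interim/temporary, current and in-scope investigation, no new
# derogatory information, and no unsatisfied conditions, deviations,
# or waivers. Field names and values are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class PriorClearance:
    interim: bool                   # granted on an interim or temporary basis?
    investigation_in_scope: bool    # investigation current and in-scope?
    new_derogatory_info: bool       # new derogatory information since grant?
    unsatisfied_requirements: bool  # e.g., pending polygraph for sensitive programs

def may_honor(prior: PriorClearance) -> bool:
    """Apply the four OMB conditions; judgment calls are not modeled."""
    return (not prior.interim
            and prior.investigation_in_scope
            and not prior.new_derogatory_info
            and not prior.unsatisfied_requirements)

print(may_honor(PriorClearance(False, True, False, False)))  # True
print(may_honor(PriorClearance(False, False, False, False)))  # False: out of scope
```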
Although agency officials have stated that reciprocity is regularly granted, agencies do not have complete records on the extent to which previously granted security clearance investigations and adjudications are honored governmentwide. While the Performance Accountability Council has identified reciprocity as a governmentwide strategic goal, we found that agencies do not consistently and comprehensively track when reciprocity is granted and lack a standard metric for tracking it. For example, Department of Justice and Department of Energy officials said they track both when reciprocity is granted and the reasons for denying a previously granted security clearance, while Navy and Department of the Treasury officials said they document only when reciprocity is granted, not when it is denied. The Navy checks a box in its electronic database, and the Department of Energy and the Department of the Treasury manually track when reciprocity is honored. In contrast, the Army and Air Force do not track reciprocity at all, according to agency officials. Moreover, it is unclear to what extent agencies that do track reciprocity report the data to oversight agencies, such as ODNI, or share information on reciprocity with each other. OPM and the Performance Accountability Council have developed quality metrics for reciprocity, but the metrics do not measure the extent to which reciprocity is being granted. We previously reported that developing metrics for assessing and regularly monitoring all aspects of the clearance process could add value in current and future reform efforts as well as supply better information for greater congressional oversight. While the existing metrics are a positive step, more is needed to comprehensively capture the extent to which reciprocity is being granted. For example, OPM created a metric in early 2009 to track reciprocity, but it captures only limited information: the number of investigations requested from OPM that are rejected based on the existence of a previous investigation. It does not track the number of cases in which reciprocity was or was not successfully honored. The Performance Accountability Council developed quality metrics, including metrics to track reciprocity, in response to a March 2010 congressional inquiry. For example, the Performance Accountability Council proposes as a metric the average percentage of cases, as reported by executive branch agencies, for which prior database checks are conducted. However, this metric does not account for agencies that checked other databases, and it relies on agency self-reporting rather than a systematic method of data collection. Although the metric helps to create an overall picture of reciprocity, it does not track which cases were and were not reciprocated. Similarly, the other metrics included in the Performance Accountability Council's proposal, such as the number of duplicate requests for investigations, the percentage of applications submitted electronically, the number of electronic applications submitted by applicants but rejected by OPM as unacceptable due to missing information or forms, and the percentage of fingerprint submissions determined to be "unclassifiable" by the Federal Bureau of Investigation, provide useful information but do not track the extent to which reciprocity is or is not ultimately honored.
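To make concrete what more comprehensive tracking could look like, the sketch below (ours, not a metric adopted by OPM or the Performance Accountability Council) shows how per-case records of reciprocity decisions could yield both a reciprocity rate and a tally of denial reasons, the two pieces of information the current metrics do not capture. All field names and sample records are hypothetical.

```python
# Illustrative sketch only: a hypothetical per-case reciprocity metric.
# Field names and sample records are invented for demonstration.
from collections import Counter

cases = [
    {"prior_clearance": True,  "honored": True,  "denial_reason": None},
    {"prior_clearance": True,  "honored": False, "denial_reason": "investigation out of scope"},
    {"prior_clearance": True,  "honored": False, "denial_reason": "unfavorable suitability determination"},
    {"prior_clearance": False, "honored": None,  "denial_reason": None},  # no prior clearance to honor
]

# Only cases with a previously granted clearance are candidates for reciprocity.
eligible = [c for c in cases if c["prior_clearance"]]
honored = sum(1 for c in eligible if c["honored"])
rate = honored / len(eligible) if eligible else 0.0

# Tally the reasons reciprocity was denied, mirroring the tracking that
# Department of Justice and Department of Energy officials described.
denials = Counter(c["denial_reason"] for c in eligible if not c["honored"])

print(f"Reciprocity honored in {honored} of {len(eligible)} eligible cases ({rate:.0%})")
for reason, count in denials.most_common():
    print(f"  denied ({count}): {reason}")
```

Aggregated across agencies, records of this kind would show both halves of the picture: how often reciprocity is granted and, when it is not, why.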
Tasked with establishing a single, integrated database, the executive branch has opted to focus on leveraging existing systems rather than establish a new database. IRTPA required that, not later than 12 months after the date of enactment of the act, the Director of the Office of Personnel Management and the Director of the Office of Management and Budget establish and commence operating and maintaining a single, integrated database of security clearance information. This database was to house information regarding the granting, denial, or revocation of security clearances or access pertaining to military, civilian, and contractor personnel, from all authorized investigative and adjudicative agencies. Information from this database would be used to validate whether a person has or had a clearance, potentially including such information as the type of investigation that was conducted and the date of the investigation, thereby assisting responsible officials in determining whether a new investigation is required. However, according to our analysis of a series of recent reports that the Joint Reform Team, under the Performance Accountability Council, issued between 2008 and 2010, the Performance Accountability Council is not pursuing a single, integrated database. For example, according to the Enterprise Information Technology Strategy, the Performance Accountability Council has opted to pursue an approach that leverages existing systems and involves the development of new tools when necessary. According to the Strategic Framework, which was included with the most recent annual report to Congress, the reform efforts are focused on leveraging OPM's existing system—the Central Verification System—to enable access to records on investigations and adjudications. Agency officials from both OPM and ODNI confirmed that there are no plans to create a new single, integrated database. Instead, the agencies intend to address the IRTPA requirement through a single search capability across existing databases. According to an OPM official with whom we spoke, a single database would not provide any additional functionality beyond the single search capability being pursued. OPM, DOD, and ODNI officials with whom we spoke explained that establishing, operating, and maintaining a single, integrated database is not a viable option because of concerns related to privacy, security, and data ownership. First, DOD and OPM officials cited privacy concerns, which involve the unintentional disclosure of personal identifying information, such as name and Social Security number. Second, merging the different systems into one database raises security concerns. For example, according to an ODNI official, because the Intelligence Community's database is classified and separate from the databases used by non-intelligence agencies, even an aggregation of unclassified information from its database could lead to unintentional disclosure of personal identifying information that could compromise security. Moreover, breaches in the system could also compromise security; for example, some officials mentioned an enhanced threat from hackers if multiple information technology systems were consolidated. Finally, according to DOD officials, there are issues related to data ownership and the copying and transferring of information between systems that are owned by different agencies.
For example, according to OPM officials, OPM cannot provide another agency with information from investigations it did not conduct. When investigations are conducted by agencies with delegated authority, the reports are owned and maintained by the investigating agency. Requests for these investigative records must be referred to the owning agency. Although there are no plans to create a new governmentwide database, the non-intelligence agencies in our review are sharing information about personnel who hold or are seeking security clearances through two main databases that can be accessed through a single entry point. Two primary databases are used by non-intelligence agencies to store investigative and adjudicative information, and according to the Performance Accountability Council, these databases account for decisions on about 90 percent of all security clearance holders in the federal government. Data are stored in either OPM's Central Verification System or DOD's Joint Personnel Adjudication System. The Central Verification System includes security clearance data for all non-Intelligence Community, non-DOD executive branch agencies. The Joint Personnel Adjudication System is a repository for security clearance information on both DOD civilian and military personnel, as well as determinations of contractor clearance eligibility and access for the National Industrial Security Program. Data from the two databases can be searched and obtained from a single entry point in the Central Verification System. The Central Verification System was upgraded in spring 2010 and now provides access to more information than was previously accessible. Specifically, the upgraded system provides users with a summary of information on: Characteristics of clearances reported to the system. This summary includes information on active, inactive, and denied clearances, as well as information on whether there is a condition, deviation, or waiver. Characteristics of investigations reported to the system. This summary includes information on pending, closed, and discontinued investigations, as well as requests that were deemed unacceptable due to inadequate or inaccurate information. Suitability and fitness. This summary provides information on adjudication decisions regarding suitability for federal employees and fitness determinations for excepted service and contract employees. Homeland Security Presidential Directive-12 Personal Identity Verification credentials. This summary includes information on the status of credentials issued to the subject, indicating whether the credentials are active, suspended, revoked, administratively withdrawn, or other. Polygraph data. This summary includes information on the type of polygraph conducted, including Counter-Intelligence or Expanded Scope, but does not include results from examinations. According to ODNI officials and Intelligence Community Directive 704, the Intelligence Community agencies share information with one another through a separate classified database known as Scattered Castles. Scattered Castles is a repository for records from all intelligence agencies, to which each agency uploads relevant information from its own databases. All personnel who have access to Sensitive Compartmented Information are listed in Scattered Castles. This system is not linked to OPM's Central Verification System due to concerns about protecting classified information.
According to ODNI officials, the system has not been linked to non-intelligence databases due to the need to protect information on covert personnel. However, officials representing Intelligence Community agencies stated that they do enter some information from the Joint Personnel Adjudication System into Scattered Castles. Although the Intelligence Community maintains a separate database, we found that most of the non-intelligence agencies included in our review had some access to Scattered Castles. For example, five non-intelligence, non-DOD agencies included in our review had some access through a Sensitive Compartmented Information Facility located at their agencies. All of the military departments, as well as the Joint Chiefs of Staff, also had some access. Moreover, according to agency officials, when DOD collocates all of its clearance adjudication facilities at Fort Meade, Maryland, in 2011 as part of the DOD base realignment and closure process, DOD adjudicators with the appropriate clearance and need to know will have access to a Sensitive Compartmented Information Facility with access to Scattered Castles. According to Performance Accountability Council officials, the Performance Accountability Council is participating in an effort to explore ways to enhance information sharing between the Intelligence Community agencies and the non-intelligence agencies. A working group has been established to study alternatives to support a single access point from which to search clearance information, and it plans to complete its review in December 2010. According to an ODNI official, alternatives currently being considered include a help desk staffed with employees from the Intelligence Community who would have access to the Joint Personnel Adjudication System, the Central Verification System, and Scattered Castles and could, upon request, provide the results of Scattered Castles searches to non-Intelligence Community agencies. Continued personnel security clearance reform relies on strong, committed executive leadership to sustain the momentum created by the current reform effort. This type of leadership commitment, in turn, helps provide oversight and accountability for the improvement processes. Key to these efforts has been the Performance Accountability Council, which has provided direction for clearance reform across the federal government. As a result of the Performance Accountability Council's actions, federal agencies have made progress in moving closer to the objectives and requirements outlined in IRTPA. Under the Performance Accountability Council's leadership, timeliness data—particularly at DOD—have improved, steps have been taken to improve information sharing, and there has been a focus on honoring reciprocity of existing clearances. However, while agencies are moving closer to meeting the objectives and requirements of IRTPA, continued oversight and accountability for personnel security clearance reform are still needed. Specifically, executive branch agencies that are currently not meeting timeliness objectives may need help in identifying challenges and developing plans with appropriate timelines to overcome these obstacles. The recent activities undertaken by the Performance Accountability Council to assist the agencies in developing plans to implement the reformed approach are a step in the right direction. Continued reporting required by the Intelligence Authorization Act for Fiscal Year 2010 will also help ensure that the momentum gained through the reform efforts continues.
However, without developing more comprehensive metrics to track reciprocity, executive branch agencies will not have a complete picture of the degree to which reciprocity is honored. To improve the overall personnel security reform efforts across the federal government, we recommend that the Deputy Director of Management, Office of Management and Budget, in the capacity of Chair of the Performance Accountability Council, take the following two actions. First, collaborate with the agencies that are not meeting timeliness objectives to (1) identify challenges to timeliness; (2) develop mitigation strategies to enable each agency to comply with the IRTPA timeliness objectives; (3) set timelines for accomplishing the required actions; (4) monitor agency progress; and (5) report on these plans and progress in the annual reports to Congress. Second, develop comprehensive metrics to track reciprocity and then report the findings from the expanded tracking to Congress. We provided a draft of our report to OMB, ODNI, and OPM. In response to this draft, we received oral comments from OMB and written comments from ODNI and OPM. All three agencies concurred with all of our recommendations. OMB, ODNI, DOD (through ODNI), and OPM also provided us with technical comments, which we incorporated in this report, as appropriate. ODNI's and OPM's written comments are reprinted in their entirety in appendixes II and III, respectively. In oral comments, OMB generally concurred with both of our recommendations directed to OMB's Deputy Director of Management in the capacity of Chair of the Performance Accountability Council. OMB noted the report's thoroughness and that it highlighted the significant progress that has been made to improve the timeliness of security clearance determinations. In response to our recommendations, OMB described some of the steps that the Performance Accountability Council was taking to address them. Regarding our first recommendation, OMB noted that the Performance Accountability Council was committed to the timeliness and reciprocity goals of IRTPA and that it was taking steps to assist the agencies currently not meeting the IRTPA timeliness goals. Regarding our second recommendation, to develop additional performance measures to track reciprocity, OMB stated that the Performance Accountability Council is working to develop these additional metrics. In written comments, ODNI and OPM both noted the significant overall progress that has been made in the reform efforts. Specifically, ODNI noted that DOD, which holds the majority of clearances, achieved timeliness goals for adjudications for fiscal year 2010. As we noted in our report, significant overall progress has been made, largely attributable to DOD because the department represents a vast majority of the initial clearances. In agreeing with and providing comments related to our recommendations, ODNI described a number of ongoing and future actions. For example, ODNI stated that it is working through the Joint Reform Team to assist executive agencies that are not meeting IRTPA objectives to develop mitigation strategies and will report these strategies to Congress in its February 2011 IRTPA Annual Report.
Similarly, ODNI stated that it will continue to work with the Performance Accountability Council's Performance Management and Measures subcommittee to develop additional measures for reciprocity, timeliness, and quality, which will also be included in its annual report to Congress. We are encouraged to see a continued commitment by executive leaders of the security clearance reform effort, and if implemented in accordance with our recommendations, ODNI's actions appear to be a positive step toward sustaining the momentum of security clearance reform. In addition to agreeing with our recommendations, OPM made four specific comments. First, OPM commented on the timeliness data provided by the Performance Accountability Council that we used to assess agency compliance with IRTPA timeliness objectives. Specifically, OPM stated in its written comments that some of the Performance Accountability Council's timeliness data for the second and third quarters of fiscal year 2010 differ from the data that OPM collects and reports to the Performance Accountability Council. We acknowledge that in some instances there are discrepancies between the timeliness data provided by OPM and the timeliness data that the Performance Accountability Council provided to us. In some cases, OPM asserts that timeliness data for investigations are marginally better than the Performance Accountability Council reported and, in other instances, marginally worse. However, none of the discrepancies reported by OPM affects our findings regarding agency compliance with IRTPA timeliness objectives for the period reported; the agencies noted in figure 2 continue either to meet or to miss the IRTPA timeliness goals. As we note in our methodology, for the purposes of this report, we ultimately selected and relied on data provided by the Performance Accountability Council's Subcommittee on Performance Management and Measures. The Performance Accountability Council is responsible for collecting and reporting agency timeliness data to Congress and providing oversight to agencies regarding the timeliness of personnel security clearance processes. The data were provided by the Performance Accountability Council in August 2010. We conducted a series of data reliability interviews with knowledgeable officials from the Performance Accountability Council's subcommittee and concluded that the data provided were sufficiently reliable for our purposes. Second, OPM commented on a section of our report concerning investigation services quality and cost. Specifically, OPM noted in its comments that some policies are ambiguous and that customers misperceive the sufficiency of OPM investigations. Further, OPM noted that there is no policy requiring police and criminal records to be included in its investigative reports. ODNI provided a similar technical comment, and we made changes, as appropriate, to reflect this point. However, regarding OPM's comment on agency misperceptions of the sufficiency of OPM investigations, we spoke with several agencies that, as we note in our report, cited challenges related to deficient investigative reports provided by OPM. According to these agencies—including DOD, which accounts for the vast majority of personnel security clearances in the federal government—the deficiencies in investigative reports slow their ability to make adjudicative decisions.
In fact, as we note in this report, our prior work found, based on an independent review of about 3,500 investigative reports provided to DOD, that documentation was incomplete for most OPM-provided investigative reports. Third, OPM suggested modifications to our discussion of quality metrics on reciprocity. In its comments, OPM noted that some of the metrics may have been developed prior to the Performance Accountability Council's response to a March 2010 congressional inquiry. We disagree with OPM's characterization of the accuracy of this section and with its suggested modification for two reasons: (1) the Performance Accountability Council submitted proposed metrics to Congress in May 2010 in response to the congressional inquiry we note above, and our evidence is derived from this letter to Congress, for which the Performance Accountability Council—including OPM—and GAO are signatories; and (2) OMB, in its capacity as Chair of the Performance Accountability Council, stated in its technical comments that referring to the proposed metrics as originating from the Performance Accountability Council was appropriate. Finally, in its comments pertaining to data ownership, OPM noted that it cannot provide another agency with information from investigations that it did not conduct and does not own. Instead, OPM noted that requests for these investigative records must be referred to the owning agency. As a result, we incorporated changes based on this comment, as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 12 days from the report date. We will then send copies of this report to the Senate Appropriations Committee, Senate Committee on Homeland Security and Governmental Affairs, Senate Select Committee on Intelligence, Senate Armed Services Committee, House Appropriations Committee, House Oversight and Government Reform Committee, and House Armed Services Committee and to members of the Performance Accountability Council, including the Director of the Office of Management and Budget, the Director of National Intelligence, the Secretary of Defense, and the Director of the Office of Personnel Management. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. In conducting our review of the ongoing efforts to reform the personnel security clearance process, the scope of work included the Office of Personnel Management (OPM), the Department of Defense (DOD), and the Office of the Director of National Intelligence (ODNI) as members of the Performance Accountability Council. Our review included select members of the Intelligence Community, including the Under Secretary of Defense for Intelligence, the Central Intelligence Agency, Defense Intelligence Agency, Federal Bureau of Investigation, National Geospatial-Intelligence Agency, National Reconnaissance Office, National Security Agency, and the Department of State. Our review also included six additional executive branch agencies: the Departments of Energy, Health and Human Services, Homeland Security, Justice, the Treasury, and Veterans Affairs.
These agencies were selected based on the volume of initial personnel security clearances they process per year for civilian, military, and industrial personnel and on their use of OPM to conduct background investigations. To assess the overall personnel security clearance reform efforts, as well as each of our objectives, we obtained relevant documentation and interviewed key federal officials from the following organizations: the Office of Personnel Management; the Department of Defense, including the Office of the Under Secretary of Defense for Intelligence, the Department of the Army Central Clearance Facility, the Department of the Navy Central Adjudication Facility, the Department of the Air Force Central Adjudication Facility, the Defense Industrial Security Clearance Office Central Adjudication Facility, the Defense Office of Hearings and Appeals, the Defense Personnel Security Research Center, the Joint Chiefs of Staff, and Washington Headquarters Services; the Office of the Director of National Intelligence; the Department of Health and Human Services; the Department of Homeland Security; the Department of the Treasury; and the Department of Veterans Affairs. We conducted a roundtable discussion with members of the Intelligence Community, including officials from the Office of the Director of National Intelligence, the Under Secretary of Defense for Intelligence, the Central Intelligence Agency, Defense Intelligence Agency, Federal Bureau of Investigation, National Geospatial-Intelligence Agency, National Reconnaissance Office, National Security Agency, and the Department of State, to discuss broader challenges the Intelligence Community faces regarding timeliness, information sharing, and reciprocity. To assess the extent to which executive branch agencies investigate and adjudicate initial personnel security clearance applications in a timely manner, we analyzed the timeliness objectives specified in the Intelligence Reform and Terrorism Prevention Act of 2004 (IRTPA) and reviewed the self-reported timeliness data contained in the Performance Accountability Council's Security and Suitability Process Reform Strategic Framework, provided by the Performance Accountability Council Subcommittee on Performance Management and Measures for the first three quarters of fiscal year 2010. The Performance Accountability Council provided these data in August 2010. Further, we obtained and reviewed timeliness data provided by OPM for agencies that use OPM as their investigative service provider for the first three quarters of fiscal year 2010. While IRTPA sets timeliness objectives for 90 percent of cases, the Performance Accountability Council excludes certain cases from its analysis before calculating and reporting on agency timeliness. For example, OPM officials stated that cases that are returned to OPM for additional work, such as work to address missing scope items, are excluded from timeliness data. Due to the additional investigative work involved with these cases and the additional time required for agencies to negotiate the terms of the requests, these cases take longer to complete. Furthermore, the Performance Accountability Council excludes certain cases involving industrial personnel. DOD's Defense Industrial Security Clearance Office adjudicates clearances for industrial personnel. When the Defense Industrial Security Clearance Office cannot mitigate issues and has decided that a denial or revocation is warranted, it submits the cases to the Defense Office of Hearings and Appeals.
Timeliness information on cases pending with the Defense Office of Hearings and Appeals is excluded from DOD's timeliness data. Moreover, by not including end-to-end timeliness information on cases that require additional work in the query for calculating the fastest 90 percent of cases, the Performance Accountability Council is excluding many of the cases that took the longest to complete; therefore, the reported averages for agency timeliness may be understated. We assessed the reliability of the data by reviewing the existing data and interviewing agency officials knowledgeable about how the data were collected, stored, and reported, as well as about the quality assurance steps that were taken to ensure completeness and accuracy. We determined these data were sufficiently reliable for the purposes of our audit. Additionally, we supplemented this data reliability analysis with information obtained through our interviews with executive branch agencies about their timeliness performance in fiscal year 2010 to date. These agencies were selected based on the volume of security clearances processed annually, among other things. To assess the extent to which executive branch agencies accept previously granted security clearances and the challenges, if any, that exist related to reciprocity, we reviewed the requirements specified in IRTPA and analyzed executive orders, OMB memorandums, ODNI policy guidance and directives, congressional reports, and individual agency guidance related to reciprocity. We also analyzed existing and planned metrics developed by the Performance Accountability Council to track the extent to which reciprocity is honored. We met with security officials, managers, and adjudicators from DOD, the Intelligence Community, and a nonprobability sample of additional executive branch agencies. We supplemented this analysis with information obtained from a roundtable discussion that we conducted with representatives of Intelligence Community agencies to examine the challenges these agencies face in granting reciprocity. Because the scope of this engagement is limited to security clearances, we did not analyze the extent to which agencies reciprocally accept prior suitability investigations and adjudications. For the purposes of our report, reciprocity is an agency's acceptance of a background investigation or clearance determination completed by another authorized investigative or adjudicative agency. We excluded from the scope of our work issues related to access to facilities, detailed employees, or classified information. To assess the extent to which executive branch agencies share personnel clearance information in a single, integrated database, we reviewed and analyzed the Joint Reform Team's Enterprise Information Technology Strategy, the Performance Accountability Council's 2010 Strategic Framework, and the two most recent Joint Reform Team Security and Suitability Process Reform reports. We interviewed knowledgeable officials within OPM, ODNI, and DOD to determine what, if any, limitations, barriers, or challenges existed in creating a single, integrated database. In addition, we received demonstrations of both OPM's and DOD's databases and interviewed officials to determine how they share information about personnel who hold or are seeking security clearances in the absence of a single, integrated database. We conducted this performance audit from October 2009 through November 2010 in accordance with generally accepted government auditing standards.
Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our objectives. In addition to the contact named above, Liz McNally (Assistant Director); David Moser (Assistant Director); James Ashley; Joseph M. Capuano; Sara Cradic; Cindy Gilbert; Linda Keefer; James Krustapentus; Greg Marchand; Richard Powelson; Jillena Roberts; and Amie Steele made key contributions to this report. DOD Personnel Security Clearance Reform: Preliminary Observations on Timeliness and Quality. GAO-11-185T. Washington, D.C.: November 16, 2010. Privacy: OPM Should Better Monitor Implementation of Privacy-Related Policies and Procedures for Background Investigations. GAO-10-849. Washington, D.C.: September 7, 2010. Personnel Security Clearances: An Outcome-Focused Strategy and Comprehensive Reporting of Timeliness and Quality Would Provide Greater Visibility over the Clearance Process. GAO-10-117T. Washington, D.C.: October 1, 2009. Personnel Security Clearances: Progress Has Been Made to Reduce Delays but Further Actions Are Needed to Enhance Quality and Sustain Reform Efforts. GAO-09-684T. Washington, D.C.: September 15, 2009. Personnel Security Clearances: An Outcome-Focused Strategy Is Needed to Guide Implementation of the Reformed Clearance Process. GAO-09-488. Washington, D.C.: May 19, 2009. DOD Personnel Clearances: Comprehensive Timeliness Reporting, Complete Clearance Documentation, and Quality Measures Are Needed to Further Improve the Clearance Process. GAO-09-400. Washington, D.C.: May 19, 2009. High-Risk Series: An Update. GAO-09-271. Washington, D.C.: January 22, 2009. DOD Personnel Clearances: Preliminary Observations about Timeliness and Quality. GAO-09-261R. Washington, D.C.: December 19, 2008. Personnel Security Clearances: Preliminary Observations on Joint Reform Efforts to Improve the Governmentwide Clearance Eligibility Process. GAO-08-1050T. Washington, D.C.: July 30, 2008. Personnel Clearances: Questions for the Record Regarding Security Clearance Reform. GAO-08-965R. Washington, D.C.: July 14, 2008. Personnel Clearances: Key Factors for Reforming the Security Clearance Process. GAO-08-776T. Washington, D.C.: May 22, 2008. Employee Security: Implementation of Identification Cards and DOD's Personnel Security Clearance Program Need Improvement. GAO-08-551T. Washington, D.C.: April 9, 2008. DOD Personnel Clearances: Questions for the Record Related to the Quality and Timeliness of Clearances. GAO-08-580R. Washington, D.C.: March 25, 2008. Personnel Clearances: Key Factors to Consider in Efforts to Reform Security Clearance Processes. GAO-08-352T. Washington, D.C.: February 27, 2008. DOD Personnel Clearances: Improved Annual Reporting Would Enable More Informed Congressional Oversight. GAO-08-350. Washington, D.C.: February 13, 2008. DOD Personnel Clearances: DOD Faces Multiple Challenges in Its Efforts to Improve Clearance Processes for Industry Personnel. GAO-08-470T. Washington, D.C.: February 12, 2008. DOD Personnel Clearances: Delays and Inadequate Documentation Found for Industry Personnel. GAO-07-842T. Washington, D.C.: May 17, 2007. High-Risk Series: An Update. GAO-07-310. Washington, D.C.: January 2007. DOD Personnel Clearances: Additional OMB Actions Are Needed to Improve the Security Clearance Process. GAO-06-1070.
Washington, D.C.: September 28, 2006. DOD Personnel Clearances: Questions and Answers for the Record Following the Second in a Series of Hearings on Fixing the Security Clearance Process. GAO-06-693R. Washington, D.C.: June 14, 2006. DOD Personnel Clearances: New Concerns Slow Processing of Clearances for Industry Personnel. GAO-06-748T. Washington, D.C.: May 17, 2006. DOD Personnel Clearances: Funding Challenges and Other Impediments Slow Clearances for Industry Personnel. GAO-06-747T. Washington, D.C.: May 17, 2006. Questions for the Record Related to DOD's Personnel Security Clearance Program and the Government Plan for Improving the Clearance Process. GAO-06-323R. Washington, D.C.: January 17, 2006. DOD Personnel Clearances: Government Plan Addresses Some Long-standing Problems with DOD's Program, But Concerns Remain. GAO-06-233T. Washington, D.C.: November 9, 2005. DOD Personnel Clearances: Some Progress Has Been Made but Hurdles Remain to Overcome the Challenges That Led to GAO's High-Risk Designation. GAO-05-842T. Washington, D.C.: June 28, 2005. High-Risk Series: An Update. GAO-05-207. Washington, D.C.: January 2005.
In light of long-standing problems with delays and backlogs, Congress mandated personnel security clearance reforms through the Intelligence Reform and Terrorism Prevention Act of 2004 (IRTPA). These included requirements related to timeliness, reciprocity, and the creation of a single database to house personnel security clearance information. In 2008, Executive Order 13467 established the Performance Accountability Council. GAO was asked to review the extent to which executive branch agencies (1) investigate and adjudicate personnel security clearance applications in a timely manner, (2) honor previously granted security clearances, and (3) share personnel security clearance information in a single, integrated database. GAO reviewed and analyzed Performance Accountability Council timeliness data for fiscal year 2009 and the first three quarters of fiscal year 2010. GAO also examined key clearance reform documents and conducted interviews with executive branch agencies, including members of the Intelligence Community, to discuss the three stated objectives. Significant overall progress has been made in improving the timeliness of investigations and adjudications of personnel security clearance applications. This is largely attributable to the Department of Defense (DOD), whose clearances comprise a vast majority of governmentwide initial clearances. IRTPA establishes an objective for all agencies to make a determination on at least 90 percent of all applications for a personnel security clearance within an average of 60 days. The majority of clearances are processed in line with the IRTPA 60-day objective. Certain agencies, however, continue to face challenges in meeting timeliness objectives. Of the 14 agencies included in GAO's review, DOD, the Department of Energy, and the National Geospatial-Intelligence Agency met the IRTPA 60-day timeliness objective in the first three quarters of fiscal year 2010. Timeliness among the other executive branch agencies ranged from 62 to 154 days. IRTPA and the recent Intelligence Authorization Act for Fiscal Year 2010 also require annual reporting on the progress made toward meeting objectives, including a discussion of impediments related to timeliness and quality. While the Performance Accountability Council has taken steps to assist in the implementation of reform efforts, it has not reported on the impediments to meeting timeliness objectives for the specific agencies not yet achieving this goal. Executive branch agency officials stated that they often honor previously granted personnel security clearances (i.e., grant reciprocity), but the true extent of reciprocity is unknown because governmentwide metrics do not exist. IRTPA generally requires that all personnel security clearance investigations and determinations be accepted by all agencies, with limited exceptions when necessary for national security purposes. Agency officials stated that they grant reciprocity, but some noted that they take steps to obtain additional information before doing so. For example, officials stated that they may request copies of background investigation reports before they will honor a security clearance because the information available in databases contains only limited, summary-level detail. Agency officials also reported that steps must be taken to conduct suitability determinations to ensure an applicant's character is appropriate for the position.
The extent to which reciprocity is occurring is unknown because no metrics exist to consistently and comprehensively track reciprocity. Although there are no plans to develop a single, integrated database, steps have been taken to upgrade existing systems and increase information sharing. The Performance Accountability Council has opted to leverage existing systems in lieu of the single, integrated database required by IRTPA. Officials assert that a single database is not a viable option due to concerns related to privacy, security, and data ownership. Therefore, a single search capability across existing databases is being used to address the IRTPA requirement. For example, information from two primary databases can now be accessed from a single entry point, allowing executive branch agencies to share clearance information with one another. The Intelligence Community agencies share information through a separate database. GAO recommends that the Performance Accountability Council collaborate with executive agencies to develop plans to improve timeliness at those agencies not yet achieving the 60-day objective and to develop comprehensive metrics to track reciprocity. In commenting on a draft of this report, the Performance Accountability Council concurred with all recommendations.
From 1962 through 1991, HHS' system for protecting human research subjects was created, piece by piece, largely in response to disclosures of dangerous or controversial biomedical and behavioral research. (See app. II for more historical information.) The tragic consequences of thalidomide use in the United States and the revelation of the Tuskegee syphilis study shocked the public and convinced national policymakers that unregulated biomedical research represented a clear threat to research subjects. Two expressions of this concern were the passage of the National Research Act and the promulgation of human subject protection regulations by the Department of Health, Education, and Welfare (HEW) in 1974. The act also established the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research to guide federal human subject protection policy. When the core of the human subject protection regulations was adopted by 15 other departments and agencies in 1991, it became known as the Common Rule. The Common Rule requires research institutions receiving federal support and federal agencies conducting research to establish committees to review research proposals for risk of harm to human subjects and to perform other duties to protect human research subjects. It also stipulates requirements related to informed consent—how researchers must inform potential subjects of the risks to which they, as study participants, agree to be exposed. (See fig. 1 for the basic elements of informed consent.) HHS regulations contain additional protections not included in the Common Rule for research involving vulnerable populations—namely, pregnant women, fetuses, subjects of in vitro fertilization research, prisoners, and children. In the late 1970s and early 1980s, HHS considered but did not adopt recommendations by two national commissions for specific regulations to protect institutionalized mentally disabled subjects.

Figure 1: Basic Elements of Informed Consent
- A statement stipulating that research is involved, what the purpose of the research is, what the duration of the subject's involvement will be, and what procedures the subject will undergo.
- A description of foreseeable risks or discomforts to the subject.
- A description of expected benefits, if any, to the subject and others.
- The disclosure of alternative procedures or courses of treatment.
- A statement describing the extent to which confidentiality of records identifying the subject will be maintained.
- For research that poses more than minimal risk to subjects, an explanation of the availability and nature of any compensation or medical treatment if injury occurs.
- Names of people to contact for further information about the research, the subjects' rights, and notification of research-related injury.
- A statement stipulating that participation is voluntary, that no penalties will be imposed for refusal to participate in research, and that the subject can choose to discontinue participation at any time.

Within the HHS oversight system, OPRR and FDA are the key federal entities overseeing compliance with informed consent and other human subject protection regulations. Both entities carry out oversight functions central to the operation of the human subject protection system, including policy setting, prevention, monitoring, and enforcement.
Institutional review boards (IRBs)—that is, review panels that are usually associated with a particular university or other research institution—are responsible for implementing federal human subject protection requirements for research conducted at or supported by their institutions. OPRR is located within the National Institutes of Health (NIH), the principal federal agency responsible for supporting biomedical and behavioral research. About one-half of OPRR's 28 full-time employees are responsible for overseeing protections in the approximately 16,000 HHS awards involving human subjects. The other half are devoted to ensuring the humane care and use of laboratory animals. Three physician volunteers augment OPRR's human subject protection staff. OPRR has an annual budget of $1.9 million, about one-half of which is targeted to human subject protection activities. FDA is responsible for protecting the rights of human subjects enrolled in research with products it regulates—drugs, medical devices, biologics, foods, and cosmetics. Our review focused on the oversight activities of FDA's Center for Drug Evaluation and Research (CDER), which carries out most of FDA's human subject protection activities. At CDER, responsibility for human subject protection activities is shared between the Office of Drug Evaluation and the Division of Scientific Investigations. The Office of Drug Evaluation reviews manufacturers' and researchers' requests to conduct drug studies on human subjects. The Division of Scientific Investigations reviews FDA's field inspection reports on IRBs and investigators and makes final determinations regarding compliance violations. Routine and for-cause on-site inspections are conducted by field staff, who are also responsible for examining the integrity of research data, assessing compliance with good manufacturing practices, and examining other issues related to FDA's oversight of all its regulated products. Within research institutions, oversight is done primarily by IRBs responsible for examining research proposals and ongoing studies. No data exist on the exact number of IRBs in the country, but estimates range from 3,000 to 5,000. Most are found at universities, hospitals, and private research facilities; a few are freestanding. Human subject research conducted by NIH itself, for example, is governed by the 14 IRBs of the NIH Intramural Research Program. IRBs are composed chiefly of scientists at their respective institutions. They are required to have a minimum of five members, at least one of whom is a scientist, one a nonscientist, and one a person not otherwise affiliated with the research institution. They are also required to have a diverse membership; in determining membership, consideration must be given to race, gender, and cultural background. The presence of local review bodies and federal oversight agencies appears to have heightened the awareness and sensitivity of the research community to the importance of respecting subjects' rights and welfare. Written commitments, which bind research institutions to comply with human subject protection requirements, are an important element of the protection system. By requiring individual researchers and IRBs to uphold their institution's commitments, the system works to prevent harm to participants in most experimental studies. However, the effectiveness of the HHS human subject protection regulations in ensuring compliance by institutions and individual researchers has not been systematically studied.
Research institutions must commit to uphold human subject protection requirements before engaging in research with human subjects conducted or funded by any of the departments or agencies that adopted the Common Rule. To be eligible to receive such funding, an institution must enter into a contract-like agreement, called an assurance. This is the written promise of an institution housing research studies to comply with federal ethical conduct standards. OPRR, the federal office within NIH that approves assurances for research funded by HHS, requires assurances to (1) include a statement of ethical conduct principles, (2) stipulate that a review board has been designated to approve and periodically review the institution’s studies, and (3) specify the review board’s membership, responsibilities, and process for reviewing and approving proposals. Assurances serve as one of the system’s chief preventive measures. OPRR’s authority to require assurances derives from the 1974 National Research Act, which formalized the practice of obtaining from institutions receiving HHS funding written assurances of their commitment to the ethical conduct of research. When the legislation was enacted, NIH had already developed assurance-type documents with many universities, which OPRR reviewed. Approving an assurance involves no site visits by OPRR to the institution; rather, negotiations are handled through correspondence and telephone calls with institution officials. OPRR assurances are of several types. Multiple project assurances are approved for universities and other major research centers that conduct a substantial number of studies and have demonstrated a willingness and the expertise to comply with human subject protection requirements. Through a multiple project assurance, an institution does not need to reapply through OPRR for eligibility to receive HHS funds for each new study approved by its IRB. An assurance covers the institution’s human subject studies for 3 years, at which time the institution must renew its assurance. Renewals are for a 5-year period. As a practical matter, multiple project assurances allow institutions to conduct research with no further OPRR involvement until the assurance is up for renewal. As of November 1995, 451 active OPRR multiple project assurances covered more than 500 research institutions. These institutions receive most of HHS’ funding for research with human subjects. Primary responsibility for negotiating all multiple project assurances in OPRR rests with a retired physician who used to be employed for this purpose by OPRR. Since retiring, she has continued this work on an unpaid, part-time basis. Currently, the assurance branch chief is responsible for approving all multiple project assurances OPRR negotiates. At institutions without a multiple project assurance, an assurance agreement must be negotiated with OPRR for each individual study. These are called single project assurances and require OPRR to review, for each study, documentation similar to that required for a multiple project assurance. In addition, OPRR reviews the study’s informed consent form before approving a single project assurance. As of November 1995, OPRR had 3,063 active single project assurances. Primary oversight of these assurances rests with three full-time staff in OPRR’s assurance branch. A third type of assurance—the cooperative project assurance—recognizes that research is frequently conducted at multiple sites under joint institutional sponsorship. 
One example is the National Surgical Adjuvant Project for Breast and Bowel Cancers, sponsored by the National Cancer Institute and conducted at over 300 sites. OPRR requires each participating institution to have a cooperative project assurance for all its joint research, regardless of other assurances held by the institution. For projects conducted under cooperative project assurances, OPRR designates reviewers to approve each research protocol and a prototype informed consent form. IRBs at the participating institutions must also approve the protocol and the informed consent document. IRBs can require additional explanations to be included in the informed consent document. However, they cannot modify the core elements of the protocol, which is to be consistent across all sites. Nor can they delete or substantially modify the discussion of risks and alternative treatments in the prototype consent document without notice and justification. As of November 1995, OPRR had 1,333 active cooperative project assurances. Assurance branch staff responsible for single project assurances also review cooperative assurances, with additional support from other OPRR staff and others. FDA also works to prevent human subject protection violations in the drug research it regulates. Before permitting drug research with human subjects, FDA requires researchers to submit a brief statement that they will uphold ethical standards and identify the institutional review board that will examine the study. Sponsors are required to provide the results of chemical and animal studies with the new drug, submit the proposed study procedures for using human subjects, and commit to ensuring that a properly constituted IRB will review the proposed study. FDA reviews this information to ensure the study poses no unacceptable risks to subjects, is ethically sound, and is likely to achieve the study objectives. FDA can request modifications to or reject proposals deemed to present unacceptable risk. FDA's prevention efforts overlap OPRR's if the drug study is supported by HHS funds. Both OPRR and FDA educate the research community on issues related to protecting human research subjects. Both respond directly to questions from individual researchers, IRBs, and institutional officials. They cosponsor about four human subject protection workshops annually across the country that are attended on a voluntary basis by IRB members, research institution officials, and researchers. OPRR also issues written guidance that defines terms and clarifies ambiguities in human subject protection requirements. OPRR may provide additional information to individual institutions during its negotiation of assurances. FDA also provides guidelines on informed consent, research proposal review, and recordkeeping to IRBs, research sponsors, and researchers. Federal officials and the research community alike commonly cite IRBs as a key line of defense protecting patients and healthy volunteers participating in research. Federal regulations authorize IRBs to approve, approve with modification, or withhold approval from new research projects. Researchers must get approval from the appropriate IRB associated with their institution before beginning research with human subjects. IRBs are required to review ongoing projects annually or more often, depending on the level of risk. HHS will not fund new human subject research or authorize ongoing research to continue without the local IRB's approval.
Specifically, IRBs are required to ensure that, for each project reviewed, risks are minimized and reasonable in relation to anticipated benefits, subjects are properly informed and give consent to participate, and the rights and welfare of subjects are otherwise maintained. IRBs are required to include scientists and nonscientists as members. IRBs must also consider gender, racial, and ethnic diversity in their membership selection in order to be sensitive to a broad range of social as well as scientific issues. IRB members are also expected to recognize that certain research subjects—such as children, prisoners, the mentally disabled, and individuals who are economically or educationally disadvantaged—are likely to be vulnerable to coercion or undue influence. The local nature of most IRBs enables members to be familiar with the research institution's resources and commitments, the investigators' capabilities and reputations, and the prevailing values and ethics of the community and subject population. In deciding whether to approve new research, IRBs are required to determine that a study's procedures are consistent with sound research design and do not unnecessarily expose subjects to risk. In addition, IRBs are required to examine the study investigators' efforts to obtain subjects' consent, including examining the informed consent document when applicable. They do this to ensure that the document specifies, in language and terminology the subject can understand, the procedures the subject will undergo, the risks to the subject, and the alternative treatments available, and that it makes explicit, among other things, the right of individuals to decline to participate in the study or to withdraw at any time. IRB members told us that they spend most of their time reviewing the informed consent document associated with a study. IRB reviews generally do not involve direct observation of the research study or of the process in which a subject's consent is obtained, however. As a result, IRBs must rely on investigators' and consent monitors' assessments of subjects' reading skills, fluency in English, and mental capacity. An IRB can authorize the use of a consent monitor to observe the delivery of informed consent, for example, when potential subjects might not have the mental capacity to understand all aspects of the consent process. IRBs are also required to review previously approved research periodically. The purpose of these continuing reviews is for IRBs to keep abreast of a study's potential for harm and benefit to subjects so that they can decide whether the study should continue. Principal investigators must therefore report adverse effects on study subjects, which allows the IRB to assess whether the seriousness of risk has changed. IRBs should also consider whether advances in knowledge or technology have occurred that would require reconsidering the appropriateness of the study's purpose or protocol. In addition, they should review such details as whether the number of subjects in the study corresponds to the number initially approved. No system of prevention is foolproof—indeed, FDA's and OPRR's monitoring identifies abuses and other evidence of noncompliance. Federal monitoring efforts for human subject protection violations include reviews of study documentation, IRB operations, and allegations of misconduct. Federal enforcement activities serve to stem further adverse consequences.
In fact, FDA officials, researchers, and drug industry representatives we interviewed told us that FDA's oversight of drug research motivates researchers and IRBs to follow proper human subject protection procedures. FDA monitors drug research for compliance with human subject protections. Through on-site inspections of IRBs, reviews of progress reports from researchers and sponsoring drug companies, and on-site inspections of clinical studies and investigators, FDA becomes aware of noncompliance with federal regulations. FDA officials told us that most institutions and researchers respond quickly and positively to inspection findings, and that the presence of an FDA inspection process deters human subject protection violations.

FDA's inspection of IRBs is its primary monitoring tool for human subject protection. FDA inspects IRBs to determine their adherence to federal human subject protection requirements. FDA inspections of IRBs consist primarily of an on-site examination of the IRBs' minutes, written operating procedures, and other documentation that substantiates initial and continuing review and proper IRB membership. During these inspections, FDA interviews the chair or the administrator of the IRB to learn details about the IRB's operation. FDA also determines whether consent forms contain all required elements and are signed by subjects.

FDA has three levels of priority for inspecting the roughly 1,200 IRBs that oversee drug research. FDA gives top priority to the reinspection of IRBs for which it found serious deficiencies in the IRBs' review of studies. FDA's next priority is examining IRBs that were unknown to FDA until identified by researchers in their applications to begin drug studies with human subjects. FDA's lowest priority is the routine reinspection of IRBs. Between fiscal years 1990 and 1995, CDER issued each year, on average, the results of 158 inspections of IRBs overseeing drug research.

Between January 1993 and November 1995, FDA issued 31 Warning Letters to institutions regarding significant deficiencies in the performance of their IRBs' oversight of drug research. These Warning Letters imposed sanctions—until CDER received adequate assurance that the IRB had taken corrective action—on the IRBs' ability to approve new studies, allow entry of new subjects into ongoing studies, or both. Among the more serious violations cited were the following: researchers participated as IRB members in the review of their own studies; institutional officials falsely claimed no trials had been conducted that would have required IRB review; IRBs had no process to track ongoing studies; IRBs used expedited rather than full review to approve major study changes; IRBs failed to correct deficiencies noted during a previous FDA inspection; IRBs failed to ensure that required elements of informed consent were contained in consent documents; and IRBs allowed their members to vote by telephone instead of convening the board.

FDA officials told us that FDA has never had to invoke its ultimate sanction—disqualification—for seriously deficient IRBs. On about 60 occasions, institutions disbanded their IRBs upon FDA's findings of serious noncompliance. In most of these instances, the research projects approved by the IRBs had already been completed. FDA's examination of individual drug studies is another component of its human subject protection monitoring.
Before a manufacturer can receive FDA approval to market a drug, it must satisfy FDA that it has complied with FDA’s human subject protection regulations during clinical trials. The monitoring includes reviews of progress reports and on-site inspections. Although FDA examines documentation on protection matters, its principal focus in these efforts is to verify the accuracy and completeness of study data as well as the researcher’s adherence to the approved protocol. When researchers begin clinical trials, FDA’s Office of Drug Evaluation requires them, through their sponsors, to submit annual progress reports and also to report within 10 working days any serious and unexpected adverse incidents involving subjects as well as major changes to the study protocol. If these reports indicate potential or actual harm to subjects, FDA can suspend or terminate the study. FDA’s on-site inspections of drug studies generally occur after clinical trials have concluded. There are two types of inspections: routine and for-cause. Routine inspections are conducted after a manufacturer has completed its clinical trials and submits a new drug application (NDA) to FDA for approval to market the product. During fiscal years 1990 through 1995, FDA issued each year, on average, the results of about 265 routine inspections of drug studies. The sites visited are typically university-based research facilities, independent testing laboratories, and the offices of physicians participating in drug trials. Inspections of drug studies also include an assessment of how well subjects were protected during the study: whether the consent document, study protocol, and required revisions to them were reviewed and approved by an IRB before enrolling subjects; whether signed consent forms were obtained from each enrolled subject; whether adverse incident and status reports were submitted to the IRB once research began; and whether subjects were recruited properly. FDA inspectors look for evidence that researchers reported all safety-related information to the sponsor, reasons why subjects dropped out of the study, and other matters related to the integrity of study data. In addition, FDA often interviews researchers and sometimes interviews subjects. While routine inspections generally occur after completion of clinical trials, for-cause inspections can occur at any time during the course of drug testing with humans. FDA conducts for-cause inspections when its review of status reports submitted by researchers indicates possible misconduct, or when it receives allegations of serious misconduct. FDA conducts about a dozen for-cause inspections annually. Most of the violations FDA identifies through its routine inspections of individual drug studies are relatively minor. From 1977 to 1995, about one-half of the violations related to the adequacy of the informed consent forms. For example, FDA frequently found violations of the requirement to specify in the informed consent document whom subjects can contact if they have concerns about research, subjects’ rights, or research-related injury. FDA also identified more serious violations in its routine and for-cause inspections. We reviewed 69 of the 84 letters describing deficiencies that FDA issued to drug researchers between April 1980 and November 1995. 
These letters cited instances of serious misconduct, including failure to obtain informed consent; forgery of subjects’ signatures on informed consent forms; failure to inform patients that a drug was experimental; fabrication of data to make subjects eligible for study; submission of false electrocardiograms, X rays, and lab test results to the company underwriting the research; failure to report subjects’ adverse reactions to drugs under study, including a subject’s death; failure to obtain informed consent and an IRB’s approval for a study touting a human growth hormone as a cure for Alzheimer’s disease; proceeding with a cancer study after FDA had suspended it for protocol deficiencies; and failure to inform patients that a drug sold to them was experimental and contained a steroid. Since 1980, FDA has taken 99 actions against 84 clinical investigators regarding their conduct of drug research with human subjects. FDA has used four types of actions to enforce its regulations: (1) obtaining a promise from a researcher to abide by FDA requirements for conducting drug research; (2) invoking a range of restrictions on a researcher’s use of investigational drugs; (3) disqualifying a researcher from using investigational drugs; and (4) criminally prosecuting a researcher. OPRR also responds to inquiries and investigates allegations, but few investigations result in site visits; inquiries and investigations are largely handled by telephone and correspondence. OPRR receives complaints about human subject protection issues from a variety of sources, including NIH inspection teams, FDA, subjects and their families, staff from research institutions, news media, and the Congress. The majority of noncompliance reports come from the institutions themselves, which are required to report unanticipated problems, such as injuries and serious or continuing noncompliance, to OPRR as part of the assurance agreement. The number of compliance cases investigated by OPRR grew from 32 open cases in January 1993 to 107 cases under investigation in June 1995. OPRR officials and others attribute the increase to a heightened awareness of human subject protection issues and more extensive media coverage of untoward research events rather than to an increase in the actual occurrence of noncompliance. Over the past 5 years, OPRR’s compliance staff of four full-time employees and two volunteers have investigated several studies for allegations involving serious human subject protection violations. One such example was OPRR’s investigation of whether informed consent procedures clearly identified the risk of death to volunteers in the tamoxifen breast cancer prevention trial. OPRR found that informed consent documents at some sites failed to identify some of tamoxifen’s potentially fatal risks, such as uterine cancer, liver cancer, and embolism. In another instance, OPRR compliance investigators found deficiencies in informed consent and in IRB review procedures in a joint NIH-French study of subjects who had tested positive for the human immunodeficiency virus (HIV) in Zaire. In a third case, OPRR compliance staff investigated a study of schizophrenia at a major university because of complaints from families of two subjects associated with the study. In that investigation, OPRR found that the informed consent documents failed to adequately describe the research procedures, research risks, and alternative courses of treatment. 
In addition, OPRR found that the researchers inappropriately obtained the subjects' oral consent rather than written consent as required by HHS regulations. Among cases currently under investigation, OPRR is reviewing allegations that researchers at a university-based fertility clinic transferred eggs from unsuspecting donors to other women without the consent of the donors. Our review of OPRR files showed that OPRR found such deficiencies as the failure of an IRB to give full review of projects at a convened meeting or to adequately review ongoing research. OPRR also found IRB approval of informed consent documents that did not clearly state the study's purpose, did not identify the risks of the research, and did not present information that would be understandable to the subjects.

In many cases, OPRR has required institutions to take corrective action. In some instances, OPRR has suspended an institution's authority to conduct further research in a particular area until problems with its IRBs were corrected. From 1990 to mid-1995, there were 17 instances in which OPRR imposed some type of restriction on an institution's authority to conduct human subject research. For example, in some cases, OPRR suspended the enrollment of new subjects; in others, OPRR excluded certain types of research from coverage by multiple project assurances, thereby requiring single project assurances and the direct involvement of OPRR in reviewing each study's informed consent forms and other documents. To document corrective actions, institutions are generally required to submit quarterly reports to OPRR. OPRR lifts a restriction when it is satisfied that the institution has taken appropriate corrective actions—in most cases, after receiving quarterly reports for about 12 to 18 months.

Oversight systems are by nature limited to minimizing, rather than fully eliminating, the potential for mishap, and HHS's system for protecting human subjects is no exception. Various factors reduce or threaten to reduce the system's effectiveness. IRBs face the pressure of heavy workloads and competing professional demands. OPRR is often remote from the institutions it oversees. FDA's processes, while including on-site inspections, may permit human subject protection violations to go undetected. Moreover, the complexity and volume of research under review and the difficulty of ensuring that individuals truly understand the risks they may experience as research subjects can weaken the effectiveness of human subject protections.

Federal officials, experts, and research community members we interviewed consistently mentioned several concerns about the operations of IRBs. First, IRB reviews are labor intensive and time consuming, forcing boards to balance the need to make reviews thorough against the need to get them done. IRB members are usually physicians, scientists, university professors, and hospital department heads who are not paid for their IRB service. Board members themselves told us they face a heavy workload, and others in the research community have raised concerns that heavy workload impairs IRB review. In some cases, the sheer number of studies leaves IRBs only 1 or 2 minutes of review per study. FDA found one IRB that had reviewed as many as 200 proposals and ongoing studies at a meeting. Several experts told us of other instances in which IRBs had reviewed 100 to 150 studies in one meeting.
In many such cases, one, two, or several individuals—known as “primary reviewers”—may be assigned to examine a study comprehensively in advance of the IRB meeting, often held monthly. In these cases the other IRB members rely on the conclusions drawn by the primary reviewers and may be less prepared to identify and discuss potential problems with proposals. In addition, IRB members and researchers told us that, given the time constraints, a good portion of the meetings is devoted to assessing the adequacy of the consent forms at the expense of reviewing research designs. Second, federal officials and experts in IRB issues have been particularly concerned with IRBs’ conduct of continuing reviews. They assert that these reviews are typically either superficial or not done at all. According to OPRR officials, IRBs have not always understood the requirements for continuing review, and, in other cases, IRB workload demands have reduced the quality of this review. In some cases, IRB administrative staff with no scientific expertise—not IRB members themselves—review continuing review forms, ensuring only that the information has been provided. Heavy workload also necessitates that IRBs rely largely on investigators’ self-assessments in conducting continuing reviews. That is, IRBs review statements completed by the study’s investigators and, with rare exceptions, do not verify the accuracy of the reported information. Although experts disagree on the desired level of IRB verification, its value was demonstrated recently in a report by HHS’ Office of Inspector General. The report cited one instance in which nine researchers failed to notify their IRBs, as required, of major deviations from a study protocol. In another instance, a surgeon reported to the IRB the implantation of an experimental device in 37 subjects. The HHS review team found that this surgeon and his coinvestigators had actually implanted the device in 258 subjects, thus far exceeding the limit of 75 subjects specified in the research protocol and approved by the IRB. In cases such as these, the possibility exists that a researcher could selectively report favorable results. Third, experts we interviewed raised concerns about the independence of IRB reviews. For example, they told us that close collegial ties with researchers at their institutions, pressures from institution officials to attract and retain government or corporate research funding, financial ties to the research study, and reluctance to criticize studies led by leading scientists can compromise the independence of IRB reviews. Although most experts we interviewed agreed that instances of these problems occur, they did not have enough evidence to determine the frequency or the extent of the problem. Finally, some IRBs are viewed by their institutions and by researchers as a low-priority administrative hurdle. As a result, these IRBs have difficulty securing the administrative and computer support they require. For example, OPRR has found instances of IRB staff working in office space insufficient to conduct review board business effectively, manual filing systems too primitive to ensure that continuing reviews were conducted at the required times, and lack of privacy for IRB staff to take the sensitive telephone calls of subjects who may want to register complaints. At such institutions, researchers may not always follow IRB requirements, such as revising informed consent forms or reporting adverse events. 
OPRR’s reliance on the assurance process for preventing the violation of human subject protections requires that OPRR have sufficient basis for judging an institution’s ability to satisfy human subject protection requirements. At times, however, OPRR’s assurance negotiation process falls short of that goal. OPRR staff are rarely direct observers of the institutions they oversee. They make no site visits during assurance negotiations, but instead review solely an institution’s written application and conduct written or oral follow-up. Usually, document review does not include an examination of the manuals that detail the human subject protection procedures that the institution requires its IRBs and researchers to follow. Similarly, almost all of OPRR’s compliance investigations— reviews in response to allegations of misconduct—are carried out through correspondence. In the 5 years preceding April 1995, OPRR made 15 site visits as part of the 202 compliance investigations it completed. What OPRR has found in its site visits made in the course of investigating allegations of violations illustrates the value of such visits. For example, when we accompanied OPRR on a compliance site visit to a major research university, OPRR learned details about the institution’s IRB operations and reporting chain idiosyncrasies that it was previously unaware of despite having reviewed the institution’s assurance documents. This visit resulted in the temporary suspension of the human subject research under the surveillance of one of the university’s two IRBs. OPRR officials told us that they lack the time and funds for more site visits for assurance negotiations or compliance. They acknowledged, however, that when they did make site visits, their investigations were significantly enhanced by communicating face-to-face with officials, researchers, and the administrative staff assigned to the institution’s IRB. On-site investigations have also been more thorough and expeditious because OPRR had ready access to study files and IRB records and could quickly follow leads. Site visits also provided OPRR the opportunity to educate institutional staff about ethical conduct practices by enabling OPRR staff to be immediately available to discuss and answer questions about human subject protection issues. Through these exchanges, OPRR staff learned about problems, such as those with continuing review, that other institutions could be experiencing. Experts we interviewed also said that OPRR’s prevention efforts would be more effective if it were to make site visits to institutions in the process of approving and renewing assurances. In addition, NIH’s organizational structure may hamper OPRR’s independent oversight and enforcement of human subject protection regulations, although we found no specific instance in which this occurred. Although OPRR is located within the Office of Extramural Research, OPRR is responsible for enforcing compliance with human subject protection regulations for research conducted or supported by both the Office of Intramural Research and the Office of Extramural Research. Under this structure, the OPRR Director reports to the Deputy Director for Extramural Research, who, in turn, reports to the Director of NIH. Because the Deputy Director for Intramural Research also reports to the Director of NIH, OPRR has no direct authority over the research conducted by the intramural program. 
As a result, when OPRR cited NIH’s Office of Intramural Research in 1991 for compliance violations, for example, OPRR had to depend on that office’s good will and professional conduct to implement the corrective action plan proposed by OPRR, since OPRR did not have direct authority to require NIH to correct violations. According to OPRR, NIH will complete implementation of the plan by April 1996, 5 years after the problems were noted. From a broader organizational perspective, a potential weakness exists because NIH is both the regulator of human subject protection issues as well as an institution conducting its own human subject research. The Director of NIH, therefore, has responsibility for both the success of NIH’s intramural research program and for the enforcement of human subject protection regulations by OPRR. In some instances, FDA’s oversight efforts may permit violations of human subject protections to go undetected. For example, researchers who use human subjects in drug research are required to submit to their sponsor periodic progress reports during the course of the trials. These reports include adverse events, project status, and changes to the research protocol. The sponsor, in turn, reports adverse events to FDA. The reporting process, however, is a passive one in which FDA relies on researchers and their sponsors to report potential or actual adverse medical events during clinical trials. Violations of subjects’ rights, such as inadequate informed consent or IRB review, however, are not required to be reported. Two gaps in FDA’s inspection of drug studies have implications for human subject protections. First, FDA only conducts routine on-site inspections after clinical trials have concluded and subjects have completed their participation. Second, FDA officials told us that because of resource limitations, FDA does not inspect all studies; instead, it concentrates its efforts on those products that both are likely to be approved for consumer use and could pose high risk to consumers. FDA officials told us that the primary reason for these inspections is to review the integrity of the study’s data before initiating a review of the drug’s safety and effectiveness. In essence, then, FDA’s inspection program is geared more toward protecting the eventual consumer of the drug than the subjects on whom the drug was tested. Gaps also exist in FDA’s inspection of IRBs. CDER annually issues the results of about 158 inspections of the approximately 1,200 IRBs reviewing drug studies, although its goal has been to complete and issue reports on about 250 inspections each year. We found that in one of FDA’s 21 districts—a district that contains several major research centers conducting studies with human subjects—12 IRBs had not been inspected for 10 or more years on behalf of CDER, CBER, or CDRH. Furthermore, although FDA’s policy is to accelerate the timetable for reinspecting IRBs found to have significant problems, we noted instances in which FDA conducted its reinspection 3 to 5 years later. FDA officials told us that, because of resource constraints, IRB inspections receive lower priority than inspections of FDA-regulated products or manufacturing practices. Finally, experts we interviewed raised concerns about the unevenness of FDA inspectors’ expertise, which they believe could enable human subject protection violations to go undetected. 
FDA officials acknowledge that some inspectors may be inadequately prepared to understand the human subject protection implications of drug studies and to ask meaningful follow-up questions on the research protocols they review. FDA officials also noted that some inspectors lack practical experience in reviewing drug studies because they work in districts with few bioresearch sites and therefore usually inspect other types of regulated products.

Several additional pressures make guaranteeing the protection of human subjects difficult. Many of the experts we interviewed raised concerns about the growing complexity of science, the increasing number of multicenter trials, and the vulnerability of certain subject populations. The extent of these problems, however, has not been studied.

First, the increasing complexity of research makes it difficult for IRBs to assess human subject protection issues when members are not conversant with the technical aspects of a proposed study. In such cases, the IRB's ability to assess the risks or benefits posed to subjects and the adequacy of language found in the consent document is questionable. In addition, cutting-edge science can present new ethical dilemmas for IRBs to confront. Experimental human reproductive techniques and ownership of genetic material, for example, have raised ethical questions that thus far have not been satisfactorily resolved.

Second, the growing number of large-scale trials carried out at multiple research sites presents other problems for IRBs, both at initial and continuing review. Proposals for multicenter trials are reviewed by an IRB associated with each local research site. If most involved IRBs have approved a proposed study—that is, determined that the study is safe, ethical, and appropriately described in consent forms—then remaining IRBs at other institutions may feel pressured to mute their concerns about the study. Furthermore, during the course of a multicenter trial, each participating IRB receives numerous reports of adverse events from other research sites. Because of the volume of reports, IRB members may have difficulty discerning which adverse events are both relevant and serious enough to warrant their taking note of them.

Third, the personal circumstances of some subjects can compromise the protections the system affords them. For example,

"...patient-subjects who have serious illnesses may have unrealistic expectations both about the possibility that they will personally benefit by being a research subject and about the discomforts and hardships that sometimes accompany research."

Volunteers who want to be included in biomedical or behavioral studies because they believe in the advancement of science or because researchers offer financial incentives are another group whose personal stake in the research may go unnoticed by IRBs and researchers, thereby weakening oversight.

Fourth, an inherent conflict of interest exists when physician-researchers include their patients in research protocols. If the physicians do not clearly distinguish between research and treatment in their attempt to inform subjects, the possible benefits of a study can be overemphasized and the risks minimized.

Fifth, pressures to recruit subjects can lead researchers and IRBs to overlook deficiencies in efforts to inform subjects of potential risks. This problem has been exacerbated, a consultant to IRBs told us, by NIH and FDA guidelines that now require that subjects selected for the studies over which the agencies have jurisdiction reflect the gender and racial composition of potentially affected populations.
These guidelines are intended to ensure that research results can be generalized to the widest possible range of population groups.

Finally, the line between research and medical treatment is not always clear to clinicians. Controversy exists regarding whether certain medical procedures should be categorized as research. For example, in some cases physicians may use an innovative but unproven technique to treat patients without considering the procedure to be research. From the standpoint of the physicians, they are providing treatment to individual patients rather than conducting a clinical trial. Given this view, they do not seek IRB approval. From the standpoint of experts we interviewed, however, such treatments could constitute unregulated research and place people at risk of harm from unproven techniques.

With the issuance of federal regulations covering much human subject research and the maturation of the HHS oversight system, researchers have become more aware of ethical conduct standards and more often comply with them. Because no oversight system can be designed to guarantee complete protection for each individual, holes inevitably exist in the regulatory net. Federal and IRB reviewers rarely observe the interaction between researchers and subjects during the informed consent process or throughout the course of the study. Whether research institutions are examined by OPRR for eligibility to receive HHS funding, research studies are assessed by IRBs for their compliance with HHS regulations, or applications to conduct drug trials are reviewed by FDA, oversight is present, but at a distance. There is consensus among experts and regulators about the benefits of first-hand review, but continuous on-site inspections of every research institution and its studies are neither feasible nor desirable because of the regulatory burden this would impose on both the research community and regulators. Finding the balance, however, between that extreme and a process that relies almost exclusively on paper reviews is the fundamental challenge facing regulators and IRBs in the current HHS oversight system.

Individuals participating in biomedical and behavioral research are essential to the advancement of science and medicine. Federal regulators and research institutions, therefore, continually strive to improve the protection of human participants without imposing an unwieldy, burdensome regulatory apparatus. To continue to prevent the occurrence of human subject protection violations and to identify and correct violations that do occur remain essential objectives of the system. Given the many pressures that can weaken the effectiveness of the protection system, continued vigilance is critical to ensuring that subjects are protected from harm.

NIH and FDA reviewed a draft of this report and provided comments, which are reproduced in appendixes III and IV. NIH and FDA found the report to be generally accurate and suggested revisions to clarify specific aspects of our discussion of the human subject protection system. We incorporated these as appropriate, basing the changes in some instances on further discussions with officials from each agency. In its comments, NIH recognized the importance of on-site visits to research institutions by OPRR staff and noted that the number of technical assistance visits would be increased to 12 to 24 per year.
This action should help strengthen human subject protection efforts by institutions and investigators as well as improve OPRR's assurance, monitoring, and enforcement efforts.

In its comments, NIH also stated that OPRR's independent oversight and authority to enforce human subject protection regulations within NIH are not compromised by OPRR's location within the NIH organizational structure. NIH said that the lines of authority of the NIH Deputy Director for Intramural Research and the OPRR Director do not cross within NIH and, therefore, that OPRR's authority is not compromised. We disagree with NIH's conclusion and believe that a potential weakness exists in OPRR's ability to enforce human subject protection regulations within NIH. This weakness results from the chain of command within NIH and the NIH Director's dual responsibilities for the success of the intramural research program and OPRR's enforcement of human subject protection regulations. We have amplified our discussion of these issues in the report.

In its comments on our draft report, FDA raised concerns that our work understates FDA's accomplishments and the efforts to protect human subjects of product testing by the industries regulated by FDA. Because human subject protection activities in drug research account for most of FDA's efforts in this area, we limited the scope of our work to an examination of CDER's oversight. We have modified the report to acknowledge the human subject protection activities of the Center for Biologics Evaluation and Research and the Center for Devices and Radiological Health. Furthermore, we have clarified that the inspection reports and actions to enforce regulations we discuss are for CDER's oversight of IRBs and drug studies, and we have included additional information FDA provided on fiscal year 1995 activities.

FDA also focused on our presentation of aspects of its IRB inspection programs. FDA commented that (1) the IRB inspection program is the principal way in which FDA addresses the issue of human subject protection, (2) IRB inspections can enhance protection for subjects in specific studies, and (3) an IRB inspection conducted for one center—for example, CDER—can serve to protect subjects in studies regulated by CBER and CDRH. We have modified the report to address these points.

As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 7 days from the date of this letter. At that time, we will send copies of this report to the Secretary of HHS, the Director of NIH, the Commissioner of FDA, and other interested parties. This report was prepared under the direction of Mark V. Nadel, Associate Director for National and Public Health Issues. If you or your staff have any questions, please call me at (202) 512-7119 or Bruce D. Layton, Assistant Director, at (202) 512-6837. Other major contributors to this report include Frederick K. Caison, Linda S. Lootens, and Hannah F. Fein.

We focused our work on the Department of Health and Human Services (HHS)—the federal department sponsoring biomedical and behavioral research with the largest human subject research budget, over $5 billion in fiscal year 1995. Within HHS, we examined the policy and oversight roles of the two entities with primary responsibility for protecting human research subjects: the National Institutes of Health's (NIH) Office for Protection from Research Risks (OPRR) and the Food and Drug Administration (FDA).
OPRR is responsible for enforcing compliance with HHS human subject protection regulations when human subject research is conducted or supported by HHS. FDA is responsible for protecting the rights of human subjects enrolled in research with products it regulates—drugs, medical devices, and biologics. We limited our review to FDA's Center for Drug Evaluation and Research (CDER) because drug research is the largest segment of biomedical research. Because of this volume, FDA conducts more oversight activities in the drug products area than it does for medical devices and biological products, with CDER carrying out most of FDA's human subject protection activities. Although FDA's Center for Biologics Evaluation and Research and Center for Devices and Radiological Health also have programs to protect human subjects, these Centers were not included in our review.

To gather information about the federal role in protecting human subjects, we interviewed NIH, OPRR, and FDA officials and reviewed regulations, policies, procedures, guidelines, and educational materials the entities provide to institutional review boards (IRB) and researchers. To learn about the nature of OPRR findings and corrective actions, we reviewed 40 of the 166 compliance case files handled by OPRR from 1988 through March 1995, including 30 files we randomly selected and 10 files OPRR officials selected as representing the most serious violations. We accompanied OPRR staff on a compliance site visit to a major research institution and reviewed OPRR site visit reports from compliance visits conducted from September 1990 through December 1994. We also reviewed examples of inspection files, 69 of the 84 letters describing deficiencies that FDA issued to drug researchers from April 1980 through November 1995, and all 31 Warning Letters issued to IRBs regarding their oversight of drug research between January 1993 and November 1995. In addition, we reviewed correspondence between FDA and institutions in cases where FDA inspections found that IRBs did not comply with human subject protection regulations.

To examine how local level protections work, we reviewed the professional literature, including the reports of presidential and congressional commissions; interviewed research institution officials, IRB members, and researchers; and reviewed research documents, such as institutional guidelines for IRBs and researchers, IRB minutes, and informed consent forms. We attended an IRB meeting to observe an IRB review of proposed research. We interviewed numerous experts from across the nation with experience in bioethics, medicine, social science, law, and human subject protection issues. These experts included university and hospital researchers, subjects' rights advocates, IRB members, human subject protection consultants, and representatives from the drug industry. We performed our field work from September 1994 to December 1995 in accordance with generally accepted government auditing standards.

Key events in the development of federal human subject protections include the following:

HEW issues first federal human subject protection regulations.
President orders creation of National Bioethics Advisory Commission.
Congress enacts National Research Act (P.L. 93-348) requiring written assurances from research institutions and IRB review.
Presidential Advisory Committee on Human Radiation Experiments formed to investigate Cold War radiation experiments.
National Commission established by Congress to make recommendations on bioethical issues.
Fifteen other federal agencies adopt regulations based on the core of the HHS regulations, known as the Common Rule.
NIH institutes require awardees to provide statement of responsibilities for conduct of hazardous research.
HHS and FDA human subject protection regulations made substantially identical.
Surgeon General issues subject protection policy for all Public Health Service-supported research.
HHS adopts regulations for research involving fetuses, pregnant women, human in vitro fertilization, and prisoners.
HHS adopts regulations for research involving children.
Public Health Service concedes that in a 40-year study in Tuskegee, Alabama, treatment was withheld from black men with syphilis.
Previously classified Cold War-era human radiation experiments revealed.
Advisory Committee on Human Radiation Experiments reports deficiencies in current human protection system and recommends specific improvements.
Human radiation experiments at University of Cincinnati in which adequacy of informed consent is questioned.
Study commissioned by NIH finds that few research institutions have effective subject protections.
Unsuspecting patients given investigational drug thalidomide, causing severe birth defects in children.
Pursuant to a congressional request, GAO reviewed the federal oversight systems for protecting human subjects in federally sponsored scientific experiments, focusing on whether the oversight procedures: (1) have reduced the likelihood of abuses of human subjects; and (2) have weaknesses that could limit their effectiveness. GAO found that: (1) federal efforts to prevent the abuse of human research subjects include establishing institutional review boards, educating the research community, and requiring written commitments from researchers to comply with standards for the protection of human subjects; (2) although these efforts work to prevent harm to participants in most experimental studies, the effectiveness of those standards in ensuring compliance has not been systematically studied; (3) federal monitoring activities for the protection of human research subjects include on-site inspections and reviews of study documentation, institutional review board operations, and allegations of misconduct; (4) actions to enforce the human research subject protection requirements include research restrictions, researcher disqualification, criminal prosecution, and suspensions from conducting further research; and (5) the oversight procedures are impaired by institutional review boards' heavy workloads and competing demands, limited funds for on-site inspections, the complexity and volume of research under review, and reliance on researchers' self-assurances that they are complying with requirements.
In the public and private sectors alike, concerns about quality of health care are intensifying as purchasers of health insurance shift from traditional indemnity plans to managed care. With plans’ increased focus on controlling the skyrocketing costs of health care benefits, there are concerns about the value of the health benefits purchased. As a result, several large private purchasers have begun to examine “value-based purchasing.” Key to value-based purchasing is the measurement of health plan quality using different types of quality-related data to hold plans accountable and encourage improvements. Lessons learned from the experience of large purchasers may be applicable to the Health Care Financing Administration (HCFA), the nation’s single largest payer for health care. HCFA administers the Medicare program, which provides care for about 38 million beneficiaries, over 5.5 million of whom are currently in health maintenance organizations (HMO). It purchases health care coverage for almost all of the nation’s elderly population and more than 4 million disabled beneficiaries. Like purchasers in the private sector, the federal government has looked to managed care as a way to help contain costs associated with providing health care to Medicare beneficiaries. At the same time, the agency wants to ensure that the beneficiaries currently enrolled in health plans and those who enroll in the future are receiving high-quality care. With the passage of the Balanced Budget Act of 1997—a major piece of legislation affecting the Medicare program—HCFA will have more plans and more types of plans to monitor for the quality of care provided to beneficiaries. In an effort to curb the double-digit inflation in health care costs of the 1980s, large purchasers increasingly turned to managed care. The rise in managed care enrollment has been swift. From 1987 to 1996, enrollment in managed care provided through private employers nearly tripled. According to a 1997 survey of health benefits offered by firms with 200 or more workers, only 19 percent of employees are still enrolled in indemnity programs, which allow a free choice of providers and reimburse physicians and hospitals with limited or no review of the appropriateness of services rendered. In addition, traditional indemnity coverage uses a fee-for-service payment mechanism to reimburse providers. The remainder of employees with health insurance receive care through a variety of health plans. These can include (1) HMOs, (2) preferred provider organizations (PPO), and (3) point-of-service (POS) plans. HCFA has also seen a rapid increase in managed care enrollment in Medicare. However, unlike the private sector, the vast majority of Medicare beneficiaries still receive care through fee-for-service arrangements. In the early 1970s, the Congress encouraged commercial and Medicare use of HMOs by authorizing federal standards and oversight to ensure reasonable care and service. Between 1994 and 1997, enrollment in Medicare HMOs increased by 75 percent. There has also been a dramatic increase in the number of plans Medicare contracts with. Currently, HCFA contracts with close to 400 health plans to provide health care to over 5.5 million beneficiaries, about 14 percent of the total Medicare population. With the passage of the Balanced Budget Act, even greater growth in Medicare beneficiary enrollment in managed care can be expected. 
The act permits contracts between HCFA and a variety of managed care entities, including PPO and POS plans as well as provider-sponsored organizations, which are similar to HMOs but are directly controlled by groups of providers. The Congressional Budget Office (CBO) projects that as a result of the passage of the act, all types of managed care organizations will account for 25 percent of Medicare enrollees in 2002, 38 percent in 2008, and about 50 percent by 2030.

With increased use of managed care, public and private purchasers must consider strategies to monitor plans and ensure the quality of the care they provide. The Institute of Medicine has formally defined quality of care as "the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge." In evaluating plans, one or more of the following dimensions of quality can be measured:

Appropriateness: Are providers giving patients the care they need?
Technical excellence: How well are providers using medical science and knowledge to deliver care to patients?
Accessibility: Are patients able to obtain care when needed and within reasonable proximity to where they live or work?
Acceptability: Are patients satisfied with the care they receive?

Since the concept of quality is multidimensional, experts describe the importance of using different types of measures to evaluate care. For example, the Foundation for Accountability (FACCT)—a forum for consumers and purchasers, including HCFA—argues for the importance of balancing the use of quality measures to reflect (1) the results of care, (2) whether patients are satisfied with the care received, and (3) whether the appropriate processes have been followed.

Performance indicators are used to measure the various attributes of quality. For example, for clinical attributes, they can measure appropriateness and technical excellence—that is, providers' actions and the outcomes of those actions. Process-related indicators refer to clinical interventions, such as the diagnostic tests performed by a physician when examining a patient. In contrast, outcome indicators measure the results of providers' activities, such as mortality and morbidity.

Outcome measures are critical to evaluating the quality of care, but experts recognize that these measures are not fully developed. A number of questions have been raised about the reliability and validity of certain measures and the data sources for performance indicators. For example, data from computerized administrative databases maintained by managed care plans and from individual patient medical records kept in providers' offices may be inaccurate, incomplete, or misleading. This is because most administrative databases were designed for financial—not clinical—purposes. In addition, providers may enter incorrect information in medical records or not document certain interventions. In an earlier report, we expressed concerns about the reliability of satisfaction data, since most people lack the knowledge needed to adequately evaluate the appropriateness of the care that they receive or do not receive. We also noted in the report that plan-reported data on access-related measures, such as what constitutes a sufficient provider network, do not necessarily ensure that access to care is received. Such data must be checked by independent and systematic monitoring efforts that go beyond plan-reported, paper-based indications of compliance.
Despite problems in measurement, some large companies—concerned with absenteeism and reduced productivity from illness—have begun to apply value-based purchasing concepts when purchasing health plan services. For example, these companies have considered information about quality to assess, rank, and select health plans and to monitor ongoing plan performance against standards and negotiate rates based on these standards. In addition, these companies are providing information on plan performance to employees to help inform their selection of health plans. Large purchasers have spearheaded several initiatives as they search for credible tools to help them identify and demonstrate to others the "value" resulting from premiums paid to managed care plans. For purchasers, standardized measures can help them to set desirable goals or "benchmarks" for health plans in different areas of interest or concern to the purchaser, provide feedback to plans on the results of such performance, and monitor the progress of plans against these goals.

In the early 1990s, a committee of health plan representatives and corporate purchasers began to work on a set of standardized performance measures, which were later revised by the National Committee for Quality Assurance (NCQA)—a nonprofit institution that reviews and accredits health plans. The result of these efforts, the Health Plan Employer Data and Information Set (HEDIS), is now in its third generation and currently covers the following categories: effectiveness of care, access and availability of care, satisfaction with the experience of care, informed health care choices, descriptive information on health plans, the cost of care, health plan stability, and the use of services.

Another major effort by purchasers, with participation by HCFA and other government agencies, was the creation of FACCT to develop standardized outcome measures. In 1996 and 1997, FACCT endorsed comprehensive measurement sets for asthma, diabetes, breast cancer, and major depression, as well as other areas; some of these indicators focus on outcomes. Now FACCT is coordinating efforts with NCQA and others to create comprehensive measures for children's health, HIV/AIDS, end-of-life care, coronary artery disease, and alcohol misuse. FACCT has also developed a "consumer information framework" for purchasers, which emphasizes the importance of a consistent and understandable framework for presenting quality-related information to consumers. One example of this information is the ability of health care organizations to maximize functioning and quality of life when a consumer faces chronic, incurable illnesses, such as diabetes and asthma.

Despite the involvement of some major purchasers in the development of quality-related measures, surveys conducted by the Watson Wyatt consulting firm with the Washington Business Group on Health (WBGH) in 1996 and 1997 concluded that cost still prevails as the principal concern when most employers evaluate a managed care plan. The surveyed employers noted, however, that they are beginning to look more closely at issues such as plan coverage and access in judging health plan value. And a significant number of employers are requiring plans to report HEDIS data, with some making it a prerequisite for health plans that wish to contract with them. They also view accreditation as providing assurance that a health plan is attempting to manage the quality of care.
While employers are beginning to make increased use of quality-related data in screening plans with which to contract, they may not necessarily be using it throughout the purchasing and monitoring process to the extent desired by proponents of value-based purchasing. A recent mapping of activities by individual employers and business coalitions concluded that only a limited number are actually implementing the principles of value-based purchasing. The Chairman and the Ranking Minority Member of the Senate Special Committee on Aging asked us to study how large corporate purchasers use quality-related information collected from health plans and the applicability of purchasers’ experiences to HCFA. Specifically, we agreed to describe (1) how large purchasers use quality-related data to seek or promote better quality of care and (2) lessons that can be learned from their experiences for HCFA in administering the Medicare program. In conducting our review, we analyzed and synthesized relevant literature about managed care and discussed value-based purchasing and quality measurement with employers and with HCFA officials. We then conducted detailed case studies with four large purchasers of managed care for employees. During site visits with these purchasers, we discussed how they incorporated quality-related data into their purchasing and monitoring decisions and the results they believe are attributable to their efforts. We also reviewed available data on results achieved through these efforts. For the purposes of the case study analysis, we defined “results” in terms of improved health plan performance on dimensions measured; increased health plan accountability to the purchaser or enrollee; and actions taken by purchasers, health plans, providers, or consumers in response to quality-related data. As such, we defined results not in terms of outcomes in the sense of clinical quality but rather those that indicated improvement in the performance of health plans in the dimensions measured by the purchaser. We selected purchasers for case studies that met the following criteria: the purchaser had (1) received performance measurement information from managed care plans at least twice, (2) documentation of specific examples of data uses and results, and (3) experience with managed care markets in several regions of the country or was able to exercise major leverage as a purchaser in at least one market. Also, we sought large purchasers that were willing to allow us access to their information and to spend time responding to our questions. Given these criteria, we selected four purchasers that represented a range of characteristics and experience with managed care: the California Public Employees’ Retirement System (CalPERS), Federal Express, Johnson & Johnson, and Southern California Edison. Of the four purchasers we studied, Federal Express and Johnson & Johnson can be characterized as national, as they purchase care for large concentrations of employees in multiple markets. Southern California Edison and CalPERS can be characterized as regional, as the vast majority of the employees for whom they purchase care are located in a single state or market. Two of the purchasers began offering managed care to their employees before 1994, and two began offering managed care since 1994. (See app. I for additional details on each purchaser.) We performed our work for this study between August 1996 and May 1998 in accordance with generally accepted government auditing standards. 
We also provided a draft of the report to HCFA and the four purchasers we visited for review and comment. They provided technical suggestions, which we have incorporated where appropriate.

The four purchasers we studied achieved results—in health plan access, service by health plans to employees of the purchaser, satisfaction, and cost savings—by making use of multiple types of quality-related data, primarily those relating to satisfaction with care. They used these data to negotiate increased services from health plans, improve health plan performance, and inform employees about their health care choices. This chapter examines more closely those uses that have achieved demonstrable results. To date, purchaser assessments of health plan quality have largely focused on issues of accessibility and acceptability and whether health plans effectively administer their daily operations. As the four purchasers evaluate the benefits derived from their and others' use of quality measures, they anticipate making even greater use of quality-related data.

Purchasers can require quality-related data from health plans as a contracting requirement in order to focus the plans' attention on purchaser priorities and set the stage for subsequent quality improvement and accountability activities. To collect and analyze quality-related data, purchasers use different types of information from a variety of sources. Improvements in access to services and in health plan capacity to report on HEDIS measures are some of the results from these activities, according to the purchasers we visited.

Purchasers use a variety of data sources to assess whether or not to contract with health plans, monitor their ongoing performance, and develop quality-related information to provide to employees. Data sources range from formal data on whether health plans have met accreditation standards set by entities such as NCQA and the Joint Commission on Accreditation of Healthcare Organizations (JCAHO), how health plans perform on certain HEDIS measures, and surveys of employee satisfaction to more qualitative data gathered through the judgments health benefits staff make when assessing health plans during the selection process. According to the 1997 Watson Wyatt/WBGH survey, the use of health care data is a resource-intensive activity; therefore, most purchasers who do so are large companies. As of 1997, 62 percent of large employers said they use HEDIS data in making purchasing decisions. In contrast, only 7 percent of small employers (those with fewer than 1,000 employees) use HEDIS data.

Two of the purchasers we visited augment quality-related measures with site visits when selecting a health plan. To screen and conduct initial rankings of plans, these two purchasers requested evidence of NCQA and JCAHO accreditation, various HEDIS measures, and patient satisfaction surveys. They also used benefits consulting firms to assist them in selecting quality-related measures and analyzing health plan performance against targets, using HEDIS and other data. Once plans were screened and ranked, benefits staff conducted site visits. For example, one purchaser that we visited used these visits to observe plan operations, touring plan facilities including the customer service and claims processing centers and receiving an overview of the plan's internal quality assurance processes. Site visits can weigh heavily when final decisions on health plan selection are made.
For example, one purchaser ultimately selected a plan that had not received the highest quality ranking based on the analysis of quality-related data. According to the purchaser's staff, observations during site visits changed the ranking of the plans. During site visits at one plan that had received a high ranking, for example, the purchaser's staff found that medical directors at some locations in the state did not always know what medical directors at other locations were doing. At a site visit at another plan, the purchaser's staff began to question the plan's commitment to customer service, given the plan's reaction to the purchaser's concerns about the process for employee selection of a primary care physician. As a result of these site visits, the purchaser did not select either of these plans.

Purchasers also acquire data from other sources, such as regional business coalitions. One purchaser we visited participated in a business coalition to augment quality efforts in areas with small populations of employees. Two other purchasers we visited said they benefited from a regional reporting initiative to collect, analyze, and report audited HEDIS data. One of these purchasers stressed a philosophy of building on information that is already publicly available rather than imposing another reporting requirement on health plans.

As purchasers move into managed care, their first step often is to ensure access to care. Purchasers consider data on access as well as customer service to be particularly important—both to their employees and as indicators of quality. The two purchasers that used quality-related data to select health plans said they had required health plans to submit data on access-related measures. One purchaser, for example, required plans to report on the percentage of employees who would have access to at least two primary care physicians within 8 miles of their residence, the average time to obtain appointments, the percentage of primary care providers who were not accepting new patients, and the timeliness of response to telephone and member inquiries. In this case, the purchaser required a commitment from plans to undertake actions to fill gaps in provider networks.

Several purchasers we visited required plans to continue to submit data on HEDIS measures to ensure the plans gathered and maintained data on quality. One purchaser found that the initial HEDIS data received from plans during the plan selection process may have lacked validity and reliability. After requiring HEDIS data from plans for 3 years and contracting with a consultant to perform a data quality assessment, the purchaser described significant improvement in the plans' ability to report and in the reliability of the data reported. For example, in 1993, only 50 percent of the managed care plans under contract could submit the HEDIS data requested, and purchaser officials described these data as only poor to fair in quality. In 1994, over 90 percent of the plans could provide HEDIS data of "fair quality." By 1995, 100 percent of the plans under contract reported HEDIS data, and the data submitted by all but three plans were judged to be of acceptable quality. The purchaser now plans to make more use of these improved data during performance monitoring.
The four purchasers we visited suggested that their philosophies about their relationships with health plans helped shape the approaches they use to hold plans accountable for providing quality health care and to bring about improvements in plan performance. The four purchasers generally used a combination of collaborative- and compliance-oriented approaches. The collaborative approach, based on a "quality partnering" philosophy, is characterized by a close and informal relationship between purchaser and plan staff, frequent discussions about progress made against performance goals and benchmarks, and jointly developed plans for performance improvement. The compliance approach is characterized by techniques such as the establishment of specific and quantifiable performance standards, periodic assessment of plan performance against the standards, and financial penalties for failure to meet the standards. Each of the four purchasers was able to identify results achieved from both approaches, including projects to streamline member access to specialty care and improvements in employee satisfaction and cost savings. While each purchaser tended to use a blend of both approaches—working collaboratively with plans to improve performance while holding the same plans accountable against contractual standards and penalizing them if they did not meet those standards—all four cited the importance of close interaction with plans to influence changes in behavior and said that close and continuous interaction is easier when dealing with a small number of plans.

In employing a collaborative approach, several purchasers we visited used quality-related data to highlight problems for discussion with health plans. These discussions then triggered actions for improvement at an individual plan or resulted in the dissemination of best practices across plans. Results achieved through this approach included the creation of a provider directory to assist employees in accessing care, the development of joint projects between purchasers and health plans to ease referrals to specialists and to educate employees with diabetes, and the streamlining of procedures for complaints and grievances.

One purchaser, for example, has been working closely with plans to improve in areas related to customer service and referrals to specialists. The purchaser identified problems in these areas using an employee satisfaction survey, employee complaints, and feedback from employee committees established to improve communications between the purchaser and employees. For example, approximately 20 percent of employees surveyed were very dissatisfied with the procedures for changing primary care physicians. The purchaser discussed these problems with the health plan during a site visit. One month later, the plan distributed listings of primary care physicians and specialists, including their hospital affiliations. The plan also committed to meet weekly with the purchaser to continue discussing the purchaser's concerns.

Another purchaser began a joint activity with a health plan after analyzing data from the purchaser's open enrollment survey and a member satisfaction survey. The surveys revealed, among other things, that only 55 percent of employees were satisfied with the ease of referral to a specialist. In response to these concerns and the plan's own satisfaction data, the purchaser and one of its health plans designed a specialist referral project to streamline member access to specialty care.
Telephone surveys and focus groups were conducted with four provider groups and members receiving services from those groups to evaluate the impact of this project. All parties—providers, the purchaser, the plan, and member representatives—are currently meeting with provider groups to design solutions to member and physician concerns.

The first purchaser also addressed the issue of specialty referrals on the basis of data from a satisfaction survey. These data indicated that employees perceived specialty referrals as being too slow and too hard to get. In some cases, members had to wait for a review committee at the health plan to approve a referral to a specialist. The purchaser's analysis of satisfaction survey data, coupled with a health plan's own analysis, prompted the appointment of a task force to develop a referral system. The system developed by the health plan gives primary care physicians the authority to approve referrals on the spot.

This purchaser also collaborated with a health plan in developing a diabetes management program, designed to improve patient quality of life and to reduce emergency room visits. This program was developed in response to the prevalence of diabetes among employees and the purchaser's examination of quality-related data from HEDIS measures. After the purchaser initiated discussions with the health plan as part of its collaborative approach, the plan used its pharmacy database to identify diabetic employees of the purchaser. Employees recruited to participate in the program received educational materials on diabetes as well as the opportunity to participate in classes at various work sites. The plan subsequently surveyed participants to obtain information on their evaluation of materials provided and classes attended as well as outcome measures, such as perception of health status and diabetes-related quality-of-life measures.

The purchasers also used quality-related data to identify and disseminate best practices after holding discussions with a health plan. One of the four purchasers, for example, conducts annual visits at the various sites operated by the plan that serves most of the purchaser's employees nationwide. During these visits, the purchaser and plan managers evaluate plan policies and procedures, review HEDIS data, conduct clinical audits, and analyze satisfaction survey data. At one site, the purchaser's staff identified what they viewed as an exceptional process for handling appeals. After they suggested that this site share its process with other sites managed by the health plan, the process was implemented in other locations.

Purchasers have also achieved results by using quality-related data to assess plan compliance with established contractual standards and to discipline or reward plan performance. After applying financial penalties, one purchaser said it achieved improvements in employee satisfaction. This purchaser also documented that it used HEDIS data as part of the rate negotiation process. Through this process, the purchaser communicated its unwillingness to accept higher rate increases from plans that had not performed as well as others.

Purchasers often held health plans accountable against contractually specified standards to meet the purchasers' goals. For example, one purchaser developed standards to meet its goal of enhancing the value of health care services delivered to its members by the year 2000.
Purchaser standards for measuring performance included timeliness of identification card issuance, distribution of evidence-of-coverage booklets, speed of written responses, average time for a person to answer the telephone, and telephone abandonment rates. In the case of one plan, performance deteriorated over 2 quarters on two specific standards: having a plan representative answer the telephone within 35 seconds after a caller opted to speak with a representative and keeping the telephone abandonment rate below 5 percent. As a result, the purchaser sent a letter to the plan requesting that it explain its poor performance and outline its corrective action. The plan was also asked to send continuing commentary on performance in these areas when submitting its quarterly results on required performance measures. The plan responded by consolidating the management of its member services and by improving the capability of its database, and it has since improved its performance. Other standards developed by purchasers to address areas of particular concern included identification card accuracy, appeal and grievance turnaround times, timeliness of data submissions, and physician turnover.

Two purchasers imposed financial penalties when specific standards described in the contract were not met. These standards described, among other things, specific purchaser expectations related to the plans' ability to maintain or improve access and employee satisfaction. Purchasers also used the rate negotiation process to reward or penalize plans for their performance.

Since its 1994 move into managed care, one purchaser has required the five health plans covering a majority of its enrolled population to meet standards in the areas of appeals and grievances; customer service, including member satisfaction, call abandonment rate, and telephone response rate; and data reporting, including accuracy and timeliness. These standards are specified in a partnership agreement. The success of individual health plans at meeting these standards is subsequently captured in a purchaser scorecard on individual plan performance. A distinctive feature of this scorecard is a subjective, collective assessment by health benefits staff of how well plans respond to purchaser demands. If the standards are not met, this purchaser assesses a financial penalty equal to a designated small percentage of total revenues under the contract. According to the purchaser and health plans, this minor penalty has helped effect changes in the behavior of health plans, since they generally wish to avoid the embarrassment of a penalty.

This purchaser annually evaluates health plan performance with regard to how well the purchaser's staff think the plans respond to these and other concerns. The purchaser's staff base their ratings on their interaction with plan staff during weekly meetings. For example, for 1995, one plan was penalized about $9,000 because the purchaser was dissatisfied with, among other issues, its responsiveness to purchaser concerns. For 1996, the purchaser found the plan to be more responsive, and no penalties were levied for failure to meet this performance standard. However, during the same 1995 to 1996 period, the purchaser's staff continued to be dissatisfied with the plan's commitment to customer service. For 1995, the plan was penalized approximately $6,000 on this standard, and for 1996, it was penalized about $7,000.

Another purchaser also attributed improvements in quality to the use of financial penalties.
As an example, this purchaser established a contract standard requiring plans to maintain an 85-percent satisfaction rate among the purchaser's employees. Data submitted for one plan's midyear review showed that its rate had fallen from 91 to 84 percent. When the plan investigated, it found that the purchaser's employees felt plan providers lacked empathy. The plan instituted training for the providers. Six months later, employee satisfaction had risen to 93 percent.

Another tool used by purchasers to evaluate health plans at the end of a period is the annual contract renewal and rate negotiation process. Purchasers can use quality-related data to reward or penalize plans as part of this process and, as a result, believe that they are improving the value of their health care purchasing decisions. Officials at one purchaser said they were able to improve the quality of health care while holding the line on costs because the purchaser's rates are based in part on rewarding health plans for high performance. Beginning in 1996, another purchaser began targeting for further conversation those plans that proposed rate increases but had low overall HEDIS scores. According to the consultant hired by the purchaser, in the first year this strategy was used, four targeted managed care plans had proposed a 2-percent increase in premiums; after rate negotiations, the premiums decreased by 7 percent. For the next year, 21 targeted plans had proposed a 6-percent increase; a 4-percent decrease was achieved through rate negotiations. The purchaser attributes the premium decreases to the use of HEDIS data in negotiations in addition to the analysis of administrative fees, average charge per member per month, and comparisons with similar plans in the same geographic area and with the plans' regional claims experience. The purchaser is currently studying the relationship between the cost savings achieved through rate negotiations and quality of care.

In addition to taking actions to elicit changes at health plans, purchasers can also use data about quality to help employees make informed choices in selecting plans. Report cards provide the results of cost and quality indicators, as well as other descriptive information, comparing the performance of competing health plans. Some believe that as consumers become better informed and decide not to select health plans of lesser quality, such plans may be motivated to initiate improvements in the quality of care they provide. Research on report cards indicates that these formats are continuing to evolve as a way of presenting quality-related data. We found that of the two purchasers using report cards, one surveyed employees and concluded that employees found the information useful. The other saw only a modest increase in employee selection of the plan with the highest quality ranking in the report card.

One purchaser that disseminated information to employees collaborated with the magazine Health Pages to report information about the quality of health plans the purchaser offered to its employees. Information in this magazine included general descriptions and characteristics of the plans; physician and hospital networks; information about preventive care, such as the rates at which plans administer childhood immunizations or perform cholesterol screenings; and satisfaction ratings. The other purchaser that disseminated information to its employees produced and distributed its own report cards comparing offered health plans during the open enrollment period.
For example, one report card gave prospective enrollees comparative information on HEDIS measures in three areas: preventive health services (childhood immunizations and cholesterol screening), women's preventive health (prenatal care, Pap smears, and mammograms), and care for chronic illness (diabetic eye examinations). The report cards used by each purchaser also contained narrative material explaining the importance of such measures.

The purchasers using report cards to educate employees as part of the enrollment process saw some initial results from their decision to disseminate comparative information. For example, from an employee survey intended to assess the effect of its first report card on enrollment behavior, one purchaser found that 66 percent of those responding viewed the purchaser's report card as very or somewhat important in assisting members in selecting their plan. The purchaser did not use a survey to assess the effect of its second report card; however, it did examine several hundred write-in responses returned on an enclosed tear-out sheet. The most frequent employee recommendation for future report cards was to include more data about the quality of each plan. Members also recommended providing (1) easy-to-read comparisons, such as those found in Consumer Reports; (2) feedback from existing or previous plan members; and (3) information on complaints filed against physicians or hospitals. A subsequent report card reflected the first two recommendations. This report card also contained information based on the most frequently asked questions in such areas as administrative policies, prescription drugs, disenrollment statistics, types of physician specialties offered in the plan, and NCQA accreditation status. The purchaser has not evaluated whether employees moved into health plans on the basis of report card information.

The other purchaser assessed the effect of providing employees with comparative information by examining the extent to which enrollees actually shifted into the plan with the highest quality ranking. The purchaser concluded that a modest shift had occurred. The purchaser subsequently froze enrollment in plans with continuing quality problems and saw a more significant shift as a result.

To achieve greater results from the use of quality-related data in the future, the purchasers we visited see opportunities to rely on such data for selecting and monitoring the performance of health plans, for rewarding or penalizing plans through rate negotiations, and for informing and educating their employees. They have already begun or are planning to use quality-related data to (1) discriminate among and contract with fewer plans to make quality oversight and monitoring efforts more effective; (2) decide whether to renew contracts with plans; (3) translate performance goals into contractual standards; (4) present multiple types of data to health plans through combined formats, known as scorecards; and (5) negotiate rates with plans and provide financial incentives for employees to choose plans with higher quality rankings. The purchasers we visited are also beginning to use quality-related data to focus their efforts more closely on issues of particular concern to them, such as provider and health plan relationships. Despite concerns over existing measures, several purchasers plan to make greater use of HEDIS measures.
A national purchasing coalition has taken another approach: it conducts in-depth reviews of costly and seriously ill cases for its purchaser members as part of ensuring health plan quality.

Purchasers intend to make various changes in how they select health plans in the future. For example, one purchaser first focused its use of quality-related data on selecting plans in areas where its employees are geographically concentrated. This purchaser now plans to begin using such data in selecting plans in areas with fewer employees. Other purchasers would like to use quality-related data to contract with fewer plans, making quality oversight and monitoring efforts more effective and cost efficient, and to eliminate poorly performing health plans that are unable to demonstrate improvement. For some purchasers, quality-related data have not been sufficiently reliable and valid for decisionmaking. Once these concerns are resolved, however, several purchasers may move to use quality-related data as a basis for not renewing contracts with poorly performing plans. Contracting with fewer plans may mean that the purchaser does not need to expend as many resources on monitoring.

Purchasers see numerous ways to increase the use of quality-related data when monitoring health plan performance. Several purchasers we visited plan to develop new contractual standards to hold health plans accountable more effectively. For example, one purchaser plans to translate its existing performance goals into contractual standards. Originally, it had issued these goals with the expectation that health plans would continuously strive to address areas of importance to the purchaser regardless of whether the goals appeared in the contract. By translating performance goals into contractual standards, this purchaser hopes health plan accountability will improve. For another purchaser, if a plan's general satisfaction performance falls below a certain level, the health plan becomes a candidate for quality improvement dialogues and may be selected for more in-depth surveys or reviews of employee satisfaction. This purchaser, which has had multiple measurement initiatives, also plans to consolidate satisfaction, HEDIS, and other measures to create an overall scorecard—an approach already taken by another purchaser we visited.

By assigning weights to various indicators of performance—including financial, clinical, and customer service indicators—purchasers can give health plans an overall quality index score and present the results in a quality assessment instrument, or scorecard. The advantage of this approach is that multiple sources of information can be presented in a comprehensive format, which purchasers can use to discuss health plan performance. Intended to reflect an employer's specific health care benefits strategy, these scorecards can in some cases be associated with rewards for good performance and incentives for improving poor performance. A purchaser other than the four we visited, for example, will use its scorecard to reward plans that have performed well with incentive payments and give plans with low scores the opportunity to improve over a reasonable time frame. However, if such plans do not improve their performance, they could risk losing this company's business.

The four purchasers we visited all recognize the need to incorporate additional performance and quality measures into the annual contract renewal and rate negotiation process.
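The overall quality index score described above is, in essence, a weighted average of individual performance scores. A minimal sketch of the arithmetic, using hypothetical weights and scores rather than any purchaser's actual formula, is:

$$\text{Index} = \sum_{i} w_i s_i, \qquad \sum_{i} w_i = 1,$$

where $s_i$ is a plan's score on indicator $i$ and $w_i$ is the weight the purchaser assigns to that indicator. For example, with weights of 0.40 for customer service, 0.35 for clinical measures, and 0.25 for financial performance, a plan scoring 90, 80, and 70 on those dimensions (on a 100-point scale) would receive an overall index of 0.40(90) + 0.35(80) + 0.25(70) = 81.5.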
As explained by two purchasers, future negotiations of rates with health plans must achieve a balance between cost and quality. For these purchasers, a focus on costs to the exclusion of quality will result in a decline in the overall value of care. One purchaser plans to incorporate the results of its health plan performance scorecard into rate negotiations.

As data improve, purchasers plan to improve their report cards comparing plan quality. Both our study and the Watson Wyatt/WBGH survey found increasing use of these reports by large purchasers. We have reported that many purchasers are moving toward greater use of report cards and that others plan to do so in the near future. According to the Watson Wyatt/WBGH survey, 33 percent of large purchasers give their employees information about accreditation status and 26 percent give their employees HEDIS information. While many purchasers are moving toward using report cards, concerns about performance reports remain, including the reliability and validity of the data, the need for more readily available and standardized information, and the need for greater emphasis on outcome measures.

One purchaser that published a report on plans for employees has not seen the desired movement to the most highly ranked plan. It therefore intends to implement preferred pricing, setting lower employee premiums for more highly ranked plans to encourage employees to move to them. In the meantime, this purchaser froze enrollment in one plan that had continuing quality problems. According to the Watson Wyatt/WBGH survey, 32 percent of the large purchasers who responded offered some type of financial incentive to employees to choose plans deemed by the purchaser to be of exceptional quality. This technique also rewards plans designated by a purchaser as being of high quality because it encourages enrollment in those plans. One purchaser that we did not visit attributes desirable results to preferred pricing. That purchaser ranked managed care plans' performance in eight selected quality categories and disseminated this information as part of a medical plan guide during the annual health care and benefits enrollment process. The purchaser claims significant enrollment increases in top-rated plans and decreases for below-average plans. According to a purchaser official, its efforts to reward workers for selecting good plans led to an almost 13-percent increase in enrollment for these plans.

In considering the next step in the use of quality-related data, some purchasers plan to move from a focus on health plan quality to exploring the use of data related to provider quality. Two purchasers plan initiatives based on the use of satisfaction data to identify problems with specialist physician referrals. One purchaser, for example, will launch an initiative to collect quality-related data on providers and later issue report cards on provider performance. In addition, this purchaser has begun conversations with providers to gain a better understanding of how health plans and providers could relate more effectively. The purchaser hopes to develop an approach that will financially reward health plans for prompting desired changes in provider behavior.

The purchasers that we visited told us that they plan to make more use of data from HEDIS and other measures as they become available. One purchaser noted that NCQA's database of managed health care information, Quality Compass, will be helpful in producing user-friendly reports for employers.
Quality Compass contains performance, accreditation, and patient satisfaction information from more than 300 managed health care plans throughout the United States. In general, purchasers appear to have mixed views on the use of HEDIS measures. We have found that some purchasers are reluctant to disseminate information on HEDIS measures to their employees. Purchasers have expressed concerns over self-reported data that are not independently audited, and a recent study notes that many health plans are struggling to provide data on all of the measures and some fail to produce any data. However, NCQA recently announced that it will certify organizations to perform audits of HEDIS data. This may further improve the quality of data that purchasers receive from health plans.

Some purchasers other than those we visited appear to have made much more extensive use of HEDIS measures. They use the data to select plans, monitor changes in performance over time, and establish benchmarks and minimum standards. According to one study, many work with their health plans to identify best practices and develop strategies for quality improvement. Also, some companies have incorporated performance on HEDIS measures as part of their pricing strategies. The purchasers we visited plan to continue making use of HEDIS and other quality-related measures as they are refined and new ones become available. In contrast, a national purchasing coalition does not rely exclusively on existing quality-related measures but rather uses medical audits to determine whether managed care plans have the systems in place to respond to and appropriately manage patients with potentially serious and costly episodes of illness.

Although the characteristics of the Medicare program have distinguished it from other purchasers and shaped HCFA's major strategies for ensuring quality care for beneficiaries enrolled in HMOs, the passage of the Balanced Budget Act of 1997 makes the experience of other purchasers more relevant to HCFA. This legislation gives the Medicare program authority to contract with new types of managed care plans and calls for the program to provide quality-related and other comparative data to beneficiaries to promote a more informed selection of health plans. It is expected to result in more plans contracting with the program and more beneficiaries enrolling in plans. The legislation also requires managed care plans to take action to improve quality of care. As a result, HCFA will now begin to look more like purchasers that enroll most of their employees in managed care, provide comparative information on health plans to their employees, and use quality-related data to prompt health plans to improve their performance.

The experiences of purchasers we visited have implications for HCFA in three ways: (1) educating beneficiaries as to the meaning of quality-related measures when providing comparative information on health plan quality; (2) interacting with health plans to take action, through either a collaborative- or a compliance-oriented approach, when quality-related data reveal problems with health plan performance; and (3) continually looking for additional opportunities to make use of quality-related data.

Perhaps the most striking difference between HCFA and other purchasers is the sheer magnitude of HCFA's presence in the marketplace.
Although HCFA is the nation's largest purchaser of health care, only a small percentage of Medicare beneficiaries have decided to enroll in HMOs, though this number has been rising sharply in recent years. Nevertheless, the sheer number of Medicare beneficiaries in managed care far exceeds the number of employees who would be enrolled in managed care by a private company. The purchasers we visited now enroll most of their employees in managed care health plans. The largest purchaser we visited serves 1 million people, while HCFA currently serves over 5.5 million beneficiaries in its Medicare managed care program, with potentially many more expected according to CBO estimates.

HCFA also differs from other purchasers in the freedom of choice enjoyed by Medicare beneficiaries, who have had far more latitude in selecting options for health care than others. Much of the privately insured population under age 65 has access only to those health plans selected by an employer, and in many cases the employer chooses just one plan. These individuals can also enroll or disenroll only during a specified "open season." In contrast, Medicare beneficiaries have been able to select any of the Medicare-approved HMOs in their area and may switch plans monthly or choose the fee-for-service program. HMOs have been able to market their plans to Medicare beneficiaries throughout the year, not only during the required 30-day open enrollment period.

The structure of the Medicare program, unlike that of private sector care, is determined by law and regulation. Any eligible health plan that agrees to meet minimum standards may participate in the Medicare program. In contrast, private sector purchasers can engage in "selective contracting" to select plans with lower costs and can use quality-related and other data in their selection decisions. As a result, they can exclude plans as part of the selection process. HCFA, however, does not have the flexibility to refuse to contract with plans that meet its minimum standards. In markets such as Los Angeles, HCFA contracts with 14 health plans; other large purchasers in that area contract with a smaller number of plans and claim that contracting with fewer plans enhances a purchaser's ability to oversee the quality of health plan performance more effectively. While other purchasers have more flexibility than HCFA in selecting plans, Medicare HMO beneficiaries in certain parts of the country can choose among more managed care plans than may be available to employees.

HCFA also differs from other purchasers in how HMO prices are set. Other purchasers can negotiate rates with health plans on the basis of performance measured against preestablished standards. In contrast, Medicare HMO rates are determined by statutory formula, which does not allow the flexibility of negotiation.

Like other purchasers, HCFA monitors HMO performance, but it does so according to law. Its two principal strategies are its HMO monitoring program and review by peer review organizations. The monitoring program implements requirements ranging from financial solvency to grievance procedures. After a Medicare contract is awarded, HCFA regional staff have the responsibility of monitoring HMOs against federal statutory and regulatory requirements as part of on-site biennial reviews. HCFA is also required to contract with peer review organizations, also known as quality improvement organizations, which are physician organizations in each state that review HMO quality of care.
In the past, these organizations attempted to identify instances of poor care through medical record reviews. In recent years, quality improvement organizations and plans have begun to conduct quality improvement projects in different clinical areas. For example, in one project, quality-related measures have been used to collect information, provide feedback to plans on their performance, and design interventions to improve the quality of care in outpatient diabetes management.

A goal of title IV of the Balanced Budget Act is to encourage Medicare beneficiaries to enroll in managed care health plans. CBO estimates suggest that HCFA's presence as a purchaser of managed care will become even more pronounced than at present and that HCFA will need to become more active in its oversight functions. In this regard, the information that purchasers provide to employees and purchasers' experiences in monitoring plans are especially relevant.

The Balanced Budget Act establishes specific time frames for HCFA to meet in providing beneficiaries comparative information on covered benefits, premiums, and the quality and performance of managed care plans to guide their enrollment decisions. Although HCFA had the authority to provide beneficiaries with such information prior to the act, past work by GAO found that HCFA was not doing so and recommended that HCFA help elderly consumers choose among competing Medicare HMOs by distributing comparative information on HMOs. According to HCFA, the agency had already begun to move in this direction prior to the passage of the Balanced Budget Act. The new legislation, however, not only establishes specific time frames for HCFA to meet but also couples the provision of comparative information with an annual open enrollment season. By the year 2002, with limited exceptions, Medicare beneficiaries who enroll in a health plan will be able to enroll in another plan only during periodic coordinated open enrollment seasons, whereas at present they can switch at any time. The reduced ability of Medicare HMO enrollees to freely change plans places an additional responsibility on HCFA for ensuring the quality of care that HMO enrollees receive.

Purchasers have had a variety of experiences in distributing comparative information on health plans to their employees. One purchaser we visited surveyed employees on its report card and concluded that employees found the information to be useful. The purchaser later used feedback from employees to modify the report card and enhance its usefulness. The effect of the report cards, however, is not yet clear. For example, another purchaser found only a modest shift of employees into the plan with the highest quality ranking and decided to encourage changes in employee behavior by freezing enrollment in plans with continuing quality problems. In designing these report cards, both purchasers provided explanatory material so that employees would be better able to understand the meaning of the measures employed.

These purchaser experiences are relevant to HCFA. Not only do the purchasers provide comparative information to their employees, but by using feedback from employee surveys and assessing the impact of such information on employee behavior, they demonstrate that they continually review the value and utility of the information they present. For HCFA, this implies continual monitoring of how consumer information is used.
Other lessons for HCFA relate to the need to educate employees on the meaning of the measures contained in a report card and on how to interpret those measures when choosing among health plans.

In addition to providing quality-related data to employees, purchasers also provided this information to plans and expected the plans to take action on the basis of it. The Balanced Budget Act provides a more explicit listing of required elements for plan quality assurance programs than was previously required. These elements include requirements for plans to take action to improve quality and to assess the effectiveness of such action through systematic follow-up. In relation to this requirement, HCFA is considering how to use standardized measures to prompt quality improvement activities. Elements of this approach have already been present in collaborative projects between quality improvement organizations and health plans. Again, the experience of the four purchasers we visited can inform how HCFA addresses the quality assurance provisions in the Balanced Budget Act. The four purchasers did not simply provide health plans with data from HEDIS measures, satisfaction surveys, and other sources of information. They also met with the plans and took follow-up steps to ensure the plans were acting to improve performance, and they achieved what they described to us as promising results. In addition, some purchasers used the information to penalize plans that were not meeting their standards. For example, one purchaser alerted employees to problems in health plan performance by freezing enrollment in a plan with such problems. Purchasers emphasized the importance of interaction with plans and of blending techniques from two purchasing philosophies—one oriented toward quality partnering and the other toward ensuring compliance with standards set by the purchaser. In the same way that purchasers refine the information provided to employees, they continue to reevaluate the ways in which they provide information to health plans on their performance.

While HCFA can examine how other purchasers use quality-related data in some areas, it would need new legislative authority to implement other purchaser practices, such as using quality-related data to selectively contract and to negotiate rates with health plans.

Tables I.1 through I.4 provide brief descriptions—including number covered, enrollee locations, purchaser goals, and managed care experience—of the large corporate purchasers we visited, as well as the purchasers' quality strategies.

Table I.1: Brief Description of Johnson & Johnson (J&J)

Enrollee locations: California, Florida, New Jersey, and Texas

Goals:
— Reduce the rate of increase in health care costs.
— Offer employees a choice of quality medical plan options.
— Ensure provider choice and access.
— Ensure plan quality and employee satisfaction.

Managed care experience: J&J first offered a managed care point-of-service option to its employees in 1995; it had previously offered a traditional indemnity option along with HMOs. By 1997, nearly 80 percent of the company's enrollees were covered by self-funded managed care health plans, including point-of-service and HMO options. J&J also contracts with fully insured HMOs in areas of the country with fewer employees and maintains its traditional indemnity plan.

Quality strategy: J&J initially focused on selecting and monitoring plans that enrolled a majority of employees. J&J also required health plan account representatives to attend a training and orientation program.
J&J assesses financial penalties when expectations are not met according to a performance scorecard. J&J also analyzes individual complaints to determine whether they are symptoms of an underlying, systemwide problem and demands documentation from health plans on how complaints are resolved. J&J plans to extend its monitoring efforts by collecting more quality-related data on fully insured HMOs, using multiple sources of data in a balanced scorecard format, dropping health plans for poor performance based on data, gauging enrollee satisfaction by reviewing enrollment trends, and developing a methodology for independent verification of quality-related and other performance data.

Table I.2: Brief Description of Federal Express (FedEx)

Enrollee locations: California, Florida, Illinois, Indiana, New Jersey, New York, Tennessee, and Texas

Goal: Improve value and provide employees with a choice of plans and providers.

Managed care experience: In 1982, FedEx offered the first "local HMO" (an HMO serving a limited geographic area) in the New York area. Between 1984 and 1991, the company launched 46 local HMOs. Additional options were rolled out by market; California-based employees were offered a self-funded POS/HMO option in 1993, with the same option extended in 1994 to employees in other locations. By 1997, about 78 percent of employees were enrolled in one of three managed care options: a self-funded POS/HMO, a basic preferred provider organization, or a fully insured local HMO; the remaining employees were still enrolled in the basic indemnity plan.

Quality strategy: FedEx established preferred pricing for its national plan and has worked extensively with a consultant, initially to collect and analyze utilization data. More recently, FedEx has used HEDIS data and provided feedback in "dialogues" with plans that scored poorly, whether because of low HEDIS scores or incomplete data submissions. FedEx was also an early participant in an areawide business coalition that achieved improvements in quality at area hospitals. After achieving success in cost control through managed care, FedEx has reorganized its benefits function as a step toward better measuring and improving quality at health plans. FedEx plans to communicate more quality-related data to employees (with an emphasis on satisfaction data) and, through scorecards combining HEDIS and other types of quality-related data, provide consolidated feedback to plans.

Table I.3: Brief Description of Southern California Edison (SCE)

Enrollee locations: Arizona, California, and Nevada

Goals:
— Manage costs.
— Improve health plan quality and service.
— Promote consumer education.

Managed care experience: SCE introduced several HMOs in the mid-1970s and from 1989 to 1995 administered a self-insured PPO, acting as both purchaser and provider, with on-site doctors and clinics. In 1995, the PPO was replaced with seven plans offering standardized benefits. About 94 percent of the company's enrolled population is now covered by four health plans, three of which offer both a POS and an HMO option.

Quality strategy: SCE emphasizes a quality partnering approach, which it describes as relationship-driven, with continued refinement of its quality strategy over a 3-year period. Health plan site visits to discuss HEDIS data, satisfaction survey results, complaints, report cards, and measurement of plan performance against the company's performance goals are a central tool of this strategy. SCE participates with a business coalition to administer a comprehensive member satisfaction survey and collaborates with other purchasers, health plans, and medical groups to obtain audited HEDIS data.
SCE also uses report cards to present quality-related data to employees and holds meetings with consumer committees (representing employees, retirees, and union representatives) to discuss issues in health plan performance and emerging trends in the management of health care delivery. SCE plans to continue efforts to improve performance and accountability at the health plan, medical group, and provider levels and to communicate health plan and medical group quality indicators to plan participants. SCE may implement incentives to encourage enrollment in plans that are the highest performers.

Table I.4: Brief Description of the California Public Employees' Retirement System (CalPERS)

Goals:
— Ensure the availability of affordable, quality health care for all participants.
— Provide leadership in health care purchasing and quality.

Managed care experience: CalPERS has offered managed care since 1962. In 1989, its fee-for-service plans were consolidated into one PPO; in 1993, a second PPO product was introduced. Currently, 19 percent of covered lives are enrolled in PPOs and 81 percent in fully insured HMOs. In 1992, a standard benefit design was implemented to allow the purchaser and enrollees to make more meaningful comparisons of HMOs. The number of HMOs under contract has dropped from 23 to 10 because of changes in the health care industry, such as mergers; new plans will be added only if they cover previously unserved areas.

Quality strategy: Successful cost containment efforts raised concerns over their impact on quality of care. CalPERS, however, found that early efforts to measure quality were inhibited by a lack of reliable, comparable data and characterizes its approach as "conservative and incremental." CalPERS distributed health plan report cards for the 1995, 1996, and 1997 benefit years: the first presented comparative HEDIS and satisfaction data; the second was expanded to provide survey results on why members changed health plans; and the third added answers to questions frequently asked by enrollees, including NCQA accreditation and other plan information. CalPERS participates with a business coalition in a comprehensive member satisfaction survey and collaborates with other purchasers, health plans, and medical groups to obtain audited HEDIS data. CalPERS plans to expand its use of report cards to include comparative disease management outcomes and complaint monitoring results. As data improve, CalPERS plans to increase its use of contractual quality standards and consider financial incentives for plans to improve.

Nancy Donovan, Evaluator-in-Charge, (202) 512-7136
Dawn Shorey, Senior Evaluator
Pursuant to a congressional request, GAO determined how large purchasers use quality-related data to seek or promote better quality of care and lessons that can be learned from their experiences for the Health Care Financing Administration (HCFA). GAO noted that: (1) after collecting and making use of quality-related data, the purchasers GAO studied reported that in addition to cost savings, they saw improvements in access to care and health plan services, as well as in employee satisfaction with health plan performance; (2) they realized such improvement by identifying opportunities to use quality-related data in selecting health plans, monitoring health plan performance, developing quality improvement initiatives with plans and taking other actions, and providing information on health plans to their employees; (3) while HCFA is a unique purchaser of managed care--by virtue of the size of the Medicare program and the freedom of choice provided to beneficiaries--a number of private purchasers' quality of care strategies could be relevant to HCFA's administration of the Medicare program; and (4) major lessons from large purchasers' experiences relate to the importance of: (a) educating employees as to the meaning of quality-related measures when providing comparative information on health plan quality; (b) using collaborative- and compliance-oriented approaches to achieve improvements in plan performance; and (c) continually looking for additional opportunities to make use of quality-related data, such as developing standards and benchmarks for plan performance.
Alaska encompasses an area of about 365 million acres—more than the combined area of the next three largest states of Texas, California, and Montana. The state is bounded on three sides by water, and its coastline, which stretches about 6,600 miles (excluding island shorelines, bays, and fjords) and accounts for more than half of the entire U.S. coastline, varies from rocky shores, sandy beaches, and high cliffs to river deltas, mud flats, and barrier islands. The coastline constantly changes through wave action, ocean currents, storms, and river deposits and is subject to periodic, yet often severe, erosion. Alaska also has more than 12,000 rivers, including three of the ten largest in the country: the Yukon, Kuskokwim, and Copper Rivers. (See fig. 1.) While these and other rivers provide food, transportation, and recreation for people, as well as habitat for fish and wildlife, their waters also shape the landscape. In particular, ice jams on rivers and flooding of riverbanks during spring breakup change the contour of valleys, wetlands, and human settlements.

Permafrost (permanently frozen subsoil) is found over approximately 80 percent of Alaska. It is deepest and most extensive on the Arctic Coastal Plain and decreases in depth farther south, eventually becoming discontinuous. In northern Alaska, where permafrost is virtually everywhere, most buildings are elevated to minimize the amount of heat transferred to the ground and thus avoid melting the permafrost. However, rising temperatures in recent years have led to widespread thawing of the permafrost, causing serious damage. As permafrost melts, land slumps and erodes, buildings and runways sink, and bulk fuel tank areas are threatened. (See fig. 2.)

Rising temperatures have also affected the thickness, extent, and duration of the sea ice that forms along the western and northern coasts. Loss of sea ice leaves coasts more vulnerable to waves, storm surges, and erosion. When combined with the thawing of permafrost along the coast, loss of sea ice seriously threatens coastal Alaska Native villages. Furthermore, loss of sea ice alters the habitat and accessibility of many of the marine mammals that Alaska Natives depend upon for subsistence. As the ice melts or moves away early, walruses, seals, and polar bears move with it, taking themselves too far away to be hunted.

Federal, state, and local government agencies share responsibility for controlling and responding to flooding and erosion. The U.S. Army Corps of Engineers has responsibility for planning and constructing streambank and shoreline erosion protection and flood control structures under a specific set of requirements. The Department of Agriculture's Natural Resources Conservation Service (NRCS) is responsible for protecting small watersheds. The Continuing Authorities Program, administered by the Corps, and the Watershed Protection and Flood Prevention Program, administered by NRCS, are the principal programs available to prevent flooding and control erosion. Table 1 below lists and describes the five authorities under the Corps' Continuing Authorities Program that address flooding and erosion, while table 2 identifies the main NRCS programs that provide assistance for flooding and erosion.
In addition to the Corps' Continuing Authorities Program, other Corps authorities that may address problems related to flooding and erosion include the following:

Section 22 of the Water Resources Development Act of 1974, which provides authority for the Corps to assist states in the preparation of comprehensive plans for the development, utilization, and conservation of water and related resources of drainage basins.

Section 206 of the Flood Control Act of 1960, which allows the Corps' Flood Plain Management Services Program to provide states and local governments the technical services and planning guidance needed to support effective flood plain management.

A number of other federal agencies, such as the Departments of Transportation, Homeland Security (Federal Emergency Management Agency), and Housing and Urban Development, also have programs that can assist Alaska Native villages in responding to the consequences of flooding by funding tasks such as moving homes, repairing roads and boardwalks, or rebuilding airport runways. In addition to these government agencies, the Denali Commission, created by Congress in 1998, while not directly responsible for responding to flooding and erosion, is charged with addressing crucial needs of rural Alaska communities, particularly isolated Alaska Native villages.

On the state side, Alaska's Division of Emergency Services responds to state disaster declarations dealing with flooding and erosion when local communities request assistance. The Alaska Department of Community and Economic Development helps communities reduce losses and damage from flooding and erosion. The Alaska Department of Transportation and Public Facilities funds work to protect runways from erosion. Local governments such as the North Slope Borough have also funded erosion control and flood protection projects.

Flooding and erosion affect 184 of 213 Alaska Native villages (86 percent) to some extent, according to studies and information provided to us by federal and Alaska state officials. The 184 affected villages consist of coastal and river villages throughout the state. (See fig. 3.) Villages on the coast are affected by flooding and erosion from the sea. For example, when these villages are not protected by sea ice, they are at risk of flooding and erosion from storm surges. In the case of Kivalina, the community has experienced frequent erosion from sea storms, particularly in late summer or fall. These storms can result in a sea level rise of 10 feet or more; when combined with a high tide, the storm surge becomes even greater and can be accompanied by waves containing ice. Communities in low-lying areas along riverbanks or in river deltas are susceptible to flooding and erosion caused by ice jams, snow and glacial melts, rising sea levels, and heavy rainfall.

Flooding and erosion are long-standing problems in Alaska. In Bethel, Unalakleet, and Shishmaref, for example, these problems have been well documented dating back to the 1930s, 1940s, and 1950s, respectively. The state has made several efforts to identify communities affected by flooding and erosion over the past 30 years. In 1982, a state contractor developed a list of Alaska communities affected by flooding and erosion. This list identified 169 of the 213 Alaska Native villages—virtually the same villages identified by the federal and state officials that we consulted in 2003.
In addition, the state appointed an Erosion Control Task Force in 1983 to investigate and inventory potential erosion problems and to prioritize erosion sites by severity and need. In its January 1984 final report, the task force identified a total of 30 priority communities with erosion problems. Of these 30 communities, 28 are Alaska Native villages. Federal and state officials that we spoke with in 2003 also identified almost all of the Native communities given priority in the 1984 report as still needing assistance. While most Alaska Native villages are affected to some extent by flooding and erosion, quantifiable data are not available to fully assess the severity of the problem. Federal and Alaska state agency officials that we contacted could agree on which three or four villages experience the most flooding and erosion, but they could not rank flooding and erosion in the remaining villages by high, medium, or low severity. These agency officials said that determining the extent to which villages have been affected by flooding and erosion is difficult because Alaska has significant data gaps. These gaps occur because remote locations lack monitoring equipment. The officials noted that about 400 to 500 gauging stations would have to be added in Alaska to attain the same level of gauging as in the Pacific Northwest. While flooding and erosion have been documented in Alaska for decades, various studies and reports indicate that coastal villages in Alaska are becoming more susceptible. This increasing susceptibility is due in part to rising temperatures that cause protective shore ice to form later in the year, leaving the villages vulnerable to storms. According to the Alaska Climate Research Center, mean annual temperatures rose over the period from 1971 to 2000, although changes varied from one climate zone to another and depended on the temperature station selected. For example, Barrow experienced an average temperature increase of 4.16 degrees Fahrenheit for the 30-year period from 1971 to 2000, while Bethel experienced an increase of 3.08 degrees Fahrenheit for the same time period. Alaska Native villages have difficulty qualifying for assistance under the key federal flooding and erosion programs, largely because of program requirements that project costs not exceed economic benefits or because of cost-sharing requirements. For example, according to the Corps’ guidelines for evaluating water resource projects, the Corps generally cannot undertake a project whose costs exceed its expected economic benefits, as those benefits are currently defined. With few exceptions, Alaska Native villages’ requests for the Corps’ assistance are denied because of the Corps’ determination that project costs outweigh the expected economic benefits. Alaska Native villages have difficulty meeting the cost/benefit requirement because many are not developed to the extent that the value of their infrastructure is high enough to equal the cost of a proposed erosion or flood control project. For example, the Alaska Native village of Kongiganak, with a population of about 360 people, experiences severe erosion from the Kongnignanohk River. However, the Corps decided not to fund an erosion project for this village because the cost of the project exceeded the expected benefits and because many of the threatened structures are private property, which is not eligible for protection under a Section 14 Emergency Streambank Protection project.
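The screen that small villages fail can be illustrated with simple arithmetic. The sketch below is illustrative only: the dollar figures are hypothetical, and the Corps’ actual evaluation guidelines weigh benefits and costs in far more detail.

```python
# Illustrative benefit/cost screen, loosely modeled on the requirement
# that a project's expected economic benefits equal or exceed its costs.
# All dollar figures are hypothetical; they are chosen only to show why
# a small village with modest infrastructure fails the test when remote
# construction drives project costs up.

def passes_cost_benefit_test(project_cost: float, expected_benefits: float) -> bool:
    """Return True if expected economic benefits cover project costs."""
    return expected_benefits >= project_cost

project_cost = 10_000_000           # hypothetical cost of shoreline protection
infrastructure_at_risk = 3_500_000  # hypothetical value of threatened structures

ratio = infrastructure_at_risk / project_cost
print(f"benefit/cost ratio: {ratio:.2f}")  # 0.35
print("qualifies:", passes_cost_benefit_test(project_cost, infrastructure_at_risk))  # False
```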
Meeting the cost/benefit requirement is especially difficult for remote Alaska Native villages because the cost of construction is high—largely because labor, equipment, and materials have to be brought in from distant locations. Even villages that do meet the Corps’ cost/benefit criteria may still not receive assistance if they cannot provide or find sufficient funding to meet the cost-share requirements for the project. By law, the Corps generally requires local communities to fund between 25 and 50 percent of project planning and construction costs for flood prevention and erosion control projects. According to village leaders we spoke to, they may need to pay hundreds of thousands of dollars or more under these cost-share requirements to fund their portion of a project—funding many of them do not have. NRCS has three key programs that can provide assistance to villages to protect against flooding and erosion. One program—the Watershed Protection and Flood Prevention Program—has a cost/benefit requirement similar to the Corps’ program and, as a result, few projects for Alaska Native villages have been funded under this program. In contrast, some villages have been able to qualify for assistance from NRCS’s two other programs—the Emergency Watershed Protection Program and the Conservation Technical Assistance Program. For example, under its Emergency Watershed Protection Program, NRCS allows consideration of additional factors in the cost/benefit analysis. Specifically, NRCS considers social or environmental factors when calculating the potential benefits of a proposed project, and the importance of protecting the subsistence lifestyle of an Alaska Native village can be included as one of these factors. In addition, while NRCS encourages cost sharing by local communities, this requirement can be waived when the local community cannot afford to pay for a project under this program. Such was the case in Unalakleet, where the community had petitioned federal and state agencies to fund its local cost-share of an erosion protection project and was not successful. Eventually, NRCS waived the cost-share requirement for the village and covered the total cost of the project itself. (See fig. 4.) An NRCS official in Alaska estimated that about 25 villages requested assistance under this program during the last 5 years; of these 25 villages, 6 received some assistance from NRCS and 19 were turned down—mostly because there was no feasible solution or because the problems they wished to address were recurring ones and therefore ineligible for the program. Unlike any of the Corps’ or NRCS’s other programs, NRCS’s Conservation Technical Assistance Program does not require any cost/benefit analysis for projects to qualify for assistance. Another NRCS official in Alaska estimated that during the last 2 years, NRCS provided assistance to about 25 villages under this program. The program is designed to help communities and individuals solve natural resource problems, improve the health of the watershed, reduce erosion, improve air and water quality, or maintain or improve wetlands and habitat. The technical assistance provided can range from advice or consultation to developing planning, design, and/or engineering documents. The program does not fund construction or implementation of projects. Four of the nine villages we reviewed are in imminent danger from flooding and erosion and are making plans to relocate, while the remaining five are taking other actions.
Of the four villages relocating, Kivalina, Newtok, and Shishmaref are working with relevant federal agencies to locate suitable new sites, while Koyukuk is just beginning the planning process for relocation. Because of the high cost of construction in remote parts of Alaska, the cost of relocation for these villages is expected to be high. For example, the Corps estimates that the cost to relocate Kivalina could range from $100 million for design and construction of infrastructure, including a gravel pad, at one site to $400 million for just the cost of building a gravel pad at another site. Cost estimates for relocating the other three villages are not yet available. Of the five villages not currently planning to relocate, Barrow, Kaktovik, Point Hope, and Unalakleet each have studies under way that target specific infrastructure that is vulnerable to flooding and erosion. The fifth village, Bethel, is planning to repair and extend an existing seawall to protect the village’s dock from river erosion. In fiscal year 2003, the Senate Committee on Appropriations directed the Corps to analyze the costs associated with continued erosion of six of these nine villages and the potential costs of relocating them, and to identify the expected timeline for complete failure of usable land in each community. Table 3 summarizes the status of the nine villages’ efforts to respond to their specific flooding and erosion problems. The unique circumstances of Alaska Native villages and their inability to qualify for assistance under a variety of federal flooding and erosion programs may require special measures to ensure that the villages receive certain needed services. Alaska Native villages, which are predominantly remote and small, often face barriers not commonly found in other areas of the United States, such as harsh climate, limited access and infrastructure, high fuel and shipping prices, short construction seasons, and ice-rich permafrost soils. In addition, many of the federal programs to prevent and control flooding and erosion are not a good fit for the Alaska Native villages because of the requirement that project costs not exceed the economic benefits. Federal and Alaska state officials and Alaska Native village representatives that we spoke with identified several alternatives for Congress that could help mitigate the barriers that villages face in obtaining federal services. These alternatives include (1) expanding the role of the Denali Commission to include responsibilities for managing a new flooding and erosion assistance program, (2) directing the Corps and NRCS to include social and environmental factors in their cost/benefit analyses for projects requested by Alaska Native villages, and (3) waiving the federal cost-sharing requirement for flooding and erosion projects for Alaska Native villages. In addition, we identified a fourth alternative—authorizing the bundling of funds from various agencies to address flooding and erosion problems in these villages. Each of these alternatives has the potential to increase the level of federal services to Alaska Native villages and can be considered individually or in any combination. However, adopting some of these alternatives will require consideration of a number of important factors, including the potential to set a precedent for other communities and programs as well as resulting budgetary implications.
While we did not determine the cost or the national policy implications associated with any of the alternatives, these are important considerations when determining appropriate federal action. In conclusion, Alaska Native villages are being increasingly affected by flooding and erosion problems that are worsened at least to some degree by climatological changes, and they must nonetheless find ways to respond to these problems. Many Alaska Native villages, which are small, remote, and dependent on a subsistence lifestyle, lack the resources to address the problems on their own. Yet villages have difficulty obtaining assistance under several federal programs because, as benefits are currently defined, the economic costs of proposed flooding and erosion control projects exceed the expected economic benefits. As a result, many private homes and other infrastructure continue to be threatened. Given the unique circumstances of Alaska Native villages, special measures may be required to ensure that these communities receive the assistance they need to respond to problems that could continue to increase. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. For further information, please contact Anu Mittal at (202) 512-3841. Individuals making key contributions to this testimony and the report on which it was based were José Alfredo Gómez, Jeffery Malcolm, Cynthia Norris, Amy Webbink, and Judith Williams. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Approximately 6,600 miles of Alaska's coastline and many of the low-lying areas along the state's rivers are subject to severe flooding and erosion. Most of Alaska's Native villages are located on the coast or on riverbanks. In addition to the many federal and Alaska state agencies that respond to flooding and erosion, Congress established the Denali Commission in 1998 to, among other things, provide economic development services and meet infrastructure needs in rural Alaska communities. This testimony is based on GAO's report, Alaska Native Villages: Most Are Affected by Flooding and Erosion, but Few Qualify for Federal Assistance (GAO-04-142, December 12, 2003). Specifically, GAO identified (1) the number of Alaska Native villages affected by flooding and erosion, (2) the extent to which federal assistance has been provided to those villages, (3) the efforts of nine villages to respond to flooding and erosion, and (4) alternatives that Congress may wish to consider when providing assistance for flooding and erosion. Flooding and erosion affect 184 out of 213, or 86 percent, of Alaska Native villages to some extent. While many of the problems are long-standing, various studies indicate that coastal villages are becoming more susceptible to flooding and erosion caused in part by rising temperatures. Small and remote Alaska Native villages have generally not received federal assistance under federal flooding and erosion programs largely because they do not meet program eligibility criteria. Even villages that do meet the eligibility criteria may still not receive assistance if they cannot meet the cost-share requirements for the project. Of the nine villages that GAO reviewed, four—Kivalina, Koyukuk, Newtok, and Shishmaref—are in imminent danger from flooding and erosion and are planning to relocate, while the remaining five are in various stages of responding to these problems. Costs for relocating are expected to be high. GAO, other federal and state officials, and village representatives identified alternatives that could increase service delivery for Alaska Native villages. These alternatives include (1) expanding the role of the Denali Commission; (2) directing federal agencies to consider social and environmental factors in analyzing project costs and benefits; (3) waiving the federal cost-sharing requirement for these projects; and (4) authorizing the "bundling" of funds from various federal agencies. Although, in commenting on GAO's report, the Denali Commission and two federal agencies raised questions about expanding the commission's role, GAO still believes that such an expansion remains a possible alternative for helping to mitigate the barriers that villages face in obtaining federal services.
DOD obtains nearly all of its clearance investigations through the Office of Personnel Management (OPM), which is currently responsible for 90 percent of the personnel security clearance investigations for the federal government. DOD retained responsibility for adjudicating clearances of servicemembers, DOD civilians, and industry personnel. Two DOD offices are responsible for adjudicating cases involving industry personnel. The Defense Industrial Security Clearance Office (DISCO) within the Defense Security Service (DSS) adjudicates cases that contain only favorable information or minor issues regarding security concerns (e.g., some overseas travel by the individual). The Defense Office of Hearings and Appeals (DOHA) within the Defense Legal Agency adjudicates cases that contain major security issues (e.g., an individual’s unexplained affluence or criminal history), which could result in the denial of clearance eligibility and possibly lead to an appeal. Like servicemembers and federal workers, industry personnel must obtain a security clearance to gain access to classified information, which is categorized into three levels: top secret, secret, and confidential. The level of classification denotes the degree of protection required for information and the amount of damage that unauthorized disclosure could reasonably be expected to cause to national security. Unauthorized disclosure could reasonably be expected to cause “exceptionally grave damage” to national security for top secret information, “serious damage” for secret information, and “damage” for confidential information. DOD provided information on each issue specified by the mandate, but certain important information on funding, processing times, and quality was limited or absent. DOD divided its nine-page report into five sections, corresponding to the five sections of the law. DOD began with a discussion of the personnel security clearance investigation funding requirements—$178 million for fiscal year 2007 and approximately $300 million for fiscal year 2008—and indicated that funds exist to cover the fiscal year 2007 projected costs. In section two, DOD reported the size of the investigative backlog by showing that 21,817 (48 percent) of the applications for clearance investigations for industry personnel that were still pending as of July 14, 2007, were more than 90 days old. In section three, DOD reported OPM statistics that showed the average number of days required to complete investigations as of May 2007. An initial top secret clearance took an average of 211 days; top secret renewals, an average of 334 days; and all secret/confidential initials and renewals, an average of 127 days. The fourth section of DOD’s report highlighted seven areas that DOD characterized as progress toward implementing planned changes in the process. These areas included timeliness-improvement actions that were DOD-specific (e.g., adding a capability to electronically submit the applicant’s form authorizing the release of medical information) and governmentwide (e.g., submitting all requests for clearances using OPM’s Electronic Questionnaires for Investigations Processing). In the fifth section, the Under Secretary of Defense for Intelligence certified that the department had taken actions to improve the industry personnel clearance program during the 12 months preceding the report date. DOD supported this finding by including a table showing that the monthly average number of completed industry investigations increased from 13,227 in July 2006 to 16,495 in July 2007.
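The certification’s supporting figures imply roughly a one-quarter increase in monthly output, which is easy to verify. A minimal sketch follows; only the two monthly averages come from DOD’s report, and the percentage calculation is added here.

```python
# Quick check of the growth DOD cited as evidence of progress: the
# monthly average number of completed industry investigations rose from
# 13,227 in July 2006 to 16,495 in July 2007. Both figures are from
# DOD's report; only the percentage calculation is added here.

july_2006 = 13_227
july_2007 = 16_495

growth = (july_2007 - july_2006) / july_2006
print(f"increase in monthly completions: {growth:.1%}")  # about 24.7%
```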
Certain important information on three of the mandated issues—the program funding requirements, the average processing time, and quality in the clearance process—was limited or absent. First, DOD reported program-funding requirements covering less than 2 years. DOD reported an annualized projected cost of $178.2 million for fiscal year 2007, a projected funding requirement of approximately $300 million for fiscal year 2008, and a department statement indicating that it was able to fund the industry personnel security clearance program for the remainder of fiscal year 2007. The mandate directed DOD to report its funding requirements for the program and the Secretary of Defense’s ability to fulfill them. While the report described DOD’s immediate needs and ability to fund those needs, it did not include information on (1) the funding requirements for fiscal year 2009 and beyond, even though the survey used to develop the funding requirements asked contractors about their clearance needs through 2010; and (2) the tens of millions of dollars that the DSS Director testified to Congress in May 2007 were necessary to maintain the infrastructure supporting the industry security clearance program. The Director of Security in the Office of the Under Secretary of Defense for Intelligence (OUSD(I)) and the DSS Director told us that the department did not include funding requirements beyond fiscal year 2008 because of concerns about the accuracy of the data used to identify the requirements. They told us that the funding requirements of the program depend on the estimates of the future number of investigations that DSS will obtain from OPM, which DSS determines using its annual survey. They, as well as the report, indicated that because projections made farther into the future are more likely to be inaccurate, DOD decided not to include funding projections beyond 1 future year in the report. The report also stated that the data used to construct the projected funding requirements are available through fiscal year 2010, but the report did not include that information. DOD regularly submits longer-term financial planning documents to Congress. Specifically, the future years defense program (FYDP), which is submitted annually to Congress, contains detailed data projections for the budget year in which funds are being requested and at least the 4 succeeding years. The FYDP is a long-term capital plan and, as such, provides DOD and Congress with a tool for looking at future funding needs beyond immediate budget priorities. Second, DOD reported the average investigation times cited earlier but did not include the times for other specific phases of the end-to-end clearance process. DOD reported the average number of days it took to complete investigations for all clearances closed between May 2006 and May 2007 and the average number of days to process DOD industry clearances from end to end for all cases adjudicated during the first 6 months of fiscal year 2007. The mandate directed DOD to report the length of the average delay for an individual case pending in the personnel security clearance investigation process. The Intelligence Reform and Terrorism Prevention Act of 2004 requires the processing of at least 80 percent of clearances to be completed within an average of 120 days, including no more than 90 days for the investigation.
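One way to see how such a requirement might be monitored is with a short sketch. The 120-day and 90-day thresholds come from the act as described above; the case durations below are hypothetical, and treating the requirement as "the fastest 80 percent of cases must average no more than 120 days" is only one reading of the statutory language, used here for illustration.

```python
# Minimal sketch of monitoring the Intelligence Reform and Terrorism
# Prevention Act timeliness requirement. Thresholds (120 days overall,
# 90 days for the investigation) are from the act; the case durations
# below are hypothetical, and the "fastest 80 percent" interpretation
# is one reading of the statutory language.

# (total_days, investigation_days) for a handful of hypothetical cases
cases = [(95, 70), (130, 85), (110, 80), (240, 190), (100, 75)]

fastest = sorted(cases)[: int(len(cases) * 0.8)]   # fastest 80 percent
avg_total = sum(t for t, _ in fastest) / len(fastest)
avg_invest = sum(i for _, i in fastest) / len(fastest)

print(f"average total time (fastest 80%): {avg_total:.0f} days "
      f"(goal: 120) -> {'met' if avg_total <= 120 else 'not met'}")
print(f"average investigation time:       {avg_invest:.0f} days "
      f"(goal: 90)  -> {'met' if avg_invest <= 90 else 'not met'}")
```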
Although it did not provide times for other clearance phases and was not mandated to do so, DOD’s report stated that a joint study conducted by OPM, DSS, and industry identified average times to complete six discrete phases—including the investigation, the time needed to mail investigation reports from OPM to a DOD adjudication facility, and the adjudication. Our September 2006 report showed that longer delays are found in some phases of the process than in others (e.g., our analysis of 2,259 cases showed that the application-submission phase took an average of 111 days to complete instead of the goal of 14 days) and suggested that monitoring each of the phases would help DOD to identify where actions are needed to improve timeliness. The OUSD(I) Director of Security and the DSS Director told us that because the DOD report included both the average time to complete an investigation and the time to process the clearance from start to finish, the department did not include the times to process the additional discrete phases of the clearance process. While the information included in the report provides visibility into the processing times for the investigation and for the entire process, monitoring and reporting times for each phase would help DOD and Congress to identify where actions are most needed to improve timeliness. Third, DOD documented improvements in the process but was largely silent regarding quality in the clearance processes. While DOD described several changes to the process it characterized as progress, it provided little information on measures of quality used to assess the clearance processes or procedures to promote quality during clearance investigation and adjudication processes. Specifically, the DOD report’s section describing improvements noted that DSS, DOD’s adjudicative community, and OPM are gathering and analyzing measures of quality for the clearance processes that could be used to provide the national security community with a better product. However, the DOD report did not include any of those measures. When we asked the OUSD(I) Director of Security why the measures of quality were not included, he said the department did not include them because stakeholders in the clearance processes have not agreed on how to measure quality. In September 2006, we identified several areas where OPM-supplied investigative reports and DOD adjudicative data were incomplete. We noted that while eliminating delays in the clearance process is an important goal, the government cannot afford to achieve that goal by providing reports of investigations and adjudications that are incomplete in key areas. We additionally noted that the lack of full reciprocity of clearances—when a security clearance granted by one government agency is not accepted by another agency—is an outgrowth of agencies’ concerns that other agencies may have granted clearances based on inadequate investigations and adjudications. In deciding not to provide certain important information in its first annual report to Congress, DOD has limited the information available to Congress as it oversees the effectiveness of DOD’s industry personnel security clearance processes. Specifically, by not including funding requirements for 2009 and beyond, DOD left out information Congress could use in making longer-term appropriation and authorization decisions for this program.
In addition, by not including the times to complete phases of the clearance process other than the investigation, DOD makes it less apparent to Congress where the most significant timeliness gains can be made relative to the costs of improving the processes. Finally, by not including measures of quality in the clearance processes, DOD has only partially supported its assertion that it has made improvements to the clearance processes. DOD reported that OPM conducted 81,495 investigations for the department in fiscal year 2005 and 138,769 in fiscal year 2006 and that DOD staff granted clearance eligibility to 113,408 industry personnel in fiscal year 2005 and 144,608 industry personnel in fiscal year 2006. However, we are unable to report the numbers and unit costs of investigations and adjudications for industry personnel for fiscal years 2000 through 2004, because DOD either was not able to provide data or supplied data that we found to be insufficiently reliable to report. Reliable information for fiscal years 2000 through 2004 was not available because of factors such as the abandonment of an electronic database for recording investigative and adjudicative information. Although some limitations are present for the numbers and costs data for industry personnel for fiscal years 2005 and 2006, our assessments show that they are sufficiently reliable for us to report them, along with explicit statements about their limitations. Our assessments of data on the numbers and costs of investigations and adjudications for industry personnel for fiscal years 2000 through 2004 showed that DOD-provided information was not sufficiently reliable for us to report. The shaded portion of table 1 summarizes underlying factors that contributed to DOD’s inability to provide us with reliable data. (In the next section, we report information provided to us by DOD on the numbers and costs of investigations and adjudications for fiscal years 2005 and 2006.) When we assessed the reliability of DOD-provided information on the numbers of investigations for industry personnel, we found discrepancies in the fiscal years 2000 through 2004 summary records kept by two DOD offices: DSS and the Office of the Under Secretary of Defense (Comptroller) (OUSD(C)). The discrepancies in the annual numbers of investigations ranged from 3 to 48 percent. Relative to the numbers found in DSS records, OUSD(C) records showed that 3 percent more investigations for secret clearances had been completed in fiscal year 2001 and that 48 percent fewer investigations for initial top secret clearances had been completed in fiscal year 2000. The original source of data for both offices’ records was DOD’s Case Control Management System (CCMS), which had formerly been used to electronically store data on DOD personnel security clearance investigations. DOD stopped maintaining CCMS in conjunction with the department’s transfer of DSS’s investigative functions and personnel to OPM in February 2005. DOD estimated that it could save $100 million over 5 years in costs associated with maintaining and updating CCMS by instead using OPM’s Personnel Investigations Processing System for electronically storing investigations data. Because CCMS is no longer available, we were unable to determine which—if either—office’s data were sufficiently reliable for the purposes of this report.
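The size of these discrepancies follows from a simple comparison against the DSS baseline. A minimal sketch appears below; the counts are hypothetical, and only the arithmetic mirrors the comparison described above.

```python
# Sketch of the discrepancy calculation between the two offices'
# summary records, measured relative to the DSS baseline. The counts
# below are hypothetical; only the arithmetic mirrors the comparison.

def discrepancy(dss_count: int, ousdc_count: int) -> float:
    """Percent difference of the OUSD(C) count relative to the DSS count."""
    return (ousdc_count - dss_count) / dss_count * 100

# e.g., OUSD(C) showing slightly more secret investigations than DSS in
# one year, and far fewer initial top secret investigations in another:
print(f"{discrepancy(100_000, 103_000):+.0f}%")  # +3%
print(f"{discrepancy(20_000, 10_400):+.0f}%")    # -48%
```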
While DOD no longer has access to the CCMS software tool needed to aggregate the associated personnel security clearance data, individual files on industry personnel have been archived and are available for access (e.g., when someone renews a clearance). We are similarly unable to report the number of adjudications for fiscal years 2000 through 2004, because DOD could not provide information that was sufficiently reliable for the purposes of this report. Sufficiently reliable data were not available for this period because the Joint Personnel Adjudications System (JPAS) did not become the official DOD adjudication database until February 2005. In the prior years, DSS had stored adjudication-related information on industry personnel in CCMS— which is no longer operational. A DSS official indicated that JPAS provides pre-2005 adjudication information inaccurately because of problems DOD experienced when transitioning from CCMS to JPAS. We found cost data on industry personnel clearances for fiscal years 2000 through 2004 to be insufficiently reliable, as evidenced by the inconsistency of the information that we obtained from DSS and OUSD(C). At the most extreme, the DSS records show that the cost for an investigation of a secret clearance in fiscal year 2004 was 486 percent higher than the rate reported in OUSD(C) records. DOD’s ability to provide us with more reliable information was hampered by two factors. First, when DOD transferred its investigative function and 1,800 authorized positions to OPM in February 2005, the transfer resulted in lost or misplaced records and reduced institutional knowledge in DSS’s financial management office. The DSS Director told us that DSS record keeping has not been a “strong suit” of the agency in the past. Second, DSS leadership has frequently changed over the past 5 years. For example, DSS had four acting directors in the 4 years before getting its current permanent Director, and it had nine comptrollers during the same period. The unit cost for adjudications for fiscal years 2000 through 2004 for industry personnel clearances could not be computed, because the total cost of all adjudications and the number of adjudications—key variables in computing unit cost—were either unavailable or unreliable. For example, DSS officials told us that the budget records for this period did not differentiate the portion of DSS’s budget used to fund DISCO, which adjudicates the majority of DOD’s clearances for industry. Additionally, officials from DOHA, which adjudicates some industry cases, told us that they similarly could not accurately identify a unit cost for adjudications. DOHA officials told us that because their adjudicators conduct additional work besides security clearance work and those costs are not accounted for separately, estimates of the unit cost of the adjudicative work they perform would be speculative. Finally, as we discussed above, the data that DOD provided on the number of adjudications for 2000 through 2004 were not sufficiently reliable for the purposes of this audit. DOD reported that OPM conducted 81,495 investigations of industry personnel for the department in fiscal year 2005 and 138,769 such investigations in 2006 (see table 2). The difference in the numbers of investigations for the 2 years is due largely to the fact that DOD could not provide reliable information on the number of investigations that DSS completed before the February transfer of investigative staff and functions to OPM. 
In both years, OPM provided DOD with more investigations for secret or confidential clearances than for top secret clearances. Historically, more secret/confidential clearances than top secret clearances have been required and performed, and the data presented in table 2 are consistent with this trend. Using OPM-provided data, DSS determined that it had granted clearance eligibility to 113,408 industry personnel in fiscal year 2005 and 144,608 industry personnel in fiscal year 2006 (see table 3). The number of clearances granted in a year may not match the number of investigations conducted in that year because of the time that elapses between completion of the investigation and completion of the adjudication. For the 2 most recent of the 7 fiscal years specified in the mandate, the total estimated unit cost for the entire clearance process varied from $290 for an initial or a renewal of a secret/confidential clearance to $3,850 for an initial top secret clearance determined with a standard investigation (see table 4). The lower half of table 4 shows that investigations that are given higher priorities cost more. Regardless of whether the clearance was based on a standard or a priority investigation, the primary reason for the difference in costs is the effort required to complete the different types of investigations. For example, our September 2006 report noted that OPM estimated that approximately 60 total staff hours are needed for each investigation for an initial top secret clearance and 6 total staff hours are needed for each investigation to support a secret or confidential clearance. Another factor that causes variability in the cost of the clearance determination is whether investigators can use a phased reinvestigation. Starting in fiscal year 2006, the President authorized the use of phased reinvestigations, which do not require some types of information to be gathered during the renewal of a top secret clearance unless there are potentially derogatory issues found in earlier portions of the reinvestigation. While the information in table 4 provides the estimated unit costs of investigations and adjudications and estimated total costs, several considerations suggest that the actual unit costs would be somewhat different from those shown in the table if OPM and DOD were to account for all of the costs. For example, the fixed costs for the investigations do not include any additional costs that DOD might incur should adverse information be revealed that requires an additional subject interview to address this information. In these instances, OPM charges DOD for an additional interview to resolve the issue before the case is adjudicated. In addition, if DOD sends an investigation report back to OPM with a request for additional interviews in order to reconcile conflicting information, there may be additional fees. DOD officials stated that cases requiring subsequent resolution of multiple issues could result in additional charges to address each issue. These special interviews cost $515 in 2005 and $430 in 2006. DOD was unable to provide data identifying the number of investigations that included these special interviews. Conversely, the 2006 investigation costs do not reflect a $7 million refund that OPM made to DOD in September 2006; the refund pertained to a surcharge that DOD had paid to OPM covering all DOD investigations. In fiscal years 2005 and 2006, DOD paid OPM a surcharge in addition to the base rate OPM charged DOD to conduct investigations.
The surcharge amounts were 25 percent in fiscal year 2005 and 19 percent in fiscal year 2006. DOD and OPM agreed to this surcharge in a memorandum of understanding that defined the terms of the transfer of the investigative functions and personnel from DSS to OPM. This surcharge was intended to offset any potential operating losses that OPM incurred in taking over the investigative function from DSS. However, disagreements between DOD and OPM about the amount of the surcharge led to mediation between the agencies in September 2006 and resulted in a retroactive reduction of the surcharge to 14 percent for the third quarter of fiscal year 2006 and an elimination of the surcharge for fiscal year 2007 and beyond. The unit costs of the adjudications—$100 in fiscal year 2005 and $90 in fiscal year 2006—are approximations that must be viewed with some caution. DOD officials acknowledged that while they provided a single value for the unit cost of both top secret and secret/confidential adjudications, the actual time to adjudicate top secret clearance-eligibility determinations is roughly twice that required to adjudicate secret/confidential clearance-eligibility determinations. Furthermore, the DOD-supplied unit cost estimate for adjudications does not account for the cost associated with the additional work required to adjudicate derogatory information in some of the cases that are sent to DOHA. Prior to 2005, DSS had not differentiated the adjudication portion of its budget from other functions in its budget. Changes are occurring in the way DOD estimates its future investigation needs, as well as in its plans and funding for modifying the personnel security clearance program for industry personnel. The procedures for estimating the numbers of clearance investigations needed annually for industry personnel are being revised in an attempt to improve the accuracy of those estimates. Similarly, DOD is not pursuing DOD-specific planning for reducing backlogs and delays or steps to adequately fund its clearance process but instead is participating in governmentwide planning efforts to improve clearance processes. DOD is changing the methods it uses to estimate the numbers of security clearance investigations it will need for industry personnel in the future in an effort to improve the accuracy of those estimates. Since 2001, DOD has conducted an annual survey of contractors performing classified work for the government in order to estimate future clearance-investigation needs for industry personnel. In November 2005, the Office of Management and Budget (OMB) reported a governmentwide goal whereby agencies have been asked to work toward refining their projections to be within 5 percent of the numbers of actual requests for investigation. However, DOD has had difficulties in projecting its departmentwide clearance needs accurately. For the first half of fiscal year 2006, OPM reported that DOD had exceeded its departmentwide projection by 59 percent. The negative effects of such inaccurate projections include impediments to workload planning and funding. We have addressed the impact that inaccurate projections have on workload planning in our prior work. In 2004, we recommended that OUSD(I) improve the projections of clearance requirements for industry personnel—for both the numbers and types of clearances—by working with DOD components, industry contractors, and the acquisition community to identify obstacles and implement steps to overcome them.
At that time, DOD officials attributed inaccurate projections to (1) the fact that the voluntary annual survey was answered by only a small fraction of the more than 10,000 cleared contractor facilities, (2) the use of some industry personnel on more than one contract and often for different agencies, (3) the movement of employees from one company to another, and (4) unanticipated world events such as the September 11, 2001, terrorist attacks. In its efforts to improve its estimates of future clearance investigation needs, DSS has made recent changes to the methods it uses to develop these estimates, and it is conducting research that may change these methods further. First, starting in 2006, DSS made its annual survey accessible through the Internet. Second, DSS field staff made a more concerted effort to actively encourage industry representatives to complete the voluntary survey. According to a DSS official, these two changes increased the response rate of the survey from historical lows of between 10 and 15 percent of surveyed facilities in previous years to 70 percent of facilities responding in 2007, representing 86 percent of industry personnel with a clearance in fiscal year 2007. Third, during fiscal year 2007, DSS began performing weekly updates to the analysis of future investigation needs, rather than relying on the previous method of performing a one-time annual analysis. Fourth, DSS has changed its analysis procedures by including variables (e.g., company size) not previously accounted for and is using a statistical method that substitutes values for missing survey data. In addition, DOD’s Personnel Security Research Center is assessing a statistical model for estimating future investigation needs in order to determine if a model can supplement or replace the current survey method. DOD’s approach to modifying its personnel security clearance program is shifting from a DOD-specific emphasis to one that focuses on governmentwide efforts. Consequently, DOD does not have a comprehensive plan to address department-specific clearance backlogs, delays, and program funding. The principles of the Government Performance and Results Act of 1993 provide federal agencies with a basis for a results-oriented framework that they can use to construct comprehensive plans that include setting goals, measuring performance, and reporting on the degree to which goals are met. In addition, the Intelligence Reform and Terrorism Prevention Act of 2004 provides DOD with timeliness requirements that would need to be met in any such comprehensive plan addressing clearance backlogs and delays. In our 2004 report on personnel security clearances for industry personnel, we recommended that DOD develop and implement an integrated, comprehensive management plan to eliminate the backlog, reduce the delays in conducting investigations and determining eligibility for security clearances, and overcome the impediments that could allow such problems to recur. At that time, DOD had been reacting to the impediments in a piecemeal fashion, rather than establishing an integrated approach that incorporated objectives and outcome-related goals, set priorities, identified resources, established performance measures, and provided milestones for permanently eliminating the backlog and reducing the delays.
The DSS Director told us that DSS had been drafting a comprehensive plan to improve the security clearance process for industry personnel, but new governmentwide efforts have supplanted the larger-scale initiatives that DSS was planning. However, according to OUSD(I) officials, DOD continues to pursue a limited number of smaller-scale initiatives to address backlogs and delays and to ensure that funding is available for its security clearance processes. For example, to address delays in the processes, DOD is working with OPM to introduce methods of obtaining applicants’ fingerprints electronically and to implement a method that would enable OPM to transfer investigative records to DOD adjudicators electronically. To help ensure that funding is available for its security clearance program, DOD is examining the number of clearances it funds and undertakes for industry personnel who work with 23 other federal agencies and departments. The DSS Director indicated that DOD is considering the cost it incurs for providing clearance-related services and the feasibility of shifting the funding responsibility back to the federal agencies and departments that request the clearances through DOD. High-level attention has been focused on improving personnel security clearance processes governmentwide. Since June 2005, OMB’s Deputy Director of Management has been responsible for a leadership role in improving the governmentwide processes. During that time, OMB has overseen, among other things, the issuance of reciprocity standards, the growth of OPM’s investigative workforce, and greater use of OPM’s automated clearance-application system. An August 9, 2007, memorandum from the Deputy Secretary of Defense indicates that DOD’s clearance program is drawing attention at the highest levels of the department. Streamlining security clearance processes is one of the 25 DOD transformation priorities identified in the memorandum. Another indication of high-level governmentwide involvement in addressing problems in clearance processes is the formation of an interagency security clearance process reform team in June 2007. The team’s memorandum of agreement indicates that it seeks to develop, in phases, a reformed DOD and intelligence community security clearance process that allows the granting of high-assurance security clearances in the least time possible and at the lowest reasonable cost. The team’s July 25, 2007, terms of reference indicate that the team plans to deliver “a transformed, modernized, fair, and reciprocal security clearance process that is universally applicable” to DOD, the intelligence community, and other U.S. government agencies, no later than December 31, 2008. In our November 2007 discussions with DOD officials, the OUSD(I) Director of Security clarified that the government expects to have demonstrated the feasibility of components of the new system by December 2008, but the actual system would not be operational for some additional unspecified period. While DOD’s initial report on security clearances addressed all of the issues specified in the mandate, the omission of certain important information on the same issues currently limits Congress’s ability to carry out its oversight and appropriations functions pertaining to industry personnel security clearances. For example, inclusion of only one future year of budgeting information limits the report’s usefulness for strategic appropriations and oversight purposes. 
Without more information on DOD’s longer-term funding needs for industry personnel security clearances, Congress lacks the visibility it needs to fully assess appropriations requirements. Elsewhere, DOD provides such longer-term funding projections as a tool for looking beyond immediate budget priorities. Specifically, DOD annually submits to Congress the FYDP, which contains budget projections for the current budget year and at least the 4 succeeding years. Similarly, congressional oversight is hampered by the absence of information specific to industry personnel on timeliness measures for the average number of days it takes to perform portions of the clearance process—such as the adjudication phase—for pending and completed cases. Without these additional statistics, there is limited transparency for monitoring the progress that DOD and OPM are making annually in streamlining investigative and adjudicative tasks. Finally, DOD’s report did not include any metrics on quality, even though we have previously recommended—in multiple reports and testimonies—that DOD and other parts of the government develop and report such measures for their clearance processes. Problems with the quality of investigations and adjudications can lead to negative consequences—such as the reluctance of agencies to accept clearances issued by other agencies—and can thereby increase waste in the form of unnecessary additional workload for the entire clearance community. Inclusion of these three types of data in the future annual reports appears feasible, based on statements in DOD’s initial report that acknowledged the availability or ongoing development of each type of data. To improve the quality of the information that DOD provides in future reports to Congress for monitoring the security clearance process for industry personnel, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Intelligence to augment the information contained in the department’s initial mandated report by taking the following three actions: Add projected funding information for additional out years so that Congress can use that input in making strategic appropriation and authorization decisions about the clearance program for industry personnel. In addition to the mandated information on average delays for pending cases, provide timeliness data for the additional phases within the clearance process, to allow for greater transparency regarding which processes are working well and which need improvement. Develop measures of quality in the clearance process and include them in future reports, to explicitly show how DOD is balancing quality and timeliness requirements in its personnel security clearance program. In written comments on a draft of this report, OUSD(I) concurred with all three of our recommendations. OUSD(I) noted that DOD agrees that the recommended additional information will aid Congress in its oversight role and that its future annual reports—starting in 2009—will include the suggested information. Regarding our funding recommendation, OUSD(I) noted its plans for addressing out-year funding in the future and discussed the difficulty in capturing infrastructure costs such as those needed to sustain the current adjudication system and build a new information technology system. With regard to our recommendation on quality, DOD noted that the Personnel Security Research Center is leading the effort to further define measures, develop collection methodology, and suggest collection methods.
DOD’s comments are included in their entirety in appendix II of this report. We are sending copies of this report to interested congressional committees; the Secretary of Defense; the Director of the Office of Management and Budget; and the Director of the Office of Personnel Management. We will also make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3604 or farrellb@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. The scope and methodology of this report follow from the questions it answers. This report answers the following questions: (1) To what extent does the Department of Defense’s (DOD) August 2007 report to Congress address the five issues specified in the mandate? (2) What were the number and cost of each type of clearance investigation and adjudication for industry personnel performed in fiscal years 2000 through 2006? (3) To what extent has DOD developed procedures to estimate the number of investigations to be conducted, plans to reduce delays and backlogs in the clearance program, and plans to provide funding? In 2006, the John Warner National Defense Authorization Act for Fiscal Year 2007 mandated that (1) DOD report annually on the future requirements of its industry personnel security investigations program and (2) we evaluate DOD’s first report in response to this mandate and provide additional information on eight issues. For our review of the DOD report, our scope was largely limited to the information contained in that report. The report included information on initial and renewal top secret, secret, and confidential clearances for industry personnel and information about program funding, the size of the backlog, the average time to complete investigations, and changes to the process. For the additional information on the number and cost—including information on surcharges that DOD paid to the Office of Personnel Management (OPM)—of each type of industry clearance work performed in DOD’s personnel security clearance program, we limited our scope to DSS- and OPM-conducted investigations and DOD adjudications of initial and renewal top secret, secret, and confidential clearances for industry personnel completed in fiscal years 2000 through 2006. For the additional information on planning and investigation requirements-estimation procedures, our scope included DOD and governmentwide plans and ongoing efforts as well as DOD-specific procedures for estimating the numbers of future initial and renewal top secret, secret, and confidential clearances that will be needed for industry personnel. To determine the extent to which DOD’s report addressed each of the five issues specified in the mandate, we reviewed various documents, including laws and executive orders, DOD security clearance policies, OPM security clearance policies, and DOD and governmentwide data quality policies and regulations. These sources provided the criteria used for assessing the DOD report on personnel security clearances for industry. The sources also provided insights into possible causes and effects related to our findings about whether the DOD report addressed each of the issues specified in the mandate.
We also reviewed clearance-related reports issued by organizations such as GAO, DOD’s Office of Inspector General (DODIG), and DOD’s Personnel Security Research Center. We interviewed and obtained and evaluated documentary evidence from headquarters policy and program officials from various offices (see the column for question 1 in table 5) in DOD, OPM, and the National Archives and Records Administration (NARA). We compared the findings in the DOD report to the mandated requirements and governmentwide and DOD-wide data quality standards. We also interviewed and discussed our observations of the DOD report with officials from various DOD offices. To determine the number and cost of each type of clearance investigation and adjudication for industry personnel performed in fiscal years 2000 through 2006, we obtained and analyzed data from the Defense Security Service (DSS), the Office of the Under Secretary of Defense (Comptroller) (OUSD(C)), the Defense Industrial Security Clearance Office (DISCO), and the Defense Office of Hearings and Appeals (DOHA). Before determining the numbers and types of investigations and clearances, we assessed the reliability of the data by (1) interviewing knowledgeable officials about the data and the systems that produced them; (2) reviewing relevant documentation; and (3) comparing multiple sources (e.g., DSS vs. OUSD(C) records) for consistency of information and examining patterns in the data (e.g., the percentage of all adjudications in a given fiscal year that were for top secret clearances). Our analyses showed that the numbers and costs of investigations and adjudications completed in fiscal years 2000 through 2004 were not sufficiently reliable for the purposes of this report, as we have previously discussed. In contrast, we found the data for fiscal years 2005 and 2006 to be sufficiently reliable for our purposes but explicitly noted limitations with those data. The data for these 2 more recent years came from different databases than those used to capture the earlier 5 years. Our methodology to determine the numbers and costs of investigations and adjudications for fiscal years 2005 and 2006 included the following: Numbers of investigations: We obtained and analyzed data from OPM’s Personnel Investigations Processing System that DSS provided to us. Numbers of adjudications: We obtained and analyzed data from the Joint Personnel Adjudications System. Costs of investigations: We obtained and analyzed investigation rate data in Financial Investigative Notices published by OPM. While we found limitations associated with these types of data for fiscal years 2005 and 2006, we found that the information was sufficiently reliable for the purposes of this report. Surcharge for investigations: We obtained and analyzed documentary and testimonial evidence from DSS and OUSD(C) officials. Costs of adjudications: We obtained and analyzed unit cost information that DSS officials produced for this report to show the cost of DISCO-provided adjudications and discussed the limitations of these data in the report. Although DOHA reported a unit cost for adjudications for fiscal year 2006, we did not report that statistic because our assessment revealed that it was not sufficiently reliable for the purposes of this report. Finally, we interviewed headquarters policy and program officials from various offices (see the column for question 2 in table 5) in DOD, OPM, and NARA to obtain their perspectives on our observations of these data.
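As one concrete illustration of the pattern checks just described, the share of adjudications that were for top secret clearances can be computed for each fiscal year and compared across years. A minimal sketch follows; the counts are hypothetical, and only the method is illustrated.

```python
# Illustrative pattern check of the kind described above: compute the
# percentage of all adjudications in each fiscal year that were for top
# secret clearances, and flag years that deviate sharply from the rest.
# The counts are hypothetical; only the method is illustrated.

adjudications = {  # fiscal year -> (top secret, secret/confidential)
    2005: (28_000, 85_408),
    2006: (36_000, 108_608),
}

shares = {fy: ts / (ts + sc) for fy, (ts, sc) in adjudications.items()}
for fy, share in shares.items():
    print(f"FY{fy}: top secret share = {share:.1%}")

# A year whose share differs markedly from the others would prompt a
# closer look at the underlying records before treating them as reliable.
mean_share = sum(shares.values()) / len(shares)
flagged = [fy for fy, s in shares.items() if abs(s - mean_share) > 0.10]
print("flagged years:", flagged or "none")
```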
To determine the extent to which DOD has developed procedures to estimate the number of future investigations needed for industry personnel and the extent to which DOD has plans to reduce delays and backlogs and provide funding, we took the following actions. We reviewed relevant laws, regulations, and DOD security clearance policies. These sources provided the criteria that we used in our evaluations. We also reviewed relevant clearance-related reports issued by organizations such as GAO, DODIG, and DOD’s Personnel Security Research Center. We interviewed headquarters policy and program officials from the organizations shown in table 5 (see the column for question 3). Our methodology to determine the extent to which DOD has developed procedures to estimate the number of future investigations needed for industry personnel included three steps: (1) we obtained and analyzed documents describing DOD’s procedures for estimating the number of industry investigations, (2) we reviewed DSS’s Internet-based survey of contractors who perform classified work for the government and discussed our observations of this survey with the DSS Director and DSS officials responsible for this survey, and (3) we reviewed documents obtained from DOD officials describing ongoing research on potential changes to the methods DOD uses to make these estimates. Finally, our methodology to determine the extent to which DOD has plans to reduce delays and backlogs and provide funding included reviewing documents obtained in interviews with officials at the Office of the Under Secretary of Defense for Intelligence and DSS. In particular, we reviewed and analyzed the Memorandum of Agreement between the Director of National Intelligence and the Under Secretary of Defense (Intelligence) concerning the clearance process reengineering team. We also reviewed an August 2007 memorandum from the Deputy Secretary of Defense listing the top 25 transformation priorities for DOD, one of which is streamlining the security clearance process. We conducted this performance audit from May 2007 through February 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Brenda S. Farrell, (202) 512-3604 or farrellb@gao.gov. In addition to the contact named above, Jack E. Edwards, Assistant Director; Joanne Landesman; James P. Klein; Ron La Due Lake; Thomas C. Murphy; Beverly C. Schladt; and Karen Thornton made key contributions to this report. Defense Business Transformation: A Full-time Chief Management Officer with a Term Appointment Is Needed at DOD to Maintain Continuity of Effort and Achieve Sustainable Success. GAO-08-132T. Washington, D.C.: October 16, 2007. DOD Personnel Clearances: Delays and Inadequate Documentation Found for Industry Personnel. GAO-07-842T. Washington, D.C.: May 17, 2007. High-Risk Series: An Update. GAO-07-310. Washington, D.C.: January 2007. DOD Personnel Clearances: Additional OMB Actions Are Needed to Improve the Security Clearance Process. GAO-06-1070. Washington, D.C.: September 28, 2006. DOD Personnel Clearances: Questions and Answers for the Record Following the Second in a Series of Hearings on Fixing the Security Clearance Process. GAO-06-693R.
DOD Personnel Clearances: New Concerns Slow Processing of Clearances for Industry Personnel. GAO-06-748T. Washington, D.C.: May 17, 2006.

DOD Personnel Clearances: Funding Challenges and Other Impediments Slow Clearances for Industry Personnel. GAO-06-747T. Washington, D.C.: May 17, 2006.

Questions for the Record Related to DOD's Personnel Security Clearance Program and the Government Plan for Improving the Clearance Process. GAO-06-323R. Washington, D.C.: January 17, 2006.

DOD Personnel Clearances: Government Plan Addresses Some Long-standing Problems with DOD's Program, But Concerns Remain. GAO-06-233T. Washington, D.C.: November 9, 2005.

Defense Management: Better Review Needed of Program Protection Issues Associated with Manufacturing Presidential Helicopters. GAO-06-71SU. Washington, D.C.: November 4, 2005.

Questions for the Record Related to DOD's Personnel Security Clearance Program. GAO-05-988R. Washington, D.C.: August 19, 2005.

Industrial Security: DOD Cannot Ensure Its Oversight of Contractors under Foreign Influence Is Sufficient. GAO-05-681. Washington, D.C.: July 15, 2005.

DOD Personnel Clearances: Some Progress Has Been Made but Hurdles Remain to Overcome the Challenges That Led to GAO's High-Risk Designation. GAO-05-842T. Washington, D.C.: June 28, 2005.

DOD's High-Risk Areas: Successful Business Transformation Requires Sound Strategic Planning and Sustained Leadership. GAO-05-520T. Washington, D.C.: April 13, 2005.

High-Risk Series: An Update. GAO-05-207. Washington, D.C.: January 2005.

Intelligence Reform: Human Capital Considerations Critical to 9/11 Commission's Proposed Reforms. GAO-04-1084T. Washington, D.C.: September 14, 2004.

DOD Personnel Clearances: Additional Steps Can Be Taken to Reduce Backlogs and Delays in Determining Security Clearance Eligibility for Industry Personnel. GAO-04-632. Washington, D.C.: May 26, 2004.

DOD Personnel Clearances: Preliminary Observations Related to Backlogs and Delays in Determining Security Clearance Eligibility for Industry Personnel. GAO-04-202T. Washington, D.C.: May 6, 2004.

Industrial Security: DOD Cannot Provide Adequate Assurances That Its Oversight Ensures the Protection of Classified Information. GAO-04-332. Washington, D.C.: March 3, 2004.

DOD Personnel Clearances: DOD Needs to Overcome Impediments to Eliminating Backlog and Determining Its Size. GAO-04-344. Washington, D.C.: February 9, 2004.
The Department of Defense (DOD) industry personnel security clearance program has long-standing delays and backlogs in completing clearance requests and difficulties in accurately projecting its future needs for investigations to be conducted by the Office of Personnel Management (OPM). In 2006, Congress mandated that DOD report annually on the future requirements of the program and DOD's efforts to improve it, and that GAO evaluate DOD's first report. Specifically, GAO was required to report on (1) the extent to which the report responds to the issues in the mandate, (2) the number and cost of clearance investigations and adjudications in fiscal years 2000-2006, and (3) the extent to which DOD has developed procedures to estimate future needs, plans to reduce delays and backlogs, and plans to provide funding for the program. To accomplish these objectives, GAO obtained and reviewed laws, executive orders, policies, reports, and other documents related to the security clearance process and interviewed officials from a range of government offices concerned with the clearance process.

Although DOD's first annual report responded to the issues specified in the mandate, it did not include certain important information that was available on funding, processing times, and quality. DOD's report limited the funding requirements information for its industry security clearance program to 2007 and 2008, even though the department asserted before Congress in May 2007 that it would need tens of millions of dollars in the future to maintain the infrastructure supporting the program and to cover operating costs. While DOD reported the average total time for DOD industry clearances and the average time to complete all clearance investigations, it did not include information on the time to complete any of the other phases (e.g., adjudication). GAO's September 2006 report suggested that longer delays are found in some phases of the process than in others and that quantifying those delays would be useful. The DOD report was largely silent on measures of quality in the clearance process, which is crucial if agencies are to accept the validity of clearances from other agencies. By not including these types of information, DOD limited the information available to Congress as it oversees the effectiveness of DOD's industry personnel security clearance program.

GAO was unable to report the number and unit cost of investigations and adjudications for fiscal years 2000 through 2004 because data were either unavailable or insufficiently reliable. However, DOD reported that OPM conducted 81,495 and 138,769 investigations of industry personnel in fiscal years 2005 and 2006, respectively, and DOD granted clearance eligibility to 113,408 and 144,608 industry personnel in fiscal years 2005 and 2006, respectively. In estimating unit costs, DOD and OPM did not account for all factors affecting the cost of a clearance--factors that would have made the DOD-provided estimates higher. These factors included (1) the cost of special interviews that are sometimes necessary to resolve discrepancies in information and (2) the fact that top secret clearance adjudications normally take about twice as long as those for secret/confidential clearances.

DOD's procedures and plans are evolving, including procedures for projecting the number of future investigations it will need and plans to reduce backlogs and delays, as well as steps to fund the industry clearance program.
In ongoing efforts to address the continued inaccuracy of its projections of future clearance needs, DOD has taken several steps. For example, DOD made its voluntary annual survey of contractors performing classified government work accessible through the Internet in 2006 and began encouraging industry staff to complete it. The response rate increased to 86 percent of industry personnel in 2007. Further, while DOD does not have its own plan to address the funding of its clearance program and its delays in processing clearances, it is currently participating in a governmentwide effort to make clearance processes more efficient and cost-effective. Streamlining and improving the efficiency of its clearance process is also one of DOD's top transformation priorities. In its 2004 report, GAO recommended that DOD implement a comprehensive plan and improve its estimates of future investigation needs.
VHA provides health care services to various veteran populations—including an aging veteran population and a growing number of younger veterans returning from military operations in Afghanistan and Iraq. VHA's 152 VAMCs offer outpatient, residential, and inpatient services, ranging from primary care to complex specialty care, such as cardiac surgery and spinal cord injury care. In providing these health care services to veterans, clinicians at VAMCs use a variety of medical supplies and equipment, which must be appropriately managed and standardized in order to help ensure safe and cost-effective care. The functions carried out by VHA's logistics program include the management of medical supplies and equipment in VAMCs' inventories and the standardization of these items. At the VA headquarters level, several offices are involved in the logistics program. VA's Office of Acquisition, Logistics, and Construction develops policies related to department-wide inventory management and standardization of medical supplies and equipment. VHA's Procurement and Logistics Office develops requirements, based on VA's policies, some of which are applicable to networks and some of which are applicable to VAMCs. VA's Management Quality Assurance Service assesses each VAMC for compliance with logistics requirements every 7 to 8 years using a standardized checklist, which is updated annually. Each of the 21 networks is responsible for complying with applicable VHA requirements and ensuring compliance with VHA's requirements by the VAMCs within its network. In turn, the logistics department at each of the 152 VAMCs is responsible for inventory management and standardization, and must comply with applicable VHA requirements. In past reports, GAO and VHA identified major deficiencies related to VHA's management of medical supply and equipment inventories and the standardization of such items. These deficiencies included the following: Limitations with inventory management systems. A 2011 GAO report and VHA internal reports from 2008 and 2011 identified that the two inventory management systems that VHA currently requires VAMCs to use to track the type and quantity of medical supplies and equipment at their facilities rely on antiquated technology and therefore have limited functionality. Specifically, the Generic Inventory Package (GIP)—used to track medical supplies, such as needles and scalpel blades—and the Automated Engineering Management System/Medical Equipment Reporting System (AEMS/MERS)—used to track medical equipment, such as endoscopes—cannot provide VHA or VAMCs with system-wide data on the types and quantities of medical supplies and equipment in use at VAMCs. This occurs because each VAMC maintains its own inventory management systems, and data have not been entered consistently across VAMCs. Without system-wide information, VHA's ability to identify VAMCs' noncompliance with certain inventory management requirements is limited, which in turn may pose risks to veterans' safety and limit VHA's cost effectiveness. In addition, because of the antiquated technology on which GIP and AEMS/MERS are based, these systems can only be updated to a limited extent, meaning that VHA is largely unable to expand the capabilities of these systems.
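To illustrate the system-wide visibility problem described above, the brief Python sketch below rolls up hypothetical per-facility inventory extracts; the facility names, item descriptions, and record layout are invented for illustration and are not how GIP or AEMS/MERS actually store data. Because one facility codes the same item differently, the roll-up silently undercounts it, which is the kind of inconsistency that prevents a reliable system-wide view.

```python
# Illustrative only: hypothetical per-facility inventory extracts.
# GIP and AEMS/MERS do not actually expose data in this form.
from collections import Counter

facility_extracts = {
    "VAMC-A": [{"item": "scalpel blade #10", "qty": 500}],
    "VAMC-B": [{"item": "Scalpel Blade, No. 10", "qty": 320}],  # same item, coded differently
    "VAMC-C": [{"item": "scalpel blade #10", "qty": 150}],
}

systemwide = Counter()
for facility, records in facility_extracts.items():
    for record in records:
        systemwide[record["item"]] += record["qty"]

# The inconsistent coding at VAMC-B splits one item into two totals.
for item, qty in systemwide.items():
    print(f"{item}: {qty}")
```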
Gaps in requirements for managing inventories. In 2011, we reported that VHA's requirements for managing medical supply and equipment inventories had gaps concerning the types of medical supplies and equipment VAMCs must track in their inventories, and that, as a result, VAMCs did not track all the items being used in their facilities. In that report, we also found that items clinical department staff had purchased were sometimes not captured in the medical supply and equipment inventories. Because VAMCs did not track all the items being used in their facilities, they had difficulty ensuring that they maintained appropriate quantities of items to stock, resulting in unavailable or expired medical supplies. VAMCs with incomplete inventories may also have been unable to quickly identify and remove medical supplies and equipment that are the subject of a manufacturer or U.S. Food and Drug Administration recall or patient safety alert so that they would not be used when providing care to veterans. Lack of standardization of medical supplies and equipment. While VHA has had efforts in place to standardize medical supplies and equipment, a 2011 VHA internal report found that VHA did not have a systematic process for identifying the medical supplies and equipment that could be standardized. In addition, VHA did not have a systematic process to determine the potential financial and clinical effects of standardizing a particular item throughout VAMCs, and VHA lacked appropriate involvement of clinical staff in the standardization process. Clinical staff must be involved in this process, because—as the users of medical supplies and equipment—they have a unique understanding of the features offered by each item. Greater standardization could allow VHA to better leverage its purchasing power by establishing national contracts and blanket purchase agreements and to increase veterans' safety. For example, if a VAMC standardized certain types of reusable medical equipment (RME), it could reduce the number of different types of RME available for use in the facility. This, in turn, would reduce the number of different reprocessing methods that VAMC staff would need to be familiar with in order to clean, disinfect, and sterilize each piece of RME properly, which might reduce instances of inadequate or improper reprocessing of these items. To address deficiencies in its logistics program, VHA issued new requirements in 2011 mainly in three areas—management of medical supplies and equipment in VAMCs' inventories, the standardization of these items, and the monitoring of VAMCs' logistics programs. These requirements, some of which apply to VAMCs and some of which apply to networks, are designed to improve veterans' safety and the cost-effective use of resources. We found that the five VAMCs we visited and their corresponding networks have partially complied with VHA's new requirements, as of December 2012. None of the VAMCs we visited fully complied with all of VHA's new requirements for managing inventories. These new requirements include three components: (1) having logistics staff manage all medical supplies, (2) establishing and maintaining a list of all medical supplies and RME approved for use at the facility, and (3) entering all stock surgical and dental instruments—a type of RME—into GIP. (Table 1 lists each of the new requirements for managing inventories and each VAMC's compliance with these requirements.)
Specifically, compliance with these requirements was as follows: At two of the five VAMCs, some medical supplies were still being managed by staff from clinical departments instead of logistics staff as required. Staff at these two facilities cited a lack of staffing resources, in light of the additional responsibilities that logistics department staff had to take on, as the reason for not fully meeting the VHA requirement. One of these VAMCs only allows staff from two clinical departments to manage medical supplies and requires these staff to comply with VHA's review, approval, and tracking processes. Since our visit, the other VAMC with partial compliance was able to secure additional staffing resources and expects to achieve compliance with this requirement in the near future. The purchase of medical supplies by clinical department staff may circumvent the required review, approval, and tracking processes and thereby poses risks to veterans' safety and limits the cost-effective use of resources. Although all five VAMCs had established a list of medical supplies and RME approved for use at the facility, these lists were incomplete at each of the five VAMCs. Specifically, at the five VAMCs, based on VHA data, between 1 percent and 14 percent of medical supplies and RME that had been purchased in October 2012 were not captured on the list, although the percentage of items not captured had decreased at three of these VAMCs since September 2012. VAMC and network officials mainly attributed the incomplete lists of approved medical supplies and RME to two factors. First, a lack of training for logistics program staff has led to inaccuracies in entering medical supplies and RME on this list. Second, such items may not be consistently entered on the list when they are managed by clinical department staff rather than logistics staff. VAMCs with an incomplete record of medical supplies and RME in use at the facility may have difficulty determining whether they possess an item targeted by a manufacturer or Food and Drug Administration recall or patient safety alert. Moreover, VHA's Procurement and Logistics Office is in the process of establishing an electronic database that will contain all medical supplies and RME approved for use across VHA, which will eventually allow VHA to determine which items each VAMC has approved for use. Because data are extracted from each VAMC's list of approved medical supplies and RME to populate VHA's database, each VAMC needs to achieve full compliance with this requirement in order for the database to be complete. Officials from four of the five VAMCs said that they had not entered all stock surgical and dental instruments into GIP because they lack the staffing resources necessary to do this. A VAMC official said that, given the high volume of existing surgical and dental instruments that have not been entered into GIP at each VAMC, this requirement would likely take months to complete. VAMC officials stated that in light of limited resources available to comply with this requirement, they would benefit from guidance from VHA, such as how to prioritize the entry of instruments into GIP. At the one VAMC that reported achieving full compliance with this requirement, an official told us that the process of entering these instruments into GIP was cumbersome and resource-intensive and that achieving compliance required extensive collaboration between logistics staff and sterile processing staff within the VAMC.
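The incomplete-list finding above rests on a simple reconciliation: compare the items a VAMC purchased in a month against its approved list and compute the share that is missing. A minimal Python sketch of that comparison follows; the item numbers are hypothetical and are not drawn from any VAMC's actual data.

```python
# Illustrative only: hypothetical item numbers, not actual VAMC records.
approved_items = {"SUP-0001", "SUP-0002", "RME-0107", "RME-0110"}
october_purchases = ["SUP-0001", "SUP-0003", "RME-0107", "RME-0212", "SUP-0001"]

# Reconcile the month's purchases against the approved list.
purchased_unique = set(october_purchases)
not_captured = purchased_unique - approved_items
share_not_captured = len(not_captured) / len(purchased_unique)

print(f"Purchased but not on the approved list: {sorted(not_captured)}")
print(f"Share not captured: {share_not_captured:.0%}")  # analogous to the 1-14 percent range reported
```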
One VAMC we visited and two networks fully complied with VHA's new standardization requirements, and the remaining four VAMCs and three networks partially complied. For VAMCs, these new requirements include: (1) establishing and maintaining a clinical product review committee; (2) reviewing and approving medical supplies and RME that have not previously been used at the VAMC, including emergency purchases of these items; and (3) performing standardization activities—including identifying opportunities for standardizing medical supplies and RME within the facility and ensuring facility compliance with national contracts and blanket purchase agreements for medical supplies and RME. For networks, these new requirements include: (1) establishing and maintaining a network commodity standardization committee and four subcommittees; (2) reviewing the activities of the VAMC clinical product review committees; and (3) performing standardization activities—including identifying opportunities for standardization, facilitating the standardization of medical supplies and RME, and tracking and reporting the benefits resulting from the implementation of standardization initiatives. Extent of VAMCs' compliance. Four of the five VAMCs we visited had not fully complied with all of VHA's new standardization requirements. (Table 2 lists each of the new standardization requirements and each VAMC's compliance with these requirements.) Specifically, compliance with these requirements among the five VAMCs we visited was as follows: Each of the five VAMCs had established a clinical product review committee; however, at one of these VAMCs, the committee was not meeting on a regular basis. An official at this VAMC stated that the committee had only been meeting sporadically because clinical staff—who are required to be represented on the committee—were not available to attend the meetings. In addition, at another VAMC, the committee was not established until recently—9 months after the deadline for establishing it. However, prior to the establishment of the clinical product review committee, some of its required functions, such as the review and approval of items that had not previously been purchased, were being performed by another VAMC committee. Regular clinical product review committee meetings, which include appropriate representation of clinical and nonclinical staff, are key to ensuring that new medical supplies and RME are reviewed and approved prior to their use and to identifying opportunities for standardization. Three of the five VAMCs lacked a documented process for reviewing emergency purchases. Specifically, at one VAMC, emergency purchases were made without any approval, while at the other two VAMCs, the process for reviewing and approving these purchases was not documented; however, officials told us that an informal review is conducted. Without a documented process for review and approval of emergency purchases, the VAMCs may purchase these items without evaluating their cost effectiveness or likely effect on veterans' care. Three VAMCs did not perform all of the required standardization activities. Specifically, one VAMC did not identify opportunities for standardization within its clinical product review committee. Officials at this VAMC told us that this was because the committee had not been meeting regularly because of a lack of clinical department staff availability.
Identifying opportunities for standardization is an important first step to standardizing medical supplies and RME, which may ultimately result in cost savings and greater continuity of care. Furthermore, officials at this VAMC and two others did not have measures in place to fully ensure compliance with national contracts and blanket purchase agreements for medical supplies and RME. Officials at one of these VAMCs were only assessing compliance with newly issued national contracts and blanket purchase agreements. Officials at the other two VAMCs used an electronic tool—made available by VHA—to ascertain whether items they were purchasing were available at lower prices through a national contract or blanket purchase agreement. However, VHA officials told us that this tool was insufficient to assess compliance because it does not provide enough information to determine whether VAMCs are purchasing all of the standardized medical supplies and RME available on these contracts and blanket purchase agreements. At the two VAMCs that were in full compliance with this requirement, the corresponding networks had developed a spreadsheet that enabled the VAMCs to assess whether they were purchasing standardized items on national contracts and blanket purchase agreements by manually reviewing their purchase histories. Both of these networks required the VAMCs to assess their compliance with a certain number of national contracts and blanket purchase agreements each month and report their findings to the network. Assessing compliance with national contracts and blanket purchase agreements is important because VAMCs that are not in compliance may not be taking advantage of cost-effective options for purchasing medical supplies and RME that have been standardized; a simplified version of such a purchase-history check is sketched below. Extent of network compliance. Two of the five networks we visited fully complied with all of VHA's new standardization requirements, and the remaining three only partially complied. (Table 3 lists the requirements for standardization and each network's compliance with the requirements.) Specifically, compliance with these requirements was as follows: While each of the five networks had established a commodity standardization committee and the four required subcommittees, at three of the networks the committee and its subcommittees were not meeting on a regular basis, as required. Officials at these networks stated that the committee had only been meeting sporadically because clinical or logistics program staff from the VAMCs within the networks—who are required to be represented on the committee and its subcommittees—were not available to attend the meetings. Officials at two networks told us that clinical staff serve on this committee as a collateral duty and often lack the time to participate in committee meetings. Similarly, officials at another network told us that VAMC logistics staff were not always available to participate in committee meetings in light of additional responsibilities they had to take on. Regular network commodity standardization committee meetings, which include appropriate representation of clinical and nonclinical staff, are key to identifying and pursuing opportunities for standardization.
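As referenced above, the networks' spreadsheet approach amounts to comparing a purchase history against the items on national contracts and blanket purchase agreements. The Python sketch below illustrates one way such a check could work; the item numbers, categories, and prices are hypothetical and are not taken from VHA's electronic tool or any network's spreadsheet.

```python
# Illustrative only: hypothetical contract catalog and purchase history.
national_contract_items = {
    "catheter": {"item": "CATH-STD-01", "price": 4.10},
    "surgical glove": {"item": "GLOVE-STD-02", "price": 0.18},
}
purchases = [
    {"category": "catheter", "item": "CATH-STD-01", "price": 4.10},
    {"category": "catheter", "item": "CATH-LOCAL-09", "price": 5.75},
    {"category": "surgical glove", "item": "GLOVE-STD-02", "price": 0.18},
]

compliant = 0
for purchase in purchases:
    contract = national_contract_items.get(purchase["category"])
    if contract and purchase["item"] == contract["item"]:
        compliant += 1
    elif contract:
        # Off-contract purchase in a category covered by a national contract.
        print(f"Off-contract: {purchase['item']} at ${purchase['price']:.2f} "
              f"vs. {contract['item']} at ${contract['price']:.2f}")

print(f"Compliance rate: {compliant / len(purchases):.0%}")
```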
One of the five networks did not review the activities of the VAMC clinical product review committees. Network officials told us that they were not receiving committee minutes from the VAMCs but that they would begin requesting these minutes in the future, which would allow the network to review the committees' activities. Review of the VAMCs' clinical product review committees' activities is important to ensure that the committees are functioning effectively and to help identify standardization opportunities, which can lead to financial savings. Officials at one of the two networks that were partially complying with this requirement told us that instead of reviewing the activities of the VAMCs' clinical product review committees, network officials extract data on the VAMCs' purchases to identify items that are frequently used or costly—in an effort to standardize them across the network. Officials at the other network that was partially complying told us that they review the activities of the VAMCs' clinical product review committees only once annually, as part of a network external review of VAMCs' logistics programs. Because these networks did not regularly review the activities of the VAMC committees, they may not have been aware of new supply and RME purchases the VAMCs are considering. If other VAMCs within the networks are considering similar purchases, the networks may be able to consolidate these purchases across multiple VAMCs and thereby achieve financial savings. At one network, the commodity standardization committee had not performed any standardization activities because clinical staff from the VAMCs within the network were not available to attend the committee meetings and identify medical supplies and RME for standardization. At another network that achieved partial compliance, the commodity standardization committee and its subcommittees had not met regularly because VAMC logistics staff were not available to attend committee meetings. Instead, network officials performed some standardization activities outside of the committee. At another network, the committee had just begun the process of reviewing VAMCs' purchase histories to identify opportunities for standardization but had not developed specific plans to standardize medical supplies or RME because of a lack of clinical staff involvement. In contrast, the two networks that fully complied with this requirement were able to identify items for standardization, facilitate the standardization process, and implement the standardized items at the VAMCs within the network. For example, at one of these networks, a medical supply used in certain imaging procedures was standardized across the network, resulting in an estimated cost savings of $1.7 million over 5 years, according to network officials. Four of the five VAMCs we visited and three of the five corresponding networks had fully complied with the new monitoring requirements at the time of our visit. One VAMC and two networks partially complied. For VAMCs, these requirements include conducting an annual facility internal review using VA's Management Quality Assurance Service checklist, developing an action plan for correcting deficiencies identified during the review to submit to the network, and ensuring that all identified deficiencies from the internal review are corrected within 90 days after they were identified. The VAMC that only partially complied with these new monitoring requirements failed to correct identified deficiencies within 90 days after they were identified or to request an extension.
While each of the VAMCs either fully or partially complied, VAMC officials stated that they were unclear about some of the items on the checklist. For example, these officials pointed out that several of the checklist items referred to VHA requirements that appeared to be conflicting, and thus they were unsure how to interpret those items. Officials from VHA's Procurement and Logistics Office told us that they were in the process of issuing guidance to VAMCs and networks on how to interpret each checklist item; however, at the time of our report, this guidance had not yet been issued. VHA requires that an extension be requested for deficiencies that cannot be addressed within 90 days of the date of the network external review. Compliance with VHA's monitoring requirements helps ensure the cost-effective use of resources and patient safety by enabling VAMCs and networks to identify and correct deficiencies in the management of medical supplies and equipment.

In addition to the new VAMC and network requirements, VHA has other efforts underway that—according to officials—will further improve the management and tracking of medical supplies and equipment in VAMC inventories and the standardization of such items across VHA. Specifically, VHA is (1) developing a new inventory management system that will replace VHA's existing systems for managing medical supply and equipment inventories, (2) developing a system for electronically tracking the location of certain medical supplies and equipment in VAMCs, and (3) establishing a program executive office that will provide logistics support and manage the standardization of medical supplies and equipment VHA-wide. However, there are uncertainties related to implementation, funding, and operational issues that may impede these efforts' success, if not appropriately addressed.

In early 2012, VHA began to pilot a new inventory management system, called Service Oriented Architecture Research and Development (SOARD), that relies on commercially available asset management software. VHA is developing SOARD to eventually replace VHA's antiquated inventory management systems—GIP and AEMS/MERS. VHA established a project team that is responsible for planning and implementing the SOARD pilot at selected VAMCs. SOARD project team officials told us that SOARD will provide VHA with enhanced inventory management capabilities that are not available through GIP and AEMS/MERS. For example, these planned capabilities include a link to an electronic database of recall information for medical supplies and equipment and a single Web-based system that contains systemwide data on the types and quantities of medical supplies and equipment in use at VAMCs, which SOARD project team officials told us will enable VHA to search, view, and report aggregate data from each VAMC. VA has made two previous unsuccessful attempts to update the inventory management systems in use at VAMCs. In a previous report, GAO attributed these failures to the lack of a reliable program schedule and cost estimate, as well as concerns about the capabilities of the new inventory management systems, among other factors. (VHA is concurrently developing a new system—called Real Time Location System (RTLS)—for electronically tracking certain medical supplies and equipment; RTLS is discussed later in this report.) Funding for SOARD is also uncertain: because VA has not committed to funding SOARD, the SOARD project team had to identify alternative funding sources. VHA's Office of Emergency Management and Procurement and Logistics Office provided some funding for SOARD for fiscal year 2012.
However, according to officials, of the $16.4 million that the SOARD project team requested for fiscal year 2013, it received only $3.4 million, which has required the team to scale back its efforts and may not allow it to remain on target with its implementation plan. SOARD project team officials told us that it is unlikely that VHA will allocate additional funding for SOARD in fiscal year 2013. Given the budgetary uncertainties facing the federal government, as well as the fact that VA has not committed to funding SOARD, the extent to which funding will be available for SOARD in future years is unclear. Resources needed for implementation. A VAMC official at the initial SOARD pilot site told us that the preparations for the implementation of SOARD, which consist of updating existing inventory databases and training staff to use SOARD, are highly resource and labor intensive, often requiring highly trained and experienced staff to complete. For example, at this VAMC, two engineering staff members spent 3 months updating the inventory databases. The official stated that other facilities with relatively fewer resources and less experience would likely face major challenges completing the necessary steps to implement SOARD. On the basis of feedback from this pilot site, SOARD project team officials have decided to provide additional training as well as greater on-site assistance with preparations for the implementation of SOARD at future pilot sites. However, SOARD project team officials acknowledged that it is a challenge to support several pilot sites with their current resources and nearly impossible to support simultaneous deployment at additional sites. As a result, officials told us that the expansion of SOARD to additional pilot sites is dependent on the SOARD project team being able to hire additional staff members, who will provide support to pilot sites. System interoperability issues. It is currently unclear whether SOARD will be able to provide certain capabilities that are supported by the existing inventory management systems. For example, SOARD currently does not have the capability to interface with VHA's financial management system. If this capability is not established, VAMC staff would have to manually enter information on medical supply and equipment purchases in two separate systems, which would increase their workload. SOARD project team officials told us that they are working on solving this interoperability issue but have not estimated a time frame for doing so. Furthermore, SOARD project team officials told us that the Web-based product on which SOARD relies was designed to be used for equipment management and that—to their knowledge—only a small number of health care entities use this product to manage medical supply inventories. SOARD project team officials told us that they are working with officials at several entities that use the product for this purpose in order to develop this capability for VHA; however, they have no timeline for when they expect to achieve this capability. SOARD project team officials told us that—over the course of the pilot—officials are monitoring the development and implementation of SOARD, including whether each phase of the pilot remains on time and within budget, and making changes to the system based on feedback that they receive from users at the pilot sites and other stakeholders.
However, we found that they had not yet developed formal criteria that rely on data collected from the pilot sites for measuring the overall performance of the pilot, including whether anticipated benefits are being achieved. According to GAO internal control standards, performance measures need to be established and monitored, so that analyses can be made and appropriate actions can be taken. Measuring performance would allow VHA to track the progress it is making toward achieving its anticipated benefits from SOARD implementation and would give officials crucial information on which to base their decisions regarding the necessary modifications and successful implementation of SOARD. Furthermore, VHA's implementation plan for SOARD is ambitious—with nationwide implementation of SOARD's equipment management capabilities expected by September 2015—which may not allow adequate time to address the uncertainties associated with the program, evaluate the performance of the pilot, and address identified concerns. Given the uncertainties that surround the SOARD pilot and the fact that VA has made two previous unsuccessful attempts at updating the inventory management systems in use at VAMCs, it is important that VHA have a realistic implementation plan—vetted thoroughly in VHA—and assess the SOARD pilot using formal evaluation criteria to increase the probability of success. At the same time SOARD is being piloted at several VAMCs, VHA is preparing to roll out a system, called Real Time Location System (RTLS), for physically tracking certain medical supplies and equipment using radio frequency identification and other technologies. Officials expect that physical tracking of medical supplies and equipment will enable VAMCs to reduce expenses associated with lost or stolen supplies and equipment and help improve patient safety by—for example—being able to systematically track RME through reprocessing. According to RTLS program officials, VHA has budgeted up to $550 million through fiscal year 2014 to implement RTLS nationwide through a contractor. However, VHA's nationwide RTLS contract was subjected to a bid protest in 2012, which was upheld and resulted in VHA having to reopen the acquisition. In January 2013, VHA selected a new RTLS contractor, which, according to RTLS program officials, will allow implementation of RTLS VHA-wide to begin in March 2013—6 months later than anticipated. Because RTLS program officials expect VHA to make funding for RTLS implementation available only through fiscal year 2014, VHA is attempting to implement RTLS VHA-wide by that time. According to a VHA official, separately from VHA-wide RTLS implementation, RTLS is currently being rolled out at all of the VAMCs in two networks as demonstration sites. These networks began rolling out RTLS before VHA decided to establish a contract for nationwide RTLS implementation and are using a different vendor than the one that will eventually provide RTLS nationwide to all VAMCs. The demonstration sites are providing VHA with lessons learned for future implementation of RTLS at all VAMCs. To prepare for VHA-wide RTLS implementation, VHA has begun to equip VAMCs with wireless capabilities and, according to VHA officials, is requiring VAMCs to update existing inventory databases to help ensure that RTLS is populated with accurate data on medical supplies and equipment. In addition, 6 months prior to RTLS implementation, each network will dedicate one staff member to RTLS implementation activities.
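The patient-safety benefit mentioned above, systematically tracking RME through reprocessing, can be illustrated with a short sketch. The Python example below checks a hypothetical log of RFID reads against a required sequence of reprocessing stations; the tag IDs, station names, and log format are invented for illustration and do not describe how VHA's RTLS actually works.

```python
# Illustrative only: hypothetical RFID read log for one piece of RME.
# Tag IDs, station names, and the required sequence are invented.
REQUIRED_STATIONS = ["decontamination", "inspection", "sterilization", "storage"]

reads = [  # (tag_id, station), in the order the reads occurred
    ("RME-TAG-4471", "decontamination"),
    ("RME-TAG-4471", "inspection"),
    ("RME-TAG-4471", "storage"),  # no sterilization read was recorded
]

def missing_stations(tag_id, reads, required=REQUIRED_STATIONS):
    """Return required reprocessing stations with no RFID read for this item."""
    seen = {station for tid, station in reads if tid == tag_id}
    return [station for station in required if station not in seen]

missing = missing_stations("RME-TAG-4471", reads)
if missing:
    print(f"Alert: RME-TAG-4471 has no read at: {', '.join(missing)}")
```

An actual system would also need to check the order and timing of reads, but even this simple membership check shows how a missing sterilization read could be flagged before an item returns to use.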
However, as with SOARD, there are uncertainties with respect to interoperability and resources that may prevent RTLS from being fully implemented by the end of fiscal year 2014: System interoperability issues. Once it is operational, RTLS is meant to interface with VHA's current GIP and AEMS/MERS systems as well as with SOARD. RTLS program officials told us that they believe it would likely take 6 to 12 months for VHA's contractor to develop separate interfaces between RTLS and these various systems once the RTLS contract has been finalized. However, a more precise timeline will not be known until VHA's contractor begins to develop these interfaces. Furthermore, some VAMCs are in the process of rolling out separate systems for tracking the location of surgical and dental instruments, which is a function that RTLS will provide. According to RTLS program officials, once RTLS is implemented at the 44 VAMCs that already have a system for instrument tracking, 14 VAMCs will have to replace these systems because they are not compatible with the instrument tracking capability provided by RTLS. In fiscal year 2012, after RTLS program officials discovered that 24 additional VAMCs were planning to purchase their own instrument tracking systems, these officials placed a moratorium on VAMCs acquiring their own instrument tracking systems in order to prevent this incompatibility issue from arising at additional VAMCs. Resources needed for implementation. Currently, not all VAMCs have wireless capabilities—which are necessary before RTLS can be implemented—and at some facilities that have these capabilities, the wireless signals do not cover the entire facility. RTLS program officials told us that some VAMCs are difficult to equip with wireless capabilities because of the age or configuration of the buildings, or both, and that this, among other issues, has resulted in the installation of wireless capabilities taking longer than initially anticipated. Furthermore, funding has not been identified for the installation of wireless capabilities at the VAMCs in several networks; however, RTLS program officials told us that they are working with VA's Office of Information Technology to secure funding for this purpose. RTLS program officials plan to implement RTLS first at VAMCs that have full wireless capabilities, but nationwide RTLS deployment hinges on wireless capabilities being in place at each VAMC. Furthermore, as with SOARD preparation activities for updating existing inventory management systems, data in AEMS/MERS must be updated in preparation for RTLS implementation, so that the data used to populate RTLS are accurate. According to RTLS program officials, these activities are resource intensive and have resulted in some VAMCs requesting that RTLS be installed at their facilities during the later stages of RTLS implementation. RTLS program officials told us that they have made training available to VAMCs to help them conduct the necessary preparations for RTLS. However, officials told us that ultimately these preparation activities need to be completed by each VAMC prior to RTLS installation. In August 2011, VHA established a program executive office, within its Procurement and Logistics Office, for providing logistics support and managing standardization.
This office consists of a logistics operations office and six program-management offices—aligned along functional areas—that are tasked with identifying medical supplies and equipment for standardization and facilitating the process of standardizing them. The logistics operations office is responsible for providing overall logistics program support across VHA, which includes monitoring VAMCs' compliance with national contracts and blanket purchase agreements and other logistics metrics. According to VHA officials, each of the six program-management offices is expected to collaborate with teams of clinicians and other stakeholders, who can provide insight on identifying certain medical supplies and equipment for standardization and assessing the feasibility of standardizing these items; coordinate with VA's Office of Acquisition and Logistics to develop business cases for standardizing these items; and coordinate with VA's contracting offices, which are responsible for establishing nationwide contracts for standardized items. After contracts have been established, the program-management offices are responsible for helping VAMCs implement standardized items. According to VHA officials, the program-management offices will also collaborate with the network commodity standardization committees to avoid duplication of efforts. At the time of our report, the program-management offices had identified 179 items as potential standardization opportunities. VA's contracting offices had awarded national contracts for six of these items and a regional blanket purchase agreement for one item. However, VHA did not provide documentation for some of these items showing that VAMCs had been instructed to implement them at their facilities. We were unable to verify whether VAMCs were using the standardized items, and Procurement and Logistics Office officials told us that they were not systematically assessing VAMCs' compliance. VHA had originally planned to allocate about 140 staff members to the entire program executive office; however, as of February 2013, according to Procurement and Logistics Office officials, only 43 positions had been filled. Officials told us that efforts to hire additional staff are currently on hold because VHA intends to evaluate the effectiveness of the new program executive office before committing additional resources to it. At the time of our report, officials had not yet developed a plan for conducting this evaluation.

To its credit, VHA has developed new requirements to address deficiencies in its logistics program that it expects will help improve patient safety and the cost-effective use of resources. However, because the VAMCs we visited and the associated networks have only partially complied with these requirements, the potential risks to patient safety and the inefficient use of resources remain. VHA's efforts to enhance its logistics program—developing a new inventory management system, known as SOARD; rolling out a new system for electronically tracking certain medical supplies and equipment at VAMCs, known as RTLS; and establishing a program executive office that provides logistics support and manages standardization of medical supplies and equipment VHA-wide—offer VHA potential benefits in terms of patient safety and the cost-effective use of resources. However, these efforts face major concerns and uncertainties regarding their implementation.
Unless appropriately addressed, VHA may not successfully implement these efforts and could face unnecessary cost increases and wasted resources. Specifically, without an evaluation plan that includes specific criteria and appropriate solutions to address the concerns and uncertainties we identified, VHA's SOARD pilot runs the risk of not meeting its objectives and—in the worst case—meeting a similar fate as previous unsuccessful attempts to update VHA's inventory management systems. Furthermore, the implementation plan for RTLS lacks updated timelines to take into account (1) establishing interoperability between RTLS and VHA's inventory management systems and SOARD, (2) installing wireless capabilities at VAMCs, and (3) data cleansing activities. An implementation plan with updated timelines would help ensure that VHA remains on track for implementing RTLS. Lastly, VHA does not currently have a plan for evaluating the success of its new program executive office for providing logistics support and managing standardization, which would help it determine whether this office is meeting its intended goals of improving VHA's logistics program and increasing cost effectiveness.

We recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to take the following actions. To assist VAMCs and networks in complying with VHA's new logistics requirements, and thereby help ensure patient safety and the cost-effective use of resources:

determine appropriate resource levels for VAMC logistics programs and provide training and best practices to VAMCs to help them ensure that logistics staff, rather than clinical department staff, manage all medical supplies; ensure that all items that VAMCs purchase are captured on their lists of approved medical supplies and RME; and enter all stock surgical and dental instruments into the appropriate inventory management system;

reinforce through communication the requirement that VAMCs develop a formal process for reviewing and approving emergency purchases of medical supplies and RME;

develop a systematic method using available VHA data to assist VAMCs in tracking compliance with national contracts and blanket purchase agreements; and

issue guidance to VAMCs and networks regarding interpretation of the Management Quality Assurance Service checklist and reinforce through communication the requirement that VAMCs correct deficiencies within 90 days after they were identified or request an extension and that networks use the entire checklist when conducting their reviews of VAMC logistics programs and complete their review within the required time frame.

To address concerns about VHA's pilot of a new inventory management system, known as SOARD, develop a written plan that outlines how the SOARD pilot will be evaluated before the pilot is expanded to additional VAMCs or preparations are made to implement SOARD nationally. This plan should include formal criteria for evaluating the overall performance of the pilot, which are based on consistent data collected from each pilot site, as well as a strategy for addressing concerns about funding needed for SOARD, staffing resources needed for SOARD implementation at VAMCs, and establishing interoperability between SOARD and legacy systems.
To address concerns about VHA’s implementation of a system for electronically tracking medical supplies and equipment, known as RTLS, develop an updated implementation plan that reflects timelines for establishing interoperability between RTLS and VHA’s inventory management systems and SOARD, installing wireless capabilities at VAMCs once funding is available for this effort, and completing data cleansing activities at VAMCs in preparation for RTLS implementation. To address concerns about VHA’s program executive office for providing logistics support and managing the standardization of medical supplies and equipment, develop a plan for measuring the success of the program executive office. VA provided written comments on a draft of this report, which we have reprinted in appendix I. In its comments, VA generally agreed with our conclusions, concurred with our recommendations, and described the department’s plans and time frames to implement each of our seven recommendations. VA did not provide any technical comments. With respect to its plans for addressing our recommendations, VA described specific actions that VHA, networks, and VAMCs plan to take to improve VAMCs’ and networks’ compliance with VHA’s logistics requirements. VA also stated that it is developing a written plan that outlines how the SOARD pilot will be evaluated before the pilot is expanded and includes a strategy for addressing concerns we identified about the pilot. Moreover, VA stated that it is updating its existing implementation plan for RTLS and developing a plan for measuring the success of its program executive office for providing logistics support and managing standardization of medical supplies and equipment. In its general comments, VA disagreed with our assessment that uncertainty exists about the continued implementation of VHA’s program executive office for providing logistics support and managing standardization of medical supplies and equipment. Specifically, VA stated that there is no uncertainty about the continued implementation of this office. VA stated that this office is being stood up in three hiring phases, the first of which is 86 percent complete, with a planned completion date still months away. However, at the time of our audit work, the implementation plan we received from VHA officials stated that the first hiring phase was scheduled to be completed by September 2012, a milestone target that VHA has exceeded by more than 6 months. Moreover, VHA officials told us that VHA intended to evaluate the effectiveness of the program executive office before deciding whether to proceed to the second and third hiring phases, which further indicates uncertainty as to whether or how VA will proceed with phases 2 and 3, pending the outcome of its evaluation. Therefore, we concluded that uncertainty existed with regard to the continued implementation of this office. VA also stated that VHA currently has a “plan in development” to evaluate the success of its program executive office; however, VA has provided us with neither a copy of this plan for review nor specifics on the nature of the plan it is developing. We are sending copies of this report to appropriate congressional committees and the Secretary of Veterans Affairs. The report is also available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or williamsonr@gao.gov. 
Contact points for our Offices of Congressional Relations and Public Affairs are on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II. In addition to the contact named above, Mary Ann Curran, Assistant Director; Kathryn Black; Elizabeth Conklin; and Michael Zose made key contributions to this report. Elizabeth Morrison assisted in the message and report development, and Sandra George provided legal support.
VHA’s logistics program is responsible for the management of medical supplies and equipment in VAMCs’ inventories and the standardization of such items throughout VHA. Previous reports have pointed to deficiencies in VHA’s logistics program. GAO assessed (1) the extent to which VAMCs and networks have complied with new VHA requirements to remedy known deficiencies in its logistics program and (2) VHA’s progress in enhancing its logistics program. GAO reviewed documents and interviewed officials to identify new requirements affecting VHA’s logistics program. GAO then visited a nongeneralizable sample of five VAMCs and verified the extent to which the VAMCs and corresponding networks, which oversee VAMCs, were complying with VHA’s new requirements. GAO also reviewed documentation of VHA’s plans for funding, implementing, and evaluating efforts it is undertaking to enhance its logistics program, examined the extent to which VHA was on track to execute those plans, and assessed VHA’s efforts against criteria in GAO’s standards for internal control in the federal government. To address deficiencies in its logistics program, the Veterans Health Administration (VHA) issued new requirements in 2011 regarding the management of medical supplies and equipment in Veterans Affairs medical centers’ (VAMC) inventories, the standardization of these items, and the monitoring of VAMCs’ logistics programs. These requirements, some of which apply to VAMCs and some of which apply to networks, are designed to improve veterans’ safety and the cost-effective use of resources. GAO found that the five VAMCs GAO visited and their corresponding networks have partially complied with VHA’s new requirements. Specifically, as of December 2012, none of the VAMCs GAO visited fully complied with all of VHA’s new requirements for managing inventories; one VAMC GAO visited and two networks fully complied with VHA’s new standardization requirements, and the remaining four VAMCs and three networks partially complied; and four of the five VAMCs GAO visited and three of the five corresponding networks fully complied with the new monitoring requirements. Because VAMCs GAO visited and the associated networks have only partially complied with these requirements, potential risks to patient safety and the inefficient use of resources remain. In addition to the new VAMC and network requirements, VHA has other efforts underway that—according to officials—will further improve the management and tracking of medical supplies and equipment in VAMC inventories and the standardization of such items across VHA. However, there are substantive uncertainties relating to implementation, funding, and operational issues that may impede their success, if not appropriately addressed. Specifically: VHA is piloting a new inventory management system that is intended to replace VHA’s existing systems for managing medical supply and equipment inventories. However, VHA has not fully funded the pilot, staffing resources to implement it at VAMCs are limited, and VHA has yet to resolve technical issues to ensure that this new system can interface with legacy systems. Furthermore, VHA has yet to develop criteria and collect corresponding data to evaluate the performance of the pilot. VHA is also implementing a system for electronically tracking the location of certain medical supplies and equipment in VAMCs. However, there are uncertainties with respect to interoperability issues with other inventory management systems and resources to implement the system. 
Lastly, VHA is establishing a program executive office that will provide logistics support and manage the standardization of medical supplies and equipment VHA-wide. However, the office has not been fully staffed and uncertainty exists about its continued implementation, because VHA's efforts to hire additional staff are on hold pending its evaluation of the effectiveness of this office. GAO recommends that VHA take steps to assist VAMCs and networks in complying with VHA’s new logistics requirements and develop plans for implementing and evaluating the performance of its efforts to improve its logistics program, which address the concerns—such as system interoperability issues—GAO identified. VA concurred with GAO’s recommendations and provided an action plan to address them.
Over the last decade, the Army focused most of its decisions to field network improvements on supporting operations in Iraq and Afghanistan—an effort that was both expensive and time consuming. The Army did not synchronize the development and fielding efforts for these network improvements, and the funding and timelines for the associated acquisition activities were rarely, if ever, aligned. The Army's efforts to develop networking capabilities fell far short of their objectives, resulting in what the Army believes was a loosely coordinated set of disparate sensors, applications, and services. The Army fielded capabilities in a piecemeal fashion, and the user in the field was largely responsible for integrating them with existing technology. In other words, the Army had neither an overarching framework outlining its desired tactical networking capabilities nor a strategy for acquiring them. In an effort to establish a requirements framework for acquiring its networking capabilities, the Army, in December 2011, finalized the Network-enabled Mission Command Initial Capabilities Document, a central document that describes the essential network capabilities required by the Army, as well as scores of capability gaps. These capabilities support an Army mission-command capability defined by a network of command posts, aerial and ground platforms, manned and unmanned sensors, and dismounted soldiers linked by an integrated suite of mission command systems. A robust transport layer capable of delivering voice, data, imagery, and video to the tactical edge (i.e., the forward battle lines) connects these systems. The Army also developed a network strategy that changes the way it develops, evaluates, tests, and delivers networked capability to its operating forces, using an approach called capability set management. A capability set is a suite of network components, associated equipment, and software that provides an integrated network capability. Under this approach, the Army plans to buy only what is currently available, feasible, and needed for units preparing to deploy, instead of developing an ultimate capability and buying enough to cover the entire force. Every year, the Army plans to integrate another capability set that reflects changes or advances in technology since the previous set. To support this approach, the Army has implemented the agile capabilities life-cycle process, which uses the identified capability gaps to solicit solutions from industry and government and then evaluate those solutions during the Network Integration Evaluations (NIEs) in consideration for later fielding to combat units. This process is quite different from past Army methods, in which the Army assumed beginning-to-end control of the design, development, test, and procurement of networking systems. Army officials expect the agile process and associated NIEs to provide opportunities for greater industry involvement by allowing vendors to both propose solutions to address capability gaps and showcase their systems in a realistic environment. This allows the Army to identify and evaluate systems without the need for large investments in development programs and without having to enter into procurement contracts. Competition in contracting is a critical tool for achieving the best return on investment for taxpayers and can help save the taxpayer money, improve contractor performance, and promote accountability for results.
Past GAO work has found that the federal government can realize significant cost savings when awarding contracts competitively. DOD also acknowledges specific benefits from competition, such as direct cost savings; improved product and service quality; enhanced solutions and industrial base; fairness and openness; prevention of fraud, waste, and abuse; and increased likelihood of efficiencies and innovation. Federal acquisition regulations require, with limited exceptions, that contracting officers promote and provide for full and open competition in soliciting offers and awarding government contracts. Federal acquisition regulations also require that agencies conduct market research, which DOD recognizes as a key to effective competition. According to the Defense Acquisition University, market research involves collecting and analyzing information about capabilities within the market to satisfy agency needs. It is a continuous process for gathering data on product characteristics, suppliers' capabilities, and the business practices and trends that surround them, plus the analysis of that data to make smart acquisition decisions. Market research can vary across acquisitions, given that the nature of the research depends on such factors as urgency, dollar value, complexity, and past experience. One such market research method allows the government to communicate its needs to industry and identify commercial items that could be used to satisfy those needs. The FAR outlines specific techniques for conducting market research. These include, but are not limited to, consulting with government and industry subject-matter experts, publishing requests for information, conducting interchange meetings with potential offerors, and hosting pre-solicitation conferences for potential offerors, which the Army has called industry days. Congress passed the Weapon Systems Acquisition Reform Act of 2009, which outlines several congressionally directed defense acquisition reforms related to competition. Subsequently, the Under Secretary of Defense for Acquisition, Technology and Logistics issued an update to DOD acquisition policy, which provides direction and guidance on how program management will create and sustain a competitive environment at both the prime and subcontract level throughout a program's life cycle. DOD policy requires programs to outline their market research in their acquisition strategies; an acquisition strategy is the business and technical management framework for planning, directing, contracting for, and managing a program. The Army is incorporating competition in various ways for most of the nine tactical networking acquisition programs we examined. As the Army decreases the amount of in-house system development it is doing for tactical networking equipment, it is using various tools to involve industry in seeking items that the Army does not pay to develop to meet its needs. One such tool is the agile capabilities life-cycle process and the associated semi-annual NIEs, which serve as market research to identify potential solutions to meet capability gaps.
This process relies heavily on industry for success, thus providing opportunities for enhancing competition when procuring new tactical networking capabilities. The Army has also reached out to industry to identify small businesses with the skills, experience, knowledge, and capabilities required to manage, maintain, operate, sustain, and support several tactical networking systems. Two of the Army networking systems are new programs that are building their acquisition approaches around competition and leveraging contracting mechanisms that Army officials believe will enhance competition. Five other networking systems have modified their acquisition approaches in ways that would incorporate greater levels of competition. This includes reaching out to industry to provide potential solutions and competing the procurement of individual components. The two remaining systems have incorporated competition at some point in their past development efforts, but the Army has determined that expanding competition at this stage is not feasible or cost effective. In these cases, the Army continues to engage with industry to identify potential vendors for future components of these systems. Table 1 contains a list of the nine systems in our review and provides a brief description of each. Appendix II contains more detailed information about the nine systems in our review. Eight of the nine systems we reviewed have either completed an acquisition strategy or have one in draft, and all include language that pertains to market research and competition, as required by DOD policy. The Army has not developed an acquisition strategy for SRW-Appliqué because it does not meet the requirements of a formal acquisition program. Rather, the Army has developed a plan to deliver a radio waveform capability to satisfy a directed requirement, has used market research, and is seeking competition for this system, as discussed below. Table 2 presents the status of the acquisition strategy for the nine systems. In two of the cases we examined, the Army is beginning new programs, structuring the acquisition strategies to focus on competition, and procuring directly from industry. One of these systems, the Mid-Tier Networking Vehicular Radio (MNVR), will provide a subset of the functionality the Army intended to get from the Joint Tactical Radio System (JTRS) Ground Mobile Radio (GMR), which was canceled in 2011. The other system, SRW Appliqué, augments the capabilities of existing radios and allows them to communicate with newer, software-defined radios. The MNVR represents a subset of the functionality that was demonstrated in the GMR program. The Army has a directed requirement to procure MNVR to provide secure communications. Accordingly, on September 24, 2013, following full and open competition, the Army awarded an MNVR production contract. The Army used the initial delivery order from this contract to procure a limited number of radio systems to conduct risk reduction and requirements verification. The acquisition strategy contains summary information regarding plans for competition and market research and states that follow-on production contracts are expected to use a full-and-open competition strategy. Two vendors sent SRW Appliqué radios through the Army's Network Integration Evaluation in 2011, which the Army has described as a mechanism for market research.
In March 2012, the Army finalized market research to identify sources for the production and delivery of approximately 5,000 radios to satisfy a directed requirement for a soldier radio waveform capability. The associated indefinite delivery, indefinite quantity (IDIQ) contracts were competitively awarded in April 2014 to four vendors. In addition, in 2013, the Army purchased SRW Appliqué radios off the General Services Administration schedule for demonstration at a Network Integration Evaluation in May 2014 and operational testing at a Network Integration Evaluation in November 2014. The Airborne and Maritime/Fixed Station (AMF) radio, the Rifleman Radio, and the Manpack were all part of the restructured JTRS program, which utilized competition early on to develop software-defined radios that would interoperate with existing radios and increase communications and networking capabilities. Before the Army began restructuring the JTRS program in 2011, it encountered a number of technical challenges that resulted in cost growth and schedule delays. Consequently, the Army is now reaching out to industry for proposed solutions. The Army has also adjusted its approach for the Joint Battle Command-Platform (JBC-P) and Nett Warrior programs and assumed the role of integrator. In this capacity, the Army is purchasing individual components from industry and integrating them together to build the systems. According to the Army, all of the contract orders for both systems over fiscal years 2012 and 2013 were awarded using full and open competition. Initially, Army officials believed there was only one source that could produce a particular AMF variant, but they later used market research to identify a second potential source that was interested in competing for that system. The acquisition strategy for that variant now states that full and open competition and best value procedures will be used when awarding contracts, with an emphasis on modified non-developmental solutions. To continue efforts to identify potential sources to fulfill this requirement, the Army has also posted pre-solicitations for industry comment and hosted interested vendors at industry days to discuss draft Requests for Proposals (RFP) and answer non-competition-sensitive questions about the pending solicitation. Both DOD and Congress have pressed for additional competition for the Rifleman Radio. In the Conference Report accompanying the National Defense Authorization Act for Fiscal Year 2013, the conferees noted strong agreement with the direction provided by certain DOD and Army documents regarding the conduct of full and open competition within the JTRS program, which included the Rifleman Radio. The Army modified the acquisition strategy to reflect the push for added competition, which entailed developing a competition strategy, conducting market research, releasing draft solicitations, and holding industry days. Consequently, the full-rate production decision was delayed from May 2012 to the second quarter of fiscal year 2017. The Army is currently receiving low-rate initial production units. Army planning documents indicate plans for full and open competition and IDIQ contracts for the subsequent full-rate production units. Further, the Army anticipates releasing the RFP by June 2014 and awarding contracts by March 2015. The Army has also posted requests for information to seek industry feedback on documentation required for potential future solicitations, as well as hosted industry days for added market research.
As with the Rifleman Radio, DOD and Congress have encouraged increased competition for the Manpack, which was also part of the JTRS system. The Army is currently receiving low-rate initial production units. However, the full-rate decision has slipped to the fourth quarter of fiscal year 2017. According to the Army, this resulted from congressional direction for full and open competition of the full-rate production units, which involved several of the activities noted above. In addition, delayed approval of the acquisition strategy postponed the RFP, which is expected to occur by September 2014. The Army has also identified potential sources for sustainment of Manpack radios, solicited feedback on documentation required for future solicitations, and hosted industry days. The JBC-P is largely a software development effort that will utilize existing hardware from a predecessor system. However, it will also incorporate new hardware, such as a new tablet computer and a beacon device for situational awareness data. Wherever possible, the Army plans to use existing, competitively awarded Army contracts to procure new hardware. The Army contracted with the Software Engineering Institute to assess the capabilities of several software development organizations, ultimately selecting the Software Engineering Directorate at Redstone Arsenal to develop JBC-P software. According to Army officials, any necessary software that is beyond the directorate's capabilities will be competed to industry. Army officials told us they also have plans to compete the blue-force tracker 2 component, which is the Army's latest system for tracking the location of friendly forces. During fiscal years 2012 and 2013, the Army made 15 contract awards, 13 of which were awarded using full and open competition; the remaining 2 were awarded using full and open competition after exclusion of sources. In those cases, vendors were automatically excluded from competition because they failed to meet certain criteria, such as security classification. These contract awards covered a variety of items, including miscellaneous communications equipment, cables, system configuration services, and other information technology equipment and services. The Army also conducted market research to identify a second source for armored and Stryker brigade combat team installation kits, which, according to Army officials, saved $900,000 per brigade. The Nett Warrior acquisition strategy describes the program's planned approach for engaging in market research, including Internet-based searches, manufacturer site visits, and requests for information via the Federal Business Opportunities website—federal agencies' primary tool for soliciting potential offers. Major Nett Warrior components include the Rifleman Radio and the end user device, which is a smartphone-like device. According to documents provided by the Army, the program has already competed the procurement of end user devices. Army documents show that the Army has made a number of other purchases, such as cases, secure digital memory cards, and styluses, competitively. The Army is acquiring radios from a current vendor for the Nett Warrior program as government-furnished equipment but plans to purchase other hardware and software-related items competitively. The Warfighter Information Network-Tactical (WIN-T) connects soldiers in theater with higher levels of command via line-of-sight and satellite-based communications.
For WIN-T Increments 2 and 3, the Army has determined that competition for the overall system is impractical, although aspects of competition are still being used on the programs. When the WIN-T program was in system development, the Army contracted with two separate companies to develop competing designs for the system. In August 2004, the Army combined the two designs in order to develop a system with attributes from both and proceeded with a single contractor after the two contractors teamed to establish a single architecture for WIN-T that leveraged each contractor's proposed architecture to provide the Army with what it believed was a superior technical solution for WIN-T. An updated acquisition strategy for WIN-T includes language that describes plans for both market research and competition. Pursuant to a DOD acquisition decision memorandum, the Army conducted a business case analysis to identify the option with the least development and procurement cost and the greatest benefit to the Army for the follow-on production of WIN-T Increment 2. Based on the analysis, the Army concluded that a sole-source contract with competition at the subcontract level was most appropriate. The Army's market research led it to conclude that only one contractor, the incumbent, is capable of providing the WIN-T capability. Federal acquisition regulations permit contracting without providing for full and open competition when only one responsible source (or, for DOD, a limited number of responsible sources) will satisfy agency requirements and no other supplies or services will do so. Furthermore, based on market research, government technical experts determined that there are no new technologies being developed that can meet WIN-T Increment 3 requirements. However, the Army is awarding contracts for numerous items that support WIN-T, many of which are awarded competitively. We are not making recommendations in this report. We provided a draft of this report to DOD for review and comment. DOD provided written comments, which are reproduced in appendix III. These comments provided updated and clarifying information on a few of the systems in our review. We incorporated these and other technical comments in the report, as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Defense, the Secretary of the Army, and other interested parties. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact Belva Martin at (202) 512-4841 or martinb@gao.gov. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Our objective was to examine the Army's progress in implementing competitive contracting strategies for its network-related systems, in particular for its radio capabilities. To address this objective, we selected nine of the Army's 25 tactical networking systems as a non-generalizable sample for review. These systems are the Mid-Tier Networking Vehicular Radio, Soldier Radio Waveform—Appliqué, Rifleman Radio, Manpack, Airborne and Maritime/Fixed Station, Joint Battle Command—Platform, Nett Warrior, Warfighter Information Network—Tactical Increment 2, and Warfighter Information Network—Tactical Increment 3. We chose these systems for a variety of reasons.
These systems exist on either the tactical network's transport or applications level, and the Army has indicated they are critical systems for ensuring soldiers are able to move mission-critical information between units. They are also a subset of the systems that constituted over $3.6 billion in Army spending in fiscal year 2014, and several of them may also be included in capability set 14. These nine programs also cover the breadth of operations from the warfighter at the tactical edge to the brigade command post. We reviewed the Federal Acquisition Regulation (FAR) and Department of Defense (DOD) policies to identify documentation and procedures as a guide to assess the Army's use of competitive contracting strategies. We reviewed program acquisition strategies to determine how the programs plan to use market research and competition. While we did not attempt to assess compliance with policy and federal acquisition regulations for any of these programs, we did identify examples in acquisition strategies where the Army has utilized or intends to utilize competition. We reviewed Army market research reports, requests for information, acquisition strategies, contract award information, and briefings to senior Army officials. We reviewed acquisition decision memoranda to identify key programmatic decisions that affect contracting strategies. We reviewed defense acquisition training materials designed to enhance competition in defense programs. We interviewed Army acquisition personnel and discussed both their consideration of competition in contracting strategies and their plans for engaging industry. We independently researched the accessibility of acquisition announcements on federal procurement opportunity websites. We reviewed market research strategies for identifying the contractors with the ability to provide networking solutions and alternative courses of action the Army considered for meeting networking requirements. We conducted this performance audit from August 2013 to May 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions, based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. The Mid-Tier Networking Vehicular Radio (MNVR) will host the Department of Defense's (DOD) Joint Tactical Radio System (JTRS) Wideband Networking and Soldier Radio waveforms to connect unmanned sensors to decision makers on the move, which is expected to significantly reduce decision-making time. MNVR will also provide a mobile Internet-like ad hoc networking capability and will be interoperable with current soldier radios through simultaneous and secure voice, data, and video communications. MNVR will support Battle Command, sensor-to-shooter, sustainment, and survivability applications in a full range of military operations on vehicular platforms. The program includes specific requirements to support the U.S. Army, brigade combat teams, and the warfighter. The MNVR represents a subset of functionality that was successfully demonstrated in the JTRS Ground Mobile Radio (GMR) program.
The Army reported that lessons learned from recent operational experiences have shown that a significant capability gap exists at the company level in brigade combat teams because legacy tactical radios cannot seamlessly pass real-time information throughout the range of military operations. To address this capability gap, the Army initiated its GMR program; however, due to poor performance and rising costs, the Under Secretary of Defense for Acquisition, Technology, and Logistics directed the Army to establish a new program that manages the evaluation, testing, and delivery of non-developmental item products to meet the reduced set of capabilities to be fielded to operational units in fiscal year 2014. This program is the Mid-Tier Networking Vehicular Radio, which will host the JTRS networking waveforms. The MNVR program achieved a Materiel Development Decision in September 2013, and the program awarded an MNVR radio production contract. The initial Increment 1 Delivery Order procures a limited number of MNVR radio systems to begin risk reduction developmental testing and requirements verification testing. The acquisition has two components: (1) the manufacture, testing, and delivery of the radio and associated hardware (B-kit) and (2) the manufacture and testing of the adaptation kit (A-kit) that mounts the B-kit to the intended platforms. Three delivery orders are planned for the B-kit:
Delivery order 1—Procures a limited number of MNVR systems to undergo government verification testing and a limited user test. Provides assets to support the development of vehicle installation kits by platform integrators and to meet various certification requirements.
Delivery order 2—Successful limited user test results will inform a Milestone C decision and support an initial operational test and evaluation.
Delivery order 3—Successful initial operational test and evaluation results will inform a full-rate production decision to procure assets to field systems to Capability Sets 17 and 18.
Acquisition category: Non-Major Defense Acquisition Program ACAT ID, Special Interest
Procurement award: September 2013
Contract type: Hardware—indefinite-delivery, indefinite-quantity (IDIQ) with firm-fixed-price (FFP) orders
Total program: $1,304.5 million
Research and development: $109.1 million
Procurement: $1,195.4 million
Quantity: Estimated procurement of 10,293 radios
Next major program event: MNVR will participate in NIE 14.2 in May 2014 as a risk reduction exercise; a limited user test is scheduled for the first quarter of fiscal year 2015.
In February 2012, the Army revalidated the need for a mid-tier wideband networking radio for brigade combat teams to fill the gap created by the cancellation of the Ground Mobile Radio. The Army conducted a cost/benefit analysis for three options and concluded that a new MNVR procurement was the best solution. The Army posted a pre-solicitation notice and later posted the request for proposals. On August 27, 2012, the Under Secretary of the Army determined that full and open competition would be used to make a single award of a firm-fixed-price IDIQ contract for the MNVR. According to the Army, such a contract instrument would not obligate the Army to procure large quantities of units. Rather, it would allow the Army to purchase only what is needed in the near term through delivery orders. It also provides the Army the opportunity to infuse technology into future delivery orders as it becomes available.
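The three B-kit delivery orders described above form a simple gated sequence, with each order's test results informing the decision that releases the next. The sketch below is an illustrative simplification of that control flow, not the Army's actual decision logic:

```python
def next_delivery_order(limited_user_test_passed: bool,
                        iot_and_e_passed: bool) -> str:
    """Map completed test events to the next planned B-kit delivery order."""
    if not limited_user_test_passed:
        return "Delivery order 1: government verification and limited user test"
    if not iot_and_e_passed:
        return "Delivery order 2: Milestone C decision and IOT&E"
    return "Delivery order 3: full-rate production for Capability Sets 17 and 18"

print(next_delivery_order(False, False))  # order 1 is still in progress
print(next_delivery_order(True, False))   # order 2 follows a successful LUT
print(next_delivery_order(True, True))    # order 3 follows successful IOT&E
```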
On September 24, 2013, following full and open competition, the Army awarded an MNVR production contract to Harris Corporation valued at up to $140 million. The Army included a clause in the contract that makes Harris responsible for all government and contractor costs related to follow-on verification testing, including the costs of any associated retesting required by failures. The contract has a 3- to 5-year ordering period. The Army expects that after 3 to 5 years, advancements in operating system software, power amplifiers, and digital processing, as well as advancements in waveform capabilities, will evolve to the point that defense stakeholders will consider revising the governing requirements document and will coordinate to determine the need for an updated requirement and a new procurement. If stakeholders determine the need for a new procurement, MNVR will pursue industry competition with a full and open solicitation. The Soldier Radio Waveform (SRW) Appliqué Radio System is a single-channel, vehicle-mounted, software-defined radio for use by brigade combat teams. It is essentially a data transmission module that can be mounted into certain vehicle configurations of current Single Channel Ground and Airborne Radio System radios that have no capability to send or receive data. SRW-Appliqué runs the JTRS soldier radio waveform and acts as a conduit for transmitting voice and data between the dismounted soldier, the soldier's unit, and higher headquarters. SRW-Appliqué is intended to interact seamlessly with the Rifleman Radio, which is carried by platoon-, squad-, and team-level soldiers, and which also runs the SRW. The Army plans to integrate the SRW-Appliqué radios with multiple vehicle platforms for fielding flexibility. In May 2011, the Army issued a directed requirement for a soldier radio waveform capability. Two vendors offered systems that the Army ultimately evaluated at a Network Integration Evaluation in October and November 2011. The operational need for SRW-Appliqué was confirmed by the systems' participation at this event.
Acquisition category: NA-Commodity Buy
Procurement contract award: April 2014
Contractor: Contracting team of Exelis, General Dynamics, Harris Corporation, and Thales
Total program: Estimated $800 million to $900 million
Quantity: Estimated procurement of 5,000 radios
Next major program event: Source selection and contract award
Using full and open competition, the Army plans to award multiple multiyear indefinite delivery, indefinite quantity contracts with firm-fixed-price terms for SRW-Appliqué radios. In March 2012, the Army finalized market research results and concluded that only five of the 10 interested vendors were capable of providing the requisite SRW radio system. In June 2012, the Army posted a presolicitation notifying industry of plans to publish a request for proposals, which it did in October 2012. The Army is considering the use of on-ramps and off-ramps in the contract so that new vendors can be added or underperforming vendors can be removed. The government-owned SRW will be made available to contractors for integration onto their existing hardware solutions. AMF products are software-programmable, multi-band, multi-mode mobile networking radios providing simultaneous voice, data, and video communications for Army aviation platforms. The radios will operate in networks supporting the Common Operational Picture, situational awareness, and interoperability of Mission Command Systems.
The AMF program will procure two non-developmental radios to meet user needs. One radio, the Small Airborne Link 16 Terminal (SALT), will possess Link 16 and SRW capability. The second, the Small Airborne Networking Radio (SANR), will provide networking and legacy waveform capability. SALT will provide one channel dedicated to a Link 16 capability and a second software-defined radio channel that can host SRW. SANR will provide a two-channel software-defined radio that provides interoperability with the Army's Mid-tier Networking Waveform capability that will be widely deployed to ground forces. The radio will be installed on AH-64E Apache, UH-60M/L Black Hawk, CH-47F Chinook, and OH-58F Kiowa Warrior helicopters, in addition to the MQ-1C Gray Eagle unmanned aircraft system. The AMF program was restructured in accordance with Milestone Decision Authority direction dated May 2012 and July 2012. The May 2012 decision memorandum directed the closeout of the AMF system development and demonstration contract awarded in March 2008. The July 2012 decision memorandum approved a non-developmental item acquisition approach leveraging previous industry investment in tactical radio technology. AMF will operate networking waveforms and select waveforms that are widely deployed by joint forces today, enable interoperability between different types of platforms, and transport information through the tactical network to joint network member nodes. The Army has determined that the need for interoperable systems, including common waveforms, software applications, and network operations, is critical to the mobile tactical network capability.
Acquisition category: ACAT ID
Acquisition phase: Pre-solicitation for production
System development contract award: March 2008
Total program: $3,582.7 million
Research and development: $1,849.3 million
Procurement: $1,733.4 million
Quantity: 7,720 radios (15,440 channels)
Next major program event: SALT production and SANR low-rate initial production decisions in first quarter 2016
The Army developed two AMF variants—the SALT and the SANR. The SALT is intended as a 2-channel radio capable of running the Link 16 and SRW waveforms. The Army intends to install the 2-channel Link 16- and SRW-capable AMF SALT radios on all AH-64E aircraft starting with Lot 6. The SANR is also a 2-channel radio, capable of utilizing the Single Channel Ground and Airborne Radio System, Soldier Radio, and Wideband Networking waveforms. The radio will be installed on AH-64E Apache Block III, UH-60M/L Black Hawk, CH-47F Chinook, and OH-58F Kiowa Warrior helicopters, in addition to the MQ-1C Gray Eagle unmanned aircraft system. In March 2008, the program office awarded Lockheed Martin Corporation a cost-plus-award-fee contract through full and open competition. The basic contract was to acquire the complete JTRS software-defined radio system and subsystems in accordance with the 2006 JTRS Operational Requirements Document, with the option to add capability packages for additional platform requirements. In September 2011, the Milestone Decision Authority (MDA) directed a program and contract restructure in order to meet the requirements. AMF then conducted market research on potential non-developmental item solutions for Army aviation. Market research showed that less complex non-developmental solutions would be available to meet revised user requirements and identified an additional vendor that was interested in competing.
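As a quick consistency check of the AMF program figures quoted above, the research and development and procurement lines should sum to the total program cost, and 7,720 two-channel radios should account for 15,440 channels. The short calculation below confirms both:

```python
rdte = 1_849.3         # research and development, in millions of dollars
procurement = 1_733.4  # procurement, in millions of dollars
total = 3_582.7        # total program, in millions of dollars

assert abs((rdte + procurement) - total) < 0.05  # the two lines sum to the total
assert 7_720 * 2 == 15_440                       # 2 channels per AMF radio

print(f"R&D + procurement = {rdte + procurement:,.1f} (total: {total:,.1f})")
```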
AMF conducted market research for the SALT radio, advising industry of the pending acquisition and soliciting inquiries from interested parties. AMF received and analyzed market survey data and initially determined that a competitive approach in support of the planned SALT procurement for the PM Apache AH-64E Lot 6 was not feasible. On June 24, 2013, via Federal Business Opportunities, the government announced its intention to solicit on a sole-source basis. In response to the synopsis, another vendor indicated its interest in the acquisition. Based on information presented by the second vendor, the government concluded the SALT contract strategy should be full and open competition. The Army posted a revised solicitation notifying industry that the award would be competed and posted the draft request for proposals on January 17, 2014. On February 3, 2014, under an update to the same solicitation, the Army notified interested vendors that it would be hosting a pre-solicitation conference at Aberdeen Proving Ground, Maryland, in order to address any non-competition-sensitive questions about the draft request for proposals. The Army anticipated issuing the formal request for proposals by March 3, 2014, and awarding contracts by September 2014, but this has been delayed due to the lack of an approved acquisition strategy. The Army plans to award, via full and open competition, a single firm-fixed-price/cost-plus-fixed-fee hybrid contract with a one-year base period and three one-year option periods for delivery of low-rate production and full-rate production quantities of SALT systems, spares, and associated services. The program office estimates that the SALT full-rate production contract will be awarded in fiscal year 2017. According to a September 17, 2013, presolicitation announcement, the Army anticipates awarding two firm-fixed-price/cost-plus-fixed-fee hybrid contracts with a one-year base period and four one-year option periods for SANR. The Army anticipated issuing a request for proposals by December 2, 2013, but the SANR solicitation schedule is being reassessed as a result of the budget estimate submissions. The Army would later down-select to a single contractor to deliver both low-rate initial production units and full-rate production units. The JTRS Handheld, Manpack, and Small Form Fit (HMS) program meets the radio requirements for soldiers and small platforms (such as missiles and ground sensors). JTRS HMS Increment 1 is structured as a single program of record with two phases. Phase 1 developed the Small Form Fit and a Rifleman Radio for use in a sensitive but unclassified environment. Phase 2 is for the 2-channel Manpack and Small Form Fit-B for use in a classified environment. JTRS HMS will provide new networking capability to individual Soldiers, Marines, Sailors, and Airmen and also continue to provide legacy radio interoperability. JTRS HMS provides the warfighter with a software-reprogrammable, networkable multi-mode system capable of simultaneous voice, data, and video communications. The program encompasses specific requirements to support U.S. Army, U.S. Navy, U.S. Marine Corps, U.S. Air Force, and Special Operations Command communication needs.
Acquisition category: ACAT ID, as part of HMS program
System development contract award: July 2004
Contractor: General Dynamics C4 Systems Inc.
The Army continues to receive low-rate units from the June 2011 contract; however, Army officials say that because of the congressional direction for full and open competition, the full-rate production decision has been delayed to allow for changes in the program acquisition strategy. The program is coordinating with DOD officials on a competition strategy prior to solicitation for procurement. The Army plans to award multiple fixed-price contracts to all qualified vendors using a two-step sealed bid process. Further, the Army plans to award initial delivery orders for qualification testing and operational assessment and award full-rate production orders based on operational assessments and best value. The full-rate production review for the Rifleman Radio, which was initially expected to take place in May 2012, is now expected to begin in January 2015. The HMS program evolved from the JTRS program and provides software-programmable digital radios to support tactical communications requirements. The Manpack radio is a two-channel radio with military global positioning system capability that can operate at various transmission frequencies using the SRW, the legacy Single Channel Ground and Airborne Radio System waveform, and current military satellite communications waveforms, allowing soldiers to participate in voice and data communications networks and transmit position location information. Army commanders use Manpack radios to provide networked communications for host vehicles and dismounted soldiers during all aspects of military operations; communicate and create networks to exchange voice, video, and data using legacy waveforms or the SRW; and share voice and data between two different communications networks. As we discussed earlier, the histories of the Manpack and Rifleman radios were parallel until 2011. Low-rate production began in June 2011 for 100 Manpack radios. The full-rate production decision, initially expected in December 2012, is now expected in February 2015. However, in October 2012, the Army received approval for an additional 3,726 low-rate production Manpack radios. The initial production delays caused a delay in initial operational capability from March 2013 to August 2013.
Acquisition category: ACAT ID, as part of HMS program
System development contract award: July 2004
Contractor: General Dynamics C4 Systems Inc.
Low-rate production contract: June 2011
Contract type: Firm fixed price
Total program (fiscal years 2013-2019): $1,900.35 million
Quantity (fiscal years 2013-2019): 12,553
Next major program event: Draft request for proposals in fourth quarter of fiscal year 2014
Following full and open competition, a single cost-plus-award-fee development contract was awarded in July 2004. The Army began low-rate production in 2011, using a firm-fixed-price contract, and continues to receive low-rate units. However, the Army told us that, because of the congressional direction for full and open competition, the full-rate production decision has been delayed to allow for changes in the program acquisition strategy. The program is coordinating with DOD officials on a competition strategy prior to solicitation for procurement. The Army plans to award multiple fixed-price contracts to all qualified vendors using a two-step sealed bid process. Further, the Army plans to award initial delivery orders for qualification testing and operational assessment and award full-rate production orders based on operational assessments and best value.
The Joint Battle Command-Platform (JBC-P) provides joint forces command and control (C2) and situational awareness capability at the platform level and enables mission accomplishment across the entire spectrum of joint military operations. JBC-P serves as the cornerstone for Joint Blue Force Situational Awareness and provides continuous near real-time identification of friendly locations to populate the Joint Common Operating Picture. JBC-P software is designed to run on existing Force XXI Battle Command, Brigade and Below (FBCB2) systems as well as new hardware items, thereby reducing the Army's investment in new hardware. JBC-P is an upgrade from FBCB2 in that it provides enhanced chat room capability, improved mission command applications, a more intuitive graphical interface, enhanced blue force situational awareness, and ruggedization of hardware. Force XXI Battle Command, Brigade and Below began as a program in 1996 with the primary capabilities of situational awareness (e.g., friendly and enemy position data) and command and control messaging (orders, free text, overlays, etc.) for combat platforms and tactical vehicles. In its early years, FBCB2 relied upon line-of-sight radio communications but later adopted satellite communications and became commonly referred to as Blue Force Tracking, or BFT. In 2006, the Army began a software product line called Joint Capabilities Release (JCR) that, according to Army officials, provided increased chat and messaging capabilities. However, these capabilities were still limited. To address these limitations, the Army initiated a follow-on effort called JBC-P, which the Joint Requirements Oversight Council approved in May 2008. JBC-P heavily leverages both the hardware and the product line software of JCR and FBCB2 to introduce its enhancements to operational units. In September 2009, an acquisition decision memorandum approved the system's entry into the engineering and manufacturing development phase. In July 2012, the Program Executive Office Command, Control and Communications-Tactical, as the JBC-P Milestone Decision Authority, approved the program for production. During the October through November 2013 Network Integration Evaluation 14.1, the Army conducted a JBC-P software build 5.1 customer test to demonstrate correction of initial operational test and evaluation deficiencies, which supported a full-rate production decision in December 2013.
Quantity: 25,086
Next major program event: JBC-P will participate in NIE 14.2 for follow-on test and evaluation as well as NIE 15.1 as a system under evaluation (SUE).
According to the Army, the FBCB2 and BFT hardware systems are known as the mounted Family of Computer Systems; DRS Tactical has a 3-year contract with 2 option years to provide needed hardware. The JBC-P software is being developed in-house at the U.S. Army Aviation and Missile Research, Development and Engineering Center's Software Engineering Directorate, rather than by a selected contractor. The program's strategy is to use existing hardware so that JBC-P will largely be a software-intensive effort. The Army also stated that it plans to recompete periodically for items like computers to get new technologies and reduce costs, and it contends that competition has already netted 25 to 30 percent satellite savings and other cost savings for the network operations center. The program has conducted market research to identify potential sources and is engaging industry, both large and small.
The Nett Warrior is an integrated dismounted leader situational awareness system for use during combat operations. According to the Army, the system provides unparalleled situational awareness to the dismounted leader and allows for faster and more accurate decisions in the tactical fight. The Nett Warrior program focuses on the development of the situational awareness system, which can graphically display an individual leader's location on a digital geo-referenced map image on a smart device. Additional soldier and leader locations also can appear on the smart device's digital display. Nett Warrior connects through a secure radio to send and receive information from one Nett Warrior to another, thus connecting the dismounted leader to the network. These radios will also connect the equipped leader to higher-echelon data and information products to assist in decision making and situational understanding. Soldier position location information will appear on the network via interoperability with the Army's JTRS capability. This allows the leader to see, understand, and interact in the method that best suits the user and the particular mission; helps leaders avoid fratricide; and makes soldiers more effective and lethal in the execution of their combat missions. The Under Secretary of Defense for Acquisition, Technology, and Logistics approved the Ground Soldier Ensemble (later renamed Nett Warrior) for entry into the technology development phase in February 2009. The initial low-rate production contract was awarded in 2012, with a follow-on low-rate buy authorized in July 2013. The Nett Warrior is intended to address operational requirements to provide the dismounted leader with improved situational awareness and command and control capabilities. It links the dismounted leader via voice and data communications to soldiers at the tactical edge and to headquarters at platoon and company levels. Three technology development contracts were awarded in April 2009 to General Dynamics C4 Systems of Scottsdale, Arizona; Raytheon Network Centric Systems of McKinney, Texas; and Rockwell Collins of Cedar Rapids, Iowa. The systems developed under these contracts underwent developmental tests and completed a limited user test in late 2010. In August 2011, the Army held a configuration steering board that approved the recommended de-scoping of the system's requirements and set the new technical baseline. Based on the configuration steering board recommendations, the Army further refined the system to provide competitively procured end user devices (commercial-based, smartphone-like devices) connected to the Rifleman Radio. As a result of the steering board changes, the Army did not pursue the originally planned limited competition among the technology development phase contractors for the production effort and instead adopted a commercial approach, allowing the program to proceed directly to low-rate production. Additionally, according to Army officials, the program plans to employ competition at the component level through contract actions with short durations, even one-time buys, to take advantage of technology advancements. For example, the end user devices will likely be purchased using competitively awarded indefinite delivery, indefinite quantity contract delivery orders. This type of contract allows the agency to bring in new contractors without having to go through the process of awarding a new competitive contract.
According to documents provided by the Army, the Nett Warrior program executed 24 competitive contract actions over fiscal years 2012 and 2013 valued at $94.9 million. According to officials, the Army made the majority of these purchases through other agencies' contract vehicles as purchase orders. The purchases included a variety of items, such as smart phones, cases, secure digital memory cards, and styluses. The Nett Warrior program has also used market research to get feedback from industry, utilizing the Federal Business Opportunities site to post notices intended to obtain industry comments on proposed solicitations for subcomponents. In January 2013, the Army posted a notice seeking industry feedback on a draft request for proposals that would seek to procure networking hubs and power supplies. Warfighter Information Network-Tactical (WIN-T) is essentially the soldiers' Internet, providing a satellite-based tactical communications backbone to which other Army networked systems need to connect in order to function. WIN-T employs a combination of terrestrial, airborne, and satellite-based transport options to provide robust, redundant connectivity. It enables battle command on the move, keeping highly mobile and dispersed forces connected to one another and to the Army's global information network. With essential voice, video, and data services, commanders can make decisions faster than ever before and from anywhere on the battlefield. WIN-T will be fielded in three increments, all of which are managed by the same program office. Increment 1 has been fielded, Increment 2 is currently being fielded, and Increment 3 is currently being restructured. We discuss Increment 3 later in this appendix. WIN-T Increment 2 provides commercial and military band satellite communications to division, brigade, battalion, and company levels while also providing on-the-move capability and a mobile infrastructure. It further provides satellite communications and supports limited collaboration and mission planning. Using equipment mounted on combat platforms, WIN-T Increment 2 delivers a mobile capability that reduces reliance on fixed infrastructure and allows leaders to move on the battlefield while retaining situational awareness and mission command capabilities. It enables distribution of information via voice, data, and real-time video from ground-to-ground and ground-to-satellite communications. The Army designed WIN-T as a three-tiered communications architecture (space, terrestrial, and airborne) to serve as the Army's high-speed and high-capacity tactical communications network. WIN-T was restructured following a March 2007 Nunn-McCurdy unit cost breach of the critical threshold and will be fielded in the following three increments:
Increment 1: Networking At-The-Halt enables the exchange of voice, video, data, and imagery throughout the battlefield using a satellite-based network. Increment 1 has been fielded.
Increment 2: Initial Networking On-The-Move provides command and control down to the company level and implements an improved network security architecture. Increment 2 is being fielded.
Increment 3: Develops the network operations software to enable seamless integration of tactical network functions and enhanced waveforms for increased throughput capability. WIN-T Increment 3 is currently being restructured.
Acquisition category: ACAT ID
Program start: June 2007
Low-rate production contract award: March 2010
Contractor: General Dynamics C4 Systems, Inc.
At the initial milestone B in 2003 (the point at which a system enters system development), the program included two competing contractors, Lockheed Martin and General Dynamics C4 Systems. In August 2004, the competing contractors merged into one team with General Dynamics as the lead. After the Nunn-McCurdy breach in 2007 and the program restructure, a sole-source contracting approach was used under the authority of 10 U.S.C. 2304(c)(1), on the basis that the requirements were available from only one or a limited number of responsible sources and no other supplies or services would satisfy agency requirements, and, for follow-on production, that supplies were available only from the original source for continued production. A second follow-on operational test and evaluation is scheduled for the October through November 2014 time frame at NIE 15.1 to support the full-rate production decision review, which is expected to occur in 2015. The government will consider requiring the contractor, in its proposal, to identify items that the government will be able to acquire competitively, in substantial quantities, in the future. WIN-T Increment 3 builds on the capabilities of previous WIN-T increments by developing the network operations software (NetOps) that enables the seamless integration and management of tactical networks, and the enhanced waveforms that increase throughput and improve network capacity and robustness. Until recently, WIN-T Increment 3 had intended to introduce an additional line-of-sight link using an airborne platform. However, an Army configuration steering board meeting held on November 7, 2013, approved the de-scoping of the program to focus on NetOps and completion of the waveform development efforts. As a result, WIN-T Increment 3 is currently being restructured, and upon Defense Acquisition Executive approval, a revised program baseline will be created. Increment 3: Full Networking On-The-Move provides full mobility command and control for all Army field commanders. Network reliability and robustness are enhanced with the addition of enhanced network operations software and enhanced waveforms.
Acquisition category: ACAT ID
Program start: June 2007
Development contract award: July 2007
Contractor: General Dynamics C4 Systems, Inc.
The initial development contract was the result of a competitively awarded contract between two competing contractors—Lockheed Martin and General Dynamics C4 Systems. In August 2004, the competing contractors merged into one team with General Dynamics as the lead. The follow-on development contract will be a sole-source award to the current contractor, General Dynamics C4 Systems. According to Army officials, a sole source is planned because the contract cannot be competitively awarded without unacceptable delays in fulfilling the Army's requirements. For the follow-on development contract, the contractor plans to conduct sub-tier competition to ensure it is getting the best commercial products at fair and reasonable prices. The program anticipates a sole-source award for low-rate production to ensure the success of the manufacturing demonstrations and initial operational tests. Belva M. Martin, (202) 512-4841 or martinb@gao.gov. In addition to the contact named above, LaTonya Miller, Assistant Director; Marie P. Ahearn; William C. Allbritton; Marcus C. Ferguson; William Graveline; James Haynes; Sean C. Seales; Wendy P. Smythe; Robert S. Swierczek; and Paul G. Williams made key contributions to this report.
For nearly 20 years, the Army has had limited success in developing an information network—sensors, software, and radios—to give soldiers the exact information they need, when they need it, in any environment. The Army has declared its tactical network its top modernization priority and estimated that the modernization may cost up to $3 billion per year into the foreseeable future. The Army's current modernization approach is intended to leverage solutions developed by private industry. Given the costs and importance of the network, GAO was asked to examine aspects of the Army's effort to acquire network capabilities. This is the third report in response to the Subcommittee's requests. In this report, GAO examines the Army's progress in implementing competitive strategies for tactical networking systems. From the Army's 25 tactical networking systems, GAO selected a non-generalizable sample of 9 systems that the Army indicated are critical for ensuring soldiers are able to send and receive mission-critical information between units, and that cover the breadth of warfighter operations. GAO reviewed acquisition strategies for evidence that the Army was seeking competition. The Army is incorporating competition in various ways for most of the nine tactical networking acquisition programs GAO examined. To achieve the best return on the government's investment, federal agencies are generally required to award contracts competitively. As the Army has decreased the amount of in-house system development it is doing for tactical networking equipment, it is using various tools to involve private industry to meet its needs. One such tool is the agile capabilities life cycle process, whereby the Army determines the capabilities it needs and gaps in those capabilities, and uses market research and semi-annual evaluations, among other means, to involve industry. According to the Army, this agile process provides opportunities for enhancing competition. The Army acquisition strategy for eight of the nine systems discusses plans for competition and market research. An acquisition strategy is not required for the Soldier Radio Waveform Appliqué system because it is not a formal acquisition program; however, the Army conducted market research and is seeking competition. GAO grouped the nine systems into three categories based on similarities in the competition strategy. Specifically:
In two of the nine systems GAO examined—the Mid-Tier Networking Vehicular Radio and the Soldier Radio Waveform Appliqué—the Army is beginning new programs and structuring the acquisition approaches to competitively procure non-developmental capabilities directly from industry. The Army competitively awarded a procurement contract for its Mid-Tier Networking Vehicular Radio, providing units for risk reduction and requirements verification. In April 2014, the Army competitively awarded contracts to four vendors to buy the Soldier Radio Waveform Appliqué.
Five of the nine systems GAO studied have been under development for many years. Three of those—the Airborne, Maritime, and Fixed Station radio; the Rifleman Radio; and the Manpack Radio—were part of the Joint Tactical Radio System, which was previously competed and which the Army has restructured. The Army had been developing software-defined radios to interoperate with existing radios. The Army is now seeking non-developmental solutions through competition to provide the needed capability.
For the other two systems, the Joint Battle Command—Platform and Nett Warrior, the Army reports that it plans to use full and open competition for individual subcomponents. In both cases, the Army conducted market research to identify vendors or seek feedback on requirements.
The Army deemed competition impractical for the two remaining systems in GAO's review, the Warfighter Information Network-Tactical Increment 2 and Warfighter Information Network-Tactical Increment 3. The Army considered acquisition strategies for more competition in the development and procurement of these systems but determined that only the incumbent contractor could satisfy the requirements without unacceptable delays. Nevertheless, the Army continues using market research to identify interested contractors and has awarded several competitive contracts for subcomponents under these two systems.
GAO is not making recommendations in this report. DOD provided technical comments on a draft of this report, which were incorporated as appropriate.
Medicare claims can be denied on a prepayment basis (i.e., before the claim is paid) or on a postpayment basis (i.e., after the claim is paid and the payment is identified as improper). Many appeals originate from claims denied on a prepayment basis, but the same appeal rights exist for either scenario. To conduct a prepayment claim review, CMS contractors conduct several checks to determine whether a claim received from a provider should be paid. These checks include verifying that the provider is enrolled in Medicare, the beneficiary is eligible to receive Medicare benefits, and the service is covered by Medicare. In limited cases, before paying a claim, contractors review the supporting medical documentation for a claim to ensure the service was medically necessary. As a result of these checks or reviews, CMS’s contractors may deny Medicare payment for the claim. Most prepayment reviews are conducted by MACs, which are responsible for processing and paying FFS claims within 16 geographic jurisdictions. To conduct a postpayment review, contractors generally select claims from among those that have already been processed and paid, request and review documentation from providers to support Medicare coverage of the services identified in those claims, and apply Medicare coverage and coding requirements to determine if the claims were paid properly, reviewing, for example, whether the service was medically necessary or provided in the appropriate setting. The majority of the postpayment reviews are conducted by RAs. The Medicare administrative appeals process allows appellants who are dissatisfied with decisions at one level to appeal to the next level. The entities tasked with resolving appeals are referred to as appeals bodies. The statutory time frames for submitting and issuing appeal decisions can vary by level. (See table 1.) When an appeals body cannot render a decision within the applicable statutory time frame at levels 2 through 4, the appellant has the opportunity to escalate the appeal to the next level of appeal. CMS may also refer certain decisions made at Level 3 to Level 4. Each level of appeal follows similar steps. First, the appellant files an appeal and submits supporting documentation. The appeals body then assigns the appeal to an adjudicator who reviews the appeal, including the relevant Medicare policies and documentation. Adjudicators at all four levels generally conduct what are known as de novo reviews, meaning they conduct an independent evaluation of the claim(s) at issue and are not bound by the prior findings and decisions made by other adjudicators. Next, the appeals body issues the appeal decision and notifies the appellant. If the appellant files an appeal at the next appeal level, the documentation associated with the prior appeal is sent to the next appeal level. Appeals must meet certain requirements in order to be reviewed. For example, the appeal must be filed by an appropriate party, such as by the provider who furnished the service to the beneficiary and submitted a claim to Medicare for that service. In addition, the appeal must be filed within the required time frame. To be reviewed at Level 3, an appeal must meet or exceed a minimum dollar amount, known as the amount in controversy. Under certain circumstances, appellants may combine claims to meet the amount in controversy requirement, which is $150 in calendar year 2016. Some differences exist in the criteria appeals bodies use to make their decisions. 
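Before turning to those differences in decision criteria, the Level 3 amount in controversy rule described above can be illustrated with a short sketch. This is a simplified illustration of the aggregation idea only, assuming a plain sum of claim amounts; it is not CMS's actual adjudication logic, which imposes additional conditions on when claims may be combined:

```python
LEVEL_3_THRESHOLD_2016 = 150.00  # amount in controversy, calendar year 2016

def meets_amount_in_controversy(claim_amounts: list[float],
                                threshold: float = LEVEL_3_THRESHOLD_2016) -> bool:
    """Return True if the (combined) claims meet or exceed the threshold."""
    return sum(claim_amounts) >= threshold

# A single $90 claim falls short of the Level 3 threshold, but an appellant
# permitted to combine claims could qualify with two smaller claims.
print(meets_amount_in_controversy([90.00]))         # False
print(meets_amount_in_controversy([90.00, 75.00]))  # True
```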
While all levels are bound by statutes, regulations, national coverage determinations, and CMS rulings, only Level 1 is subject to local coverage determinations (LCD) and CMS program guidance, such as program memoranda and manual instructions. In comparison, Levels 2 through 4 are required to give substantial deference to LCDs and other CMS program guidance if they are applicable to a particular appeal. However, unlike Level 1, Levels 2 through 4 may exercise discretion to decline to follow LCDs and CMS program guidance when issuing an appeal decision, and must explain in the decision the basis for doing so. Levels 1 and 2 can also accept and consider new evidence submitted by appellants to support their appeals. For example, to pay claims related to certain durable medical equipment, prosthetics, orthotics, and supplies (DMEPOS), CMS requires providers to submit a certificate of medical necessity. If this document was not submitted with the original claim, the provider may submit it as part of the appeal. At Levels 3 and 4, new evidence can generally be accepted only with “good cause.” Unlike Levels 1, 2, and 4, which decide appeals generally by reviewing the documentation upon which the initial denial was based as well as any supporting documentation the appellant submitted with the appeal, Level 3 Administrative Law Judges (ALJ) conduct hearings during which appellants are permitted to explain their positions, present evidence, and submit into the record a written statement of the facts and law material to the issue.

All four appeals bodies may issue appeal decisions that do not address the merits of the case:

Dismiss: The appellant withdraws the request for an appeal or the appeals body determines that the appellant or appeal did not meet certain procedural requirements; for example, the appellant did not file the request within the required time frame.

Remand: An action that can be taken at Level 2, 3, or 4 which vacates a lower level appeal decision, or a portion of the decision, and returns the case, or a portion of the case, to that level for a new decision.

Appeal decisions for Levels 1 through 3 include the following categories for decisions issued on the merits based upon a consideration of the facts of the appeal:

Fully reverse: The appeals body fully reverses a prior decision denying coverage and all of the claim(s) in dispute are paid.

Partially reverse: The appeals body partially reverses a prior decision denying coverage, and those parts of the claim(s) in dispute are paid.

Not reverse: The appeals body upholds a prior decision denying coverage, and payment of the claim(s) in dispute is denied.

Level 4 uses different categories for decisions issued on the merits. In addition to dismissing or remanding a Level 3 decision, Level 4 appeal decisions can affirm, reverse, or modify a Level 3 decision. Additional information about Level 4 appeal categories is discussed in appendix I.

To help manage the Medicare appeals process and track appeal decisions, the appeals bodies use various data systems. (See table 2.) In 2005, CMS implemented MAS, which at the time was intended to support Levels 1 through 4. However, currently, three data systems are used to collect appeals data across the four levels of the Medicare appeals process.

The total number of Medicare appeals filed and the number of appeal decisions that were issued after statutory time frames at Levels 1 through 4 increased from fiscal years 2010 through 2014, with the largest rate of increase at Level 3.
Reversal rates also decreased during this time for most levels of appeals. Between fiscal years 2010 and 2014, the number of filed appeals at all levels of Medicare’s appeals process increased significantly, with the rate of increase varying across levels. For example, during this period, the number of Level 1 appeals, which represented the vast majority of all appeals, increased from 2.6 million to 4.2 million—an increase of 62 percent—which was the slowest rate of increase among the four levels. While Level 3 handled fewer appeals overall, it experienced the largest rate of increase in appeals, from 41,733 to 432,534 appeals—936 percent—during this period. (See table 3.) For most levels, the largest annual growth over the 5-year period occurred between fiscal years 2012 and 2013, and between fiscal years 2013 and 2014 the rate of growth slowed at all levels.

For all appeal levels, appeals of claim denials for Medicare Part A (Part A) services showed the most dramatic increase. Among the four levels, Level 3 experienced both the largest increase in appeals overall, as well as the largest increase in Part A appeals, which increased over 2,000 percent between fiscal years 2010 and 2014. (See fig. 1.) Appeals of denied DMEPOS claims also grew substantially during this time at all levels. For example, DMEPOS-related appeals grew the most at Level 3, increasing over 1,000 percent.

HHS attributed the increases in appeals overall to several factors. For example, HHS fiscal year 2016 budget justification materials noted that CMS’s increased focus in recent years on expanding new program integrity activities to ensure proper payment has resulted in more denied claims and, therefore, more appeals. Specifically, appeals resulting from RA claim denials began entering the appeals process in fiscal year 2011 after Congress enacted legislation that expanded the RA program from a demonstration operating in six states to a permanent national program, which CMS implemented in fiscal year 2009. In expanding nationally, the RA program added a new set of contractors with the specific purpose of reviewing postpayment claims to identify improper payments. In addition to the large volume of postpayment reviews conducted by the RAs, there was also an increase in overall claim denials from fiscal years 2011 to 2014, according to HHS’s June 2015 Process Improvement and Backlog Reduction Plan. The number of overall claim denials during this time period for Part A and B claims increased 12.5 and about 9 percent, respectively. For all levels, we found that appeals related to RA denials were a larger contributor to the increase in Part A appeals compared to Part A appeals not related to RAs. For example, at Level 3, RA-related appeals of Part A services grew from 1 percent (140 appeals) of filed Part A appeals in fiscal year 2010 to 78 percent (216,271 appeals) in fiscal year 2014.

HHS also attributed the increase in appeals to a greater propensity among providers to appeal denied claims. From fiscal year 2010 to fiscal year 2014, the proportion of appeals filed by providers increased at Levels 2 through 4. The proportion of appeals filed by state Medicaid agencies also increased at Levels 2 and 4, while the proportion of appeals filed by beneficiaries at Levels 2 through 4 declined. According to HHS agency officials, a small number of providers and state Medicaid agencies were responsible for a large share of the appeals.
For example, at Level 2, CMS noted that three DMEPOS suppliers filed 12 percent of DMEPOS appeals in calendar year 2012 and 33 percent of Level 2 DMEPOS appeals in calendar year 2014. Similarly, at Level 3, OMHA reported that four DMEPOS providers and one state Medicaid agency filed 51 percent of appeals in the first quarter of fiscal year 2015. In addition, the number of appeals filed by state Medicaid agencies more than doubled at Levels 2 through 4 from fiscal year 2010 to fiscal year 2014. At Level 3, state Medicaid agency appeals increased from 2,617 to 25,195 during that time period. According to HHS’s Process Improvement and Backlog Reduction Plan, appeals filed by state Medicaid agencies that relate to home health care services provided to beneficiaries eligible for both Medicare and Medicaid services have contributed to the growth in Level 3 appeals, and CMS officials told us that four state Medicaid agencies (Connecticut, Massachusetts, New York, and Vermont) generated the majority of these appeals. (For more information on appeals by appellant type, see app. II.) The number of appeal decisions that were issued after statutory time frames generally increased from fiscal years 2010 through 2014. Among the four appeal levels, Levels 1 and 2 had a smaller proportion of decisions exceeding statutory time frames over the period. For example, CMS data show that in fiscal years 2010 and 2011, MACs generally issued less than 10 percent of their Level 1 appeal decisions after the statutory time frame (see table 4). In fiscal year 2012, MACs issued a greater percentage of decisions after statutory time frames and, notably, CMS data show that in the fourth quarter of that year, MACs issued about 68.5 percent of their appeal decisions related to DMEPOS claims after the statutory time frame. CMS officials told us that the delays resulted from two factors: two MACs received a high volume of appeals filed by seven suppliers and one of those MACs also experienced challenges implementing a new tool used to generate correspondence with appellants. In fiscal year 2014, MACs again issued less than 10 percent of Part A and DMEPOS appeals after statutory time frames, though nearly 21 percent of Medicare Part B (Part B) appeal decisions were issued after statutory time frames in one quarter. Like the MACs, the Qualified Independent Contractors (QIC) also generally had a relatively small proportion of Level 2 decisions exceeding the statutory time frame during this time. CMS data show that the QICs began issuing appeal decisions after the statutory time frame in fiscal year 2011, and the percentage of such appeal decisions increased to 44 percent (345,049 appeals) in fiscal year 2013. However, in fiscal 2014, the QICs issued less than 5 percent of their appeal decisions after the statutory time frame. In contrast, the increase in appeal decisions issued after statutory time frames and the proportion of those appeal decisions were greater at Levels 3 and 4. For example, OMHA data show that in fiscal year 2014, ALJs issued 96 percent of their Level 3 appeal decisions after the statutory time frame. Similarly, Departmental Appeals Board (DAB) data show that in fiscal year 2014 the Council issued 91 percent of its Level 4 appeal decisions after the statutory time frame. (See fig. 2.) In fiscal year 2014, Levels 3 and 4 issued decisions within the statutory time frames for a greater percentage of beneficiary-filed appeals than appeals filed by providers or state Medicaid agencies. 
Recognizing that delays in issuing appeal decisions affect this population most acutely, both levels have instituted processes to move beneficiary appeals to the front of their queues. Between the two appeals bodies, Level 3 ALJs took longer to issue decisions. In fiscal year 2014, ALJs issued 93 percent of their Level 3 appeal decisions in 180 days or more—the statutory time frame is generally 90 days—while the Council issued 67 percent of Level 4 appeal decisions in 180 days or more.

According to HHS’s Process Improvement and Backlog Reduction Plan, the increase in late appeal decisions for Levels 3 and 4 from fiscal year 2010 through 2014 resulted from the increase in the number of appeals filed, as well as the relatively flat budgets of OMHA and the Council, which have prevented the hiring of sufficient staff to address the growing workload. For example, as previously noted, the number of filed appeals at Level 3 increased over 900 percent from fiscal year 2010 to fiscal year 2014, while OMHA’s budget during the same period increased from about $71 million to about $82 million (16 percent). (See table 5.) In addition, HHS noted that neither HHS agency receives funds from recoveries made by the RA program, although they review appeals of claims denied by RAs.

The increase in the number of decisions made after statutory time frames at Levels 3 and 4 also increases the amount of interest paid by CMS to providers whose postpayment claim denials are reversed upon appeal, thus increasing Medicare’s costs. Currently, CMS is prohibited by statute from collecting overpayments from providers who file appeals until after a Level 2 decision is made. CMS is also required to pay providers interest on the overpayments it initially collects after the Level 2 decision is made and then returns when the appellant wins appeals at Level 3 or higher. In 2014, the annual interest rate paid by CMS to these providers ranged from 9.625 percent to 10.375 percent. As a result, CMS interest payments have increased. Specifically, CMS officials estimate that from fiscal years 2010 through 2015, the agency paid $17.8 million in interest payments to Part A and B providers that it would not have paid had Level 3 issued appeal decisions within statutory time frames. Moreover, CMS estimates that the agency paid about 75 percent of this interest ($13 million) in fiscal years 2014 and 2015, when delays in issuing decisions have been the longest.

From fiscal years 2010 through 2014, fully favorable reversal rates decreased for Levels 1 through 3, but varied across levels, with appeals reaching Level 3 the most likely to be reversed. (See fig. 3.) For example, in fiscal year 2014, ALJs fully reversed the prior decision in 54 percent of Level 3 appeal decisions issued on the merits. In contrast, Level 1 and Level 2 adjudicators fully reversed prior decisions in 36 and 19 percent, respectively, of appeal decisions issued on the merits in fiscal year 2014. At different times, HHS has attributed the relatively high reversal rates at Level 3, in part, to the opportunity for hearings and presentation of new evidence at Level 3, and ALJs’ exercise of discretion in declining to follow LCDs and CMS program guidance. More specifically, HHS has noted the following:

ALJs conduct hearings, which provide an opportunity for appellants to explain the rationale for the medical treatment.
ALJs may consider new evidence admitted for good cause—for example, documentation required for the claim to be approved that the appellant did not submit for consideration at Levels 1 or 2.

While neither CMS nor OMHA collects data in MAS that would allow us to substantiate to what extent ALJs declining to follow LCDs or CMS program guidance contribute to Level 3 reversals, HHS noted in its Process Improvement and Backlog Reduction Plan that this is a factor, and a 2012 HHS Office of Inspector General (OIG) report reached similar conclusions. Furthermore, OMHA’s most recent quality assurance evaluation, completed in 2013, identified compliance with and understanding of the role of LCDs and other program guidance as a key issue for improvement. According to HHS’s Process Improvement and Backlog Reduction Plan, the qualified decisional independence afforded ALJs may result in a more favorable outcome for appellants at Level 3. Moreover, as anticipated by the federal law governing administrative procedures, qualified decisional independence leaves substantial room for subjectivity in ALJs’ application of policy to the facts of a given case; consequently, two reasonable reviewers can review the same facts and come to two legally defensible conclusions. Similarly, OMHA’s 2013 quality assurance evaluation found that of 60 reviewed cases that were decided after a hearing that involved an LCD or other CMS program guidance, in 30 cases the policy was applied differently than how it was applied at the lower level.

While reversal rates declined across Levels 1 through 3 from fiscal years 2010 through 2014, reversal rates varied by type of service, with Part B appeals having the highest reversal rates. (See fig. 4.) In addition, fully favorable reversal rates at Levels 1 and 3 during this time generally varied depending upon whether the appeal was RA-related. At Level 1, RA-related appeals often had lower fully favorable reversal rates than did non-RA appeals, though differences exist when rates are compared by type of service. In contrast, RA-related appeals at Level 3 generally had higher fully favorable reversal rates than did non-RA appeals, both overall and for each of Part A and Part B services. (For more information on reversal rates for Levels 1 through 3, see app. III.)

Our analysis of Level 4 appeals data shows that from fiscal years 2010 through 2014, the Council affirmed the Level 3 decision in about two-thirds of appeals, and reversed, dismissed, or remanded the remaining one-third of the decisions. Level 4 decisions on appeals filed by providers, beneficiaries, and state Medicaid agencies were more likely to affirm ALJ decisions compared to decisions on appeals referred by CMS, meaning that the Council’s decisions were more likely to uphold lower level decisions to deny Medicare payment for those claims. Specifically, Level 4 decisions affirmed the Level 3 decision in 73 percent of appeals filed by appellants and in only 15 percent of appeals referred by CMS. (For more information on reversal rates for Level 4, see app. III.)

HHS agencies use appeals data to monitor the Medicare appeals process, but do not collect information on the reasons for Level 3 appeal decisions or the amounts of allowed Medicare payments in dispute. Further, we identified several instances of inconsistent data across the three data systems used by HHS to monitor appeals. HHS agencies use data collected in CROWD, MAS, and MODACTS to monitor the Medicare appeals process for Levels 1 through 4.
These data systems collect information such as the date when the appeal was filed, the type of service or claim appealed, and the length of time taken to issue appeal decisions. Among other things, HHS agencies use these data to identify emerging trends, such as increases in appeals among certain service categories and changes in reversal rates; determine the extent to which the agencies or their contractors decide appeals within the statutory time frames; and help HHS estimate resource needs. For example, CMS officials told us that using data collected in MAS, the agency observed that the largest increases in filed DMEPOS appeals were related to oxygen supplies and diabetic glucose testing supplies. As a result, the agency developed a strategy to help reduce the growth in these types of appeals.

CMS and OMHA are also in the process of making changes to these appeals data systems, and according to agency officials, these changes will improve their monitoring activities. Specifically, CMS plans to transition the collection of all Level 1 appeals data from CROWD into MAS, a process that CMS officials expect could take a minimum of 27 months and is dependent on the receipt of additional funding. CROWD currently collects the majority of Level 1 appeals data, which has less specificity than data collected in MAS. For example, CROWD collects only aggregate monthly totals of the number of appeals filed, which does not, for example, enable the tracking of individual Level 1 appeal decisions. Additionally, OMHA is developing the Electronic Case Adjudication and Processing Environment (ECAPE) to help the agency transition from a paper-based business process to a fully electronic one, enabling OMHA officials to automate many aspects of the agency’s appeals processes, such as generating appellant correspondence. ECAPE will exchange Level 3 data with MAS, and MAS will continue to be the data system of record for Level 3 decisions in order to enable the sharing of common appeals data across the first three levels. According to OMHA officials, the new system will also provide the agency with additional data with which to monitor appeals at Level 3. For example, officials told us that ECAPE will allow the tracking of the time it takes to conduct discrete processes in Level 3, such as the time from when an ALJ provides written instructions to an attorney to when the attorney completes the decision letter draft. Additionally, OMHA officials told us that ECAPE will also provide the agency with additional functionalities not present in MAS that could improve the efficiency with which Level 3 appeals are decided, such as the ability to allow appellants to view on a website the documentation included in their appeal file. Officials expect such a website could reduce the amount of redundant documentation from prior appeal levels submitted by appellants that must be reviewed by OMHA staff.

However, MAS does not collect other information contained in ALJs’ appeal decisions issued at Level 3, which is one data source CMS uses to monitor Level 3 appeal decisions. Level 3 decision letters generally document the facts of the case and the rationale for an appeal decision, but MAS does not collect detailed information related to the reasons for the appeal decisions that could be useful to HHS.
For example, MAS does not contain information on whether LCDs or other CMS program guidance were among the issues disputed as part of the appeal, whether the ALJ declined to follow such guidance in issuing the decision, whether the ALJ admitted new evidence, or whether other factors contributed to the Level 3 decision. While some information on the reasons for Level 3 denials is collected by a CMS contractor, this information is not maintained in MAS. Of the three Medicare appeals systems, only MAS collects information on the amount at stake in an appeal. In MAS, the amount is tied to the amount billed by the provider, but this amount can vary substantially from the Medicare allowed amount. According to HHS officials, CMS and OMHA data analyses suggest that, on average, billed amounts are about three times higher than the Medicare allowed amounts, but for some types of service, such as DMEPOS, the billed amount can be as much as eight times higher than the Medicare allowed amount. The Medicare allowed amount is a better approximation of what Medicare will actually pay if the item or service at issue in the appeal was covered. For example, according to CMS data, we found that inpatient hospitals in the United States billed Medicare an average of $6.3 billion for the top 100 diagnoses and procedures in fiscal year 2013, but the Medicare allowed amount for these services averaged $1.4 billion. CMS officials told us that MAS does not track the Medicare allowed amount for prepayment claim denials because the MACs do not compute this amount for those claims. CMS officials also indicated that tracking allowed amounts for all appealed claims at Levels 1 and 2 would be extremely resource intensive and the benefits would be minimal. However, several MACs told us that they compute an estimate of the Medicare allowed amount to determine the Medicare savings associated with their prepayment medical reviews. Additionally, CMS officials told us that MAS currently collects the data that would be used to calculate the Medicare allowed amount, such as procedure codes. The collection of these types of data, specifically reasons for ALJ decisions and the Medicare allowed amount associated with an appeal, could help HHS agencies strengthen their existing monitoring and data collection activities. This would be consistent with the federal standards for internal control that require agencies to conduct ongoing monitoring to assess the quality of performance over time to ensure operational effectiveness, and to run and control agency operations using relevant, reliable, and timely information. If HHS agencies collected information on the key characteristics that contributed to the Level 3 appeal decision in the appeals data systems, they would have information that could help identify appeal trends, which could help identify payment or claim review policies in need of clarification or additional guidance for appeals bodies or appellants. Similarly, by not collecting the Medicare allowed amount for all pending appeals, HHS agencies are lacking information that could be useful in three ways. OMHA officials told us that the agency would like to base the amount in controversy on the Medicare allowed amount, as they believe that doing so could help reduce the number of Level 3 appeals filed in two ways. First, by using the Medicare allowed amount, some appeals might fall below the amount in controversy, and, therefore, would not be appealed. 
Second, appellants could choose to aggregate appeals that individually fall below the amount in controversy, which could also reduce the number of appeals filed. Currently, per regulation, the amount in controversy is computed using the provider billed amount. HHS agencies could use the Medicare allowed amount to calculate reversal rates based upon the potential Medicare dollars payable. Currently, HHS agencies calculate reversal rates based upon the number of appeals or appealed claims. Such a methodology does not account for differences in the dollar value of those appeals. Monthly reports from 2014 prepared for CMS on the Medicare appeals process state that the Level 3 reversal rate is higher when it is calculated based upon the amount in controversy, which indicates that higher-value claims are more likely to be reversed on appeal. Without the Medicare allowed amount or an approximation of it, HHS agencies do not know the amount of money at issue in the Medicare appeals process.

Our review found data inconsistencies across the three appeals data systems and within the appeal levels that use MAS, such as variation in how appeal decisions are recorded at the claim level and how HHS agencies track appeal decisions. These data inconsistencies limit HHS agencies’ ability to monitor emerging trends in appeals using consistent and reliable data. Federal standards for internal control call for agencies to establish and control operations using reliable information.

First, our review found variation in how appeal decisions at the claim level are recorded across CROWD, MAS, and MODACTS. Specifically, MAS has the capability to track appeal decisions by each claim, as well as by each line item in a claim, while CROWD and MODACTS do not. A claim for Medicare payment may identify a single procedure or item, or multiple procedures or items. For example, a claim for a continuous positive airway pressure device, a DMEPOS item, can have multiple line items that represent the device, tubes, filters, and mask included on the claim. Payment for some or all of these line items can be denied and then appealed. We also found variation within MAS in how the Level 1 through 3 appeals bodies record appeal decision data at the claim level. The Level 1 and 2 adjudicators that report appeals data in MAS record an appeal decision for each line item within a claim. MAS then derives a claim-level decision that reflects the totality of decisions made for each of the claim lines included in the appeal. Using the claim-level decision, CMS can calculate a claim-level reversal rate. For example, the fully favorable reversal rate at the claim level for an appeal composed of 10 claims, where 4 are fully reversed, would be 40 percent. In contrast, OMHA officials told us that ALJ teams vary in how they record claim-level decisions in MAS. Specifically, OMHA officials told us that while some ALJ teams record the actual decision for every claim included in the appeal, others record the decision for the appeal overall as the decision for each claim in the appeal. In such a circumstance, a comparable claim-level reversal rate cannot be calculated using the hypothetical example of the 10-claim appeal referenced above because all claims would be coded as partially reversed even though 4 claims were reversed and 6 claims were not reversed. Additionally, claim-level reversal rates cannot be compared across Levels 2 and 3.
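To make the effect of these recording practices concrete, here is a minimal Python sketch of the hypothetical 10-claim appeal above under both recording conventions; the decision labels and data layout are illustrative stand-ins, not the actual MAS schema.

```python
# Minimal sketch: how per-claim recording versus copying the overall appeal
# decision onto every claim changes a claim-level reversal rate.
# Hypothetical decision labels; not the actual MAS schema.
from collections import Counter

# One appeal composed of 10 claims: 4 fully reversed, 6 not reversed.
per_claim = ["fully_reversed"] * 4 + ["not_reversed"] * 6

def fully_favorable_rate(decisions):
    """Share of claims whose recorded decision is 'fully_reversed'."""
    return Counter(decisions)["fully_reversed"] / len(decisions)

# Levels 1 and 2: each claim line carries its actual decision.
print(fully_favorable_rate(per_claim))        # 0.4, i.e., 40 percent

# Some Level 3 teams instead record the overall appeal decision
# ("partially_reversed") as the decision for every claim in the appeal.
copied = ["partially_reversed"] * len(per_claim)
print(fully_favorable_rate(copied))           # 0.0 -- the 4 reversals vanish
```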
These differences in how data are entered into MAS limit HHS’s ability to compare claim-level reversal rates consistently across all appeal levels.

Second, we found inconsistencies in how appeals are tracked by appeal level in the three data systems. Specifically, the three data systems use different categories to track the type of Medicare service at issue in the appeal, such as whether the appeal relates primarily to an inpatient hospital claim or a transportation claim. For example, Levels 1 and 3 cannot identify appeals submitted by hospice providers because these appeals at Level 1 are categorized as “other” and at Level 3 they are combined with home health appeals, even though hospice is tracked as its own category at Levels 2 and 4. Some efforts are being made to track appeals across appeal levels more consistently at Levels 2 and 3 in MAS. For example, according to an OMHA official, the agency plans to begin using the same appeal categories to track appeals at Level 3 that are used at Level 2, but has not determined when it will implement this planned change. There are also differences in how each appeal level assigns the appeal category to each appeal. For example, for Level 2 appeals, MAS assigns the appeal category using an algorithm based principally upon the type of claim filed. In contrast, Level 4 staff manually assign and enter the Level 4 appeal category in MODACTS, generally based upon information provided by the appellant in filing the appeal or from the Level 3 decision, according to Council officials. Such differences in how appeal categories are assigned can contribute to differences in how appeals are classified across appeal levels.

Finally, another inconsistency we identified across the appeals data systems is the tracking of whether appeals are related to claims reviewed by the different Medicare review contractors. This is information that CMS can use to monitor the performance of its medical review contractors by tracking their appeal reversal rates. Although CROWD, MAS, and MODACTS track whether an appeal is RA-related, there are inconsistencies in whether appeals related to other medical review contractors are tracked in these systems. For example, only MAS tracks appeals related to the contractor that investigates fraud, and none of the three systems track whether the appeal was related to an improper payment identified by a MAC or by another of CMS’s review contractors, the Supplemental Medical Review Contractor.

CMS and OMHA officials told us that they agree that greater data consistency across the Medicare appeals data systems and among the appeal levels using MAS would be beneficial for monitoring purposes. CMS officials told us that the agency awarded a contract in September 2015 to evaluate Levels 1 through 4 of the Medicare appeals process and that the evaluation, which is due in spring 2016, could also identify ways in which the appeals data could be improved. The specific objectives of this evaluation are to identify any changes that could streamline the Medicare appeals process, reduce the backlog of appeals, and reduce the number of filed appeals or the number of appeals reaching Levels 3 and 4. CMS officials told us they also expect the evaluation to identify additional appeals data that should be collected to improve the appeals process; however, this activity was not identified as an objective in the evaluation’s statement of work, and therefore, we do not know to what extent the evaluation will focus on the data in the appeals systems.
While conducting such an evaluation is a good first step and may allow HHS to make improvements to the data systems that collect appeal information, it is unclear what findings the evaluator will recommend related to data consistency, as this topic appears to be a small component of the overall evaluation.

HHS agencies have taken several actions to reduce the total number of Medicare appeals filed and the current appeals backlog. However, the Medicare appeals backlog is likely to persist despite actions taken to date, and HHS efforts thus far do not address inefficiencies with the way certain repetitive claims are adjudicated. In order to provide more timely adjudication of appeals of Medicare claim denials, HHS agencies have taken various actions, which can be grouped into three categories:

1. changes to Medicare prepayment and postpayment claims reviews, which may reduce claim denials and, therefore, the number of filed appeals;

2. actions aimed at reducing the number of decisions at lower appeal levels that lead to appeals at Levels 3 and 4; and

3. actions aimed at resolving the current backlog of undecided appeals at Levels 3 and 4.

CMS has made some changes to Medicare prepayment claims reviews, which may reduce the number of claim denials, and as a result, the number of filed appeals. For example, due to concerns about improper payments for certain services, CMS has established four prior authorization models in which providers submit documentation to support a claim for Medicare payment before rendering services, instead of submitting that documentation after the service was provided at the time the claim is submitted for payment. According to CMS officials, this practice allows providers to work with MACs to address potential issues with claims before the services are performed. Since 2012, CMS has implemented three demonstrations that require providers in certain states to obtain prior authorization for power wheelchairs and scooters, repetitive scheduled non-emergent ambulance transports, and non-emergent hyperbaric oxygen therapy. In addition, CMS established a prior authorization process for certain other DMEPOS items on February 29, 2016. In February 2016, a CMS official said that a recent decline in the number of Level 1 and 2 appeals of denied DMEPOS claims is due, in part, to the power mobility devices and non-emergent hyperbaric oxygen therapy prior authorization demonstrations.

CMS also made changes to the inpatient hospital coverage policy and the RA program, which have reduced the number of Part A appeals filed at Levels 1 and 2. For example, on October 1, 2013, CMS implemented a rule intended to clarify the circumstances under which Medicare would cover short stays in inpatient hospitals in an effort to help reduce the number of providers billing inappropriately for inpatient care instead of outpatient services. As a result of these new coverage policies, CMS prohibited the RAs from conducting reviews of short-stay inpatient hospital claims with dates of admission after October 1, 2013. After several extensions of the prohibition by CMS and Congress, it ended in January 2016, at which time CMS allowed the RAs to conduct a limited number of short-stay inpatient admission reviews. The number of appeals filed related to hospital and other inpatient claims at Levels 1 and 2 declined in 2014 and 2015 from a high in 2013.
In addition, in 2015, CMS limited the RA look-back period to 6 months from the date of service for certain patient status reviews instead of 3 years, which reduces the number of claims eligible for RA review and possible denial. RAs are also required to allow for a discussion period, in which providers who receive an improper payment determination can discuss the rationale for the determination and submit additional information that may substantiate payment of their claim prior to the claim adjustment process.

CMS has also taken actions aimed at reducing the number of appeals filed at Levels 3 and 4. In a demonstration that began in January 2016, the QIC responsible for processing DMEPOS appeals will engage in formal discussions with certain providers that are appealing two items—oxygen supplies and diabetic glucose testing supplies—before issuing an appeal decision. CMS officials predict that these discussions will enable the QIC to reverse more claim denials at Level 2, thereby reducing the number of appeals that reach Levels 3 and 4. In future years, CMS plans to expand the demonstration to providers with appeals related to other DMEPOS services. In another change, effective August 2015, CMS instructed MACs and QICs to focus their reviews of appeals of postpayment claim denials on only the reason(s) for the denial at issue in the original appeal, without introducing new reasons that appellants would need to address in further appeals. Prior to this change, MACs and QICs reviewing appeals involving prepayment and postpayment claim denials were able to identify new claim denial reasons. CMS’s policy change will address stakeholder concerns that when MACs and QICs conducted independent reviews of claims, they often found new reasons to deny the claim, and as a result, appellants would have to file an appeal and provide evidence to address the new denial reason(s) at the next level of appeal. In February 2016, a CMS official reported that the agency believes this policy change has already resulted in an increase in the Level 2 reversal rate, which should reduce the number of appeals reaching Levels 3 and 4.

CMS and OMHA have also taken steps to reduce the number of undecided appeals at Level 3 and Level 4. Under the global settlement CMS offered to hospitals from August to October 2014, CMS agreed to pay 68 percent of the inpatient net payable amount on Part A claims denied because the inpatient setting was determined to be medically unnecessary. In exchange, hospitals withdrew their pending appeals and waived their right to file future appeals related to the claims. As of June 1, 2015, CMS paid approximately $1.3 billion to providers through the settlement. We estimate that it reduced the number of undecided appeals by 31 percent at Level 3 and 37 percent at Level 4. (See table 6.)

In addition, OMHA has implemented three pilot programs—the settlement conference facilitation pilot, the statistical sampling pilot, and the senior attorney pilot—which focus on resolving appeals at Level 3 more efficiently. OMHA’s settlement conference facilitation pilot, which began in June 2014, allows eligible appellants to have their appeals at Level 3 settled through an alternative dispute resolution process rather than an ALJ hearing. OMHA offered the pilot to a limited number of providers initially, and, according to OMHA officials, as of January 2016, had settled with 10 appellants involving about 2,400 appeals.
The agency expanded the scope of appeals eligible for participation in the pilot to include appeals of additional Part B claim denials in October 2015 and appeals of certain Part A claim denials in February 2016. OMHA officials told us they are also exploring expansion of the pilot in 2016 to appeals filed by state Medicaid agencies that relate to home health services provided to dually eligible beneficiaries. As noted earlier, appeals from these state Medicaid agencies have increased. We identified approximately 47,000 pending Level 3 appeals as of our June 2015 data extract that are related to this issue, which could take over half of OMHA’s ALJs at least a year to adjudicate through a traditional hearing process. OMHA’s statistical sampling pilot began in July 2014 and aims to reduce the appeal backlog by deciding multiple appeals filed by a single appellant using statistical sampling and extrapolation. Under this pilot, an ALJ reviews and issues decisions on a random sample of the appellant’s eligible denied claims. The ALJ’s decision is then extrapolated to the universe of the appellant’s claims in question. As of August 2015, the pilot’s success has been limited—according to HHS, only one appellant had elected to participate in this process that would resolve its 405 pending appeals, which equates to about 40 percent of the annual workload of one ALJ. OMHA representatives said the office has conducted outreach to encourage more providers to participate in the pilot and plans to increase the number of claims eligible for the pilot, although as of February 2016, OMHA had not announced any specific plans or time frames to do so. According to HHS officials, OMHA’s senior attorney pilot, which began in July 2015, uses senior attorneys to conduct on-the-record reviews of appeals if the appellant waived the right to an oral hearing. Under this pilot, the senior attorney determines whether an on-the-record decision is warranted, and if so, drafts the decision for an ALJ to review and issue. HHS officials reported that as of March 2016, 671 appeals at Level 3 have been resolved through this initiative and that they plan to increase the number of senior attorneys participating in this program. Despite actions HHS agencies have taken, the Medicare appeals backlog will likely persist. While it is too early to predict the ultimate effect many of HHS’s current efforts will have on the Medicare appeals backlog, their effect thus far, with the exception of the global settlement, has been limited and the backlog continues to grow at a rate that outpaces the adjudication capacities at Levels 3 and 4. According to OMHA representatives, in fiscal year 2015, the number of incoming appeals at Level 3 declined to 235,543 from a high of 432,534 in fiscal year 2014. While this was a significant decrease, it was still three times the number of appeals decided in fiscal year 2015. Further, HHS reported that it expects the number of incoming appeals to increase again when new RA contracts are awarded and the RA program resumes full operation. A similar challenge exists at Level 4. The Council reported that it can adjudicate almost 2,680 appeals each year, which includes both its FFS and non-FFS workload; however, the Council’s pending appeals workload as of February 2016 was more than six times that amount and, in fiscal year 2015, it received more than three times the number of appeals it adjudicated in the same year. 
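The arithmetic behind that imbalance is stark. A rough back-of-the-envelope sketch in Python, using the approximate Level 4 figures reported above, shows why receipts that outpace adjudication capacity compound the backlog; the figures are rounded and this is illustrative only, not an HHS workload model.

```python
# Back-of-the-envelope projection of the Level 4 backlog using the rounded
# figures reported above; illustrative only, not an HHS workload model.
capacity = 2_680            # appeals the Council reported it can decide per year
pending = 6 * capacity      # pending workload was more than six times capacity
incoming = 3 * capacity     # receipts ran about three times annual decisions

for year in range(1, 6):
    pending += incoming - capacity   # backlog grows by the excess of receipts
    print(f"year {year}: ~{pending:,} pending appeals")

# Each year adds (incoming - capacity) = 5,360 appeals to the backlog, so at
# these rates the backlog grows indefinitely rather than shrinking.
```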
OMHA and Council representatives said that the fiscal year 2016 appropriations are unlikely to mitigate the growing appeals backlog at Levels 3 and 4. OMHA received a 20 percent increase in funding in its fiscal year 2016 appropriation, which HHS officials said will allow OMHA to hire 15 additional ALJs as well as expand other efforts to improve the appeals process. However, HHS representatives told us that even with this increase, OMHA will not have the adjudication capacity to stem the growing number of appeals at Level 3. The Council did not receive a funding increase in the fiscal year 2016 appropriations, and Council representatives said that at its present funding levels the Council is unlikely to keep pace with any increases in decisional output at Level 3.

In the fiscal year 2017 HHS budget justification materials, several budgetary and legislative changes were requested to improve the Medicare appeals process and reduce the backlog. For example, additional funding for OMHA and the Council was requested to increase their adjudication capacity, as well as additional funding to CMS to increase QIC participation in Level 3 hearings, which the agency expects will reduce the reversal rate at Level 3. Legislative authority was also requested to allow OMHA and the Council to use a portion of the overpayments collected through the RA program to increase their adjudication capacity. (See app. IV for a description of the legislative proposals included in the President’s fiscal year 2017 budget related to the Medicare appeals process.)

HHS’s efforts to reduce the number of filed Medicare appeals and the appeals backlog have not addressed inefficiencies regarding the way appeals of certain repetitive claims for ongoing services are decided, although doing so could lead to fewer appeals. According to representatives from one MAC that reviews DMEPOS appeals, under the current process, once a provider submits an initial claim for a recurring service—such as DMEPOS claims for monthly oxygen equipment rentals—and it is denied, all subsequent claims for the service are also denied, requiring providers to file multiple appeals for the recurring service. A beneficiary’s one-year supply of oxygen, for example, could generate 12 claims, and therefore, 12 denials and possibly 12 appeals. If the appeal for the initial claim is later reversed in favor of the appellant, the appeals of the subsequent claims must continue to go through the appeals process, awaiting separate decisions, because the favorable appeal decision on the initial claim cannot generally be applied to the other appeals of subsequently denied claims. Representatives from some MACs, OMHA, and a provider group we interviewed said that this process is inefficient and suggested approaches to change the way these repetitive claims are adjudicated. In addition, two of the MACs we spoke to had developed their own processes to adjudicate some of these appeals more efficiently. For example, representatives from one of the MACs said that if a decision on an initial repetitive claim is reversed at Level 1, the MAC will apply that decision to related appeals pending within its jurisdiction, as sketched below. Given that these claims are for recurring services that are typically appealed individually, they could contribute substantially to the number of appeals related to DMEPOS. Furthermore, OMHA representatives told us that addressing this issue would achieve major efficiencies for the Medicare appeals process.
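As a rough illustration of that MAC practice, the following Python sketch propagates a favorable decision on the lead appeal in a recurring-claim series to the related pending appeals; the record layout, field names, and series identifier are hypothetical, not an actual MAC system.

```python
# Sketch of propagating a favorable decision on the lead appeal in a series
# of recurring claims (e.g., monthly oxygen rentals) to the related pending
# appeals. Records and field names are hypothetical, not a MAC system.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Appeal:
    appeal_id: str
    series_id: str              # links the monthly claims for one rental
    decision: Optional[str]     # None while the appeal is still pending

appeals = [Appeal("A1", "oxygen-123", "fully_reversed")]            # lead month
appeals += [Appeal(f"A{m}", "oxygen-123", None) for m in range(2, 13)]

def apply_lead_decision(appeals, series_id, decision):
    """Resolve pending appeals in a series once the lead appeal is reversed."""
    resolved = 0
    for appeal in appeals:
        if appeal.series_id == series_id and appeal.decision is None:
            appeal.decision = decision
            resolved += 1
    return resolved

print(apply_lead_decision(appeals, "oxygen-123", "fully_reversed"))  # 11
```

In practice, any such shortcut would still need to verify certain components of each claim, such as proof of service delivery, a point the reopening discussion below returns to.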
Addressing this inefficiency is also consistent with internal controls that call for agencies to establish control activities that are effective and efficient in accomplishing the agency’s stated goals. HHS officials told us that the department could address this issue if granted certain statutory authority described in the HHS fiscal year 2017 budget justification materials. Specifically, HHS requested legislative authority to consolidate appeals into a single administrative appeal. While the authority is requested to allow appeals bodies to consolidate appeals for the purposes of sampling and extrapolation, HHS officials said that they could also use this authority to consolidate appeals of certain repetitive claims and decide them jointly. It is unclear whether HHS will be granted this authority. However, department officials acknowledged HHS currently has the authority to promulgate regulations that could help address this issue through the reopening process, although at the time we discussed our findings with department officials, they told us that they prefer to address this issue through the statutory change requested in the President’s proposed fiscal year 2017 budget. The reopening process could allow appeals bodies discretion to give deference to a decision made at a higher appeal level upon determining that the beneficiary’s condition or other facts and circumstances of the appeal had not changed. For example, an appeals body could apply a decision of a higher appeal level that the appellant met medical necessity requirements, although it would still need to verify certain components of the claim, such as verification of service delivery, in order to prevent fraud and abuse. In doing so, the review of the claim or claims in question could require a less intensive analysis than a de novo review.

Significant growth in the number of appeals at all administrative appeal levels has posed several challenges to the Medicare appeals process. These challenges are particularly pronounced at Levels 3 and 4, which had the largest proportion of decisions issued after the statutory time frames from fiscal year 2010 through fiscal year 2014 and the greatest backlog of pending appeals. This backlog shows no signs of abating as the number of incoming appeals continues to surpass the adjudication capacity at Levels 3 and 4. The current situation whereby Levels 3 and 4 decide a substantial number of appeals after statutory time frames is likely to persist without additional actions.

HHS could take more steps to improve its oversight of the appeals process and its understanding of the characteristics of appeals contributing to the increased volumes and the current appeals backlog. As HHS takes action aimed at reducing the appeals backlog, HHS will need reliable and consistent data to monitor the appeals system, including the effect of any actions taken. Currently, HHS data systems are not collecting additional information that would assist HHS agencies in their monitoring efforts. HHS is awaiting results of an evaluation of the Medicare appeals process that may address data inconsistencies within the three appeals data systems and among levels using MAS. While the evaluation is a good first step to identifying and modifying the data systems, it is unclear how well the evaluation will address these issues because it is not a specific objective of the evaluation.
Without more reliable and consistent information, HHS will continue to lack the ability to identify issues and policies contributing to the appeals backlog, as well as measure the funds tied up in the appeals process. Finally, the manner in which appeals of certain repetitive claims are adjudicated is inefficient, which leads to more appeals in the system than necessary. With the appeals backlog as large as it is at Levels 3 and 4, HHS would benefit from a change in the process that could consolidate these appeals and reduce the number of appeals that require decisions. HHS has requested legislative authority to achieve this. Department officials acknowledged HHS currently has the authority to promulgate regulations that could help address this issue through the reopening process, although at the time we discussed our findings with them, we were told that they prefer to address this issue through the statutory change requested in the President’s proposed fiscal year 2017 budget.

To reduce the number of Medicare appeals and to strengthen oversight of the Medicare FFS appeals process, we recommend that the Secretary of Health and Human Services take the following four actions:

1. Direct CMS, OMHA, or DAB to modify the various Medicare appeals data systems to:

a. collect information on the reasons for appeal decisions at Level 3;

b. capture the amount, or an estimate, of Medicare allowed charges at stake in appeals in MAS and MODACTS; and

c. collect consistent data across systems, including appeal categories and appeal decisions across MAS and MODACTS.

2. Implement a more efficient way to adjudicate certain repetitive claims, such as by permitting appeals bodies to reopen and resolve appeals.

HHS provided written comments on a draft of this report, which are reprinted in appendix V, and provided technical comments, which we incorporated as appropriate. HHS generally agreed with four of the five draft recommendations and outlined a number of initiatives it is taking to improve the efficiency of the Medicare appeals process, reduce the backlog of pending appeals, and mitigate the possibility of future backlogs. HHS also expressed its willingness to modify the appeal data systems in order to collect consistent data across the appeal data systems and to implement a more efficient way to adjudicate certain repetitive claims.

In commenting, HHS provided further information for two of the recommendations with which it generally agreed. Regarding our recommendation to collect information on the reasons for appeal decisions at Level 3, HHS indicated that collecting this information in the planned ECAPE system instead of MAS, as we recommended, would be more cost-effective. We agree with the department’s rationale and modified our recommendation to remove the language specifying that this information be collected in MAS. Regarding our recommendation that HHS capture the amount of Medicare allowed charges, the department indicated in its technical comments that it would not do this for all appeals. Specifically, HHS indicated that it has no plans to collect the Medicare allowed amount for Levels 1 and 2 because doing so would require changes to the claims processing system or require manual pricing of all appeals, which would require additional funding for the MACs.
We believe that there may be less resource intensive options for implementing the recommendation, and we modified the language of the recommendation to clarify that obtaining an estimate of the Medicare allowed amount would be a way to fulfill the recommendation. In contrast, HHS disagreed with a recommendation related to determining the costs and benefits of delaying CMS’s collection of overpayments until after a Level 3 decision is made, stating that such a change would increase the number of appeals filed at Level 3. We agree that this change might increase the number of filed appeals and, therefore, we did not include the recommendation in the final report. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Health and Human Services, the Administrator of the Centers for Medicare & Medicaid Services, the Chief Administrative Law Judge of the Office of Medicare Hearings and Appeals, the Chair of the Departmental Appeals Board, appropriate congressional committees, and other interested parties. In addition, this report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff has any questions about this report, please contact me at (202) 512-7114 or kingk@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. This appendix provides additional details regarding our analysis of (1) trends in Medicare fee-for-service (FFS) appeals for fiscal years 2010 through 2014; (2) differences in claim-level and appeal-level reversal rates; (3) appeals resolved by the Centers for Medicare & Medicaid Services’ (CMS) global settlement; (4) CMS’s estimate of interest paid by the agency to certain providers; and (5) data reliability. To examine trends in appeals for fiscal years 2010 through 2014, we analyzed extracts of three data systems obtained from CMS, the Office of Medicare Hearings and Appeals (OMHA), and the Departmental Appeals Board (DAB). (See table 7.) To determine the number of Medicare FFS appeals filed for each level overall, by the type of appellant, by type of service, by subcategory of service, and by whether the appeal resulted from a claim review conducted by a Recovery Auditor (RA), we took a number of steps that varied by level due to differences in the systems. Level 1. While the Contractor Reporting of Operational and Workload Data (CROWD) system extract contained data on most Level 1 appeals filed during the period of our analysis, the Medicare Appeals System (MAS) extract contained Level 1 appeals data for six of the seven Medicare Administrative Contractor (MAC) jurisdictions that reported their Medicare Part A (Part A) appeals data to MAS in fiscal year 2014. Using the CROWD data, we determined the number of appeals filed by counting the number of requests received less the number of misrouted requests. CMS officials indicated that this approach will likely produce an approximate number of filed appeals for our purposes. However, agency officials also noted that CMS uses the number of requests cleared instead of requests received when representing appeals workload, because the requests received line could overestimate the number of filed appeals. 
For example, it could count duplicate requests or requests for reopenings as opposed to appeals. CMS officials noted that the agency has made changes, effective in January 2016, to improve the quality of the requests received data. Using MAS data for Part A appeals for the remaining six MAC jurisdictions, we also counted appeals filed and excluded misrouted and misfiled appeals. To determine the total number of Level 1 appeals filed, we added counts derived from CROWD and MAS. Our analysis excludes Level 1 appeals decided by Quality Improvement Organizations because MACs are responsible for handling Level 1 appeals of denials related to most claims.

Type of appellant: We did not determine the number of appeals filed by the type of appellant because this information is not captured in the CROWD system.

Type of service: To determine the number of appeals that were related to durable medical equipment, prosthetics, orthotics, and supplies (DMEPOS) items, using CROWD data we categorized all appeals decided by the four MAC jurisdictions that decide DMEPOS appeals as DMEPOS services. We categorized appeals of Medicare Part B (Part B) services and appeals of Part B services whose claims were decided by the Part A MACs (referred to as Part B of A) as Part B services.

Subcategory of service: We report the number of appealed claims decided by subcategory of service because CROWD does not track filed appeals or filed appealed claims by subcategory of service. Using the CROWD data, we determined the number of appealed claims decided using the number of claims cleared. CROWD uses the following subcategories: inpatient hospital, DMEPOS, home health, laboratory, other, outpatient, physician, skilled nursing facility, and ambulance, which we refer to as transportation. Using the MAS data, we determined the number of appealed claims decided by counting the number of claims. Primarily using a crosswalk provided by CMS, we mapped MAS appeal categories to the CROWD subcategories. (See table 8.)

RA-related: We report the number of RA-related appeals decided using the number of RA redeterminations cleared because CROWD does not have this information for filed appeals. In MAS, we considered an appeal as RA-related if the field RA name was not missing.

Level 2. In analyzing MAS data, we excluded combined, deleted, and misrouted appeals, but included reopened appeals. Our analysis also excludes Level 2 appeals decided by Quality Improvement Organizations because Qualified Independent Contractors (QIC) are responsible for handling Level 2 appeals of denials related to most claims.

Type of appellant: To determine the type of appellant that filed the appeal, we used the field “appeal appellant type.”

Type of service: To determine whether the appeal was for Part A, Part B, or DMEPOS, we used two MAS fields—“Medicare type” and the name of the QIC. In general, we categorized appeals of Part B services and appeals of Part B of A services as Part B services using “Medicare type.”

Subcategory of service: To determine the subcategory of service, we used the field “appeal category.” Using appeal category, we mapped Level 3 appeal categories to Level 2 appeal categories generally using a crosswalk provided by OMHA. Using that crosswalk, we grouped services into 10 subcategories. (See table 9.)

RA-related: We considered an appeal as RA-related if the field “RAC flag” was equal to “yes.”

Level 3. In analyzing MAS data, we excluded appeals that had been combined or deleted, but included reopened appeals.
Type of appellant: To determine the type of appellant that filed the appeal, we used the MAS field “requester type;” a field created by OMHA for us that indicates that the MAS appeal record included a beneficiary identification number, thus indicating the appeal was filed by a beneficiary; and a file provided to us by OMHA that identified appeals filed by a state Medicaid agency that could not be identified using the field “requester type.” Type of service: To determine whether the appeal was for a Part A, Part B, or DMEPOS service, we used two MAS fields—“Medicare type” and the name of the QIC. In general, we categorized appeals of Part B services and appeals of Part B of A services as Part B services using “Medicare type.” Subcategory of service and RA-related: We used the same approach described for Level 2 above. Level 4. In analyzing Medicare Operations Division Automated Case Tracking System (MODACTS) data, we excluded appeals resulting from CMS referrals, and appeals in which the record indicated a final action of lost file or tape because Level 4 did not review the appeal. We counted as one appeal any appeals in which the appellant filed one appeal but the Medicare Appeals Council (the Council) issued separate appeal decisions. Type of appellant: To determine the type of appellant that filed the appeal, we used the fields “appellant type” and the name of the appellant, where the field “workload” indicated that CMS had not filed the appeal. Type of service: To determine whether the appeal was for a Part A or Part B service, we used the field “claim type.” We used the field “type of service”—specifically, values of durable medical equipment, orthotic, prosthetic, or surgical dressing—to identify whether the appeal was for a DMEPOS item. We did not take additional steps to categorize appeals of Part B of A services as Part B services because Council officials told us that those services are already categorized as Part B claims. Subcategory of service: To determine the subcategory of service, we used the field “type of service.” We grouped type of service into 10 subcategories. (See table 10.) RA-related: An appeal was RA-related if the field “overpayment” was set to “RAC.” For all appeal levels, we determined the percentage of appeal decisions issued after the statutory time frames. This analysis is based on the fiscal year that the appeal was decided. Thus, appeals in which no appeal decision had been issued from fiscal year 2010 through fiscal year 2014 are excluded from our analyses. Level 1. Our analysis for Level 1 is different from those for Levels 2, 3, and 4. Specifically, the Level 1 analysis presents information on a quarterly basis by type of service (i.e., Part A, Part B, and DMEPOS) and the percentages of Part B of A services are included in totals for Part A services. We derived this information from CMS’s “Appeals Fact Sheets,” which contain the percentage of appealed claims decided on time on a quarterly basis by type of service. Using these data, which are presented on a calendar year basis, we determined the percentage of appealed claims on a fiscal year basis that were not decided on time. Level 2, Level 3, and Level 4. To determine the percentage of appeals issued after the statutory time frame, we determined the number of appeals issued after the deadline date overall and by type of appellant for Levels 3 and 4.
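To illustrate the mechanics of this timeliness calculation, the sketch below tallies decisions issued after their deadlines from a set of appeal records. This is a minimal illustration under our own assumptions, not code used for this report, and the field names (decision_date, deadline_date, appellant_type) are hypothetical stand-ins for the corresponding MAS and MODACTS fields.

```python
from datetime import date

# Hypothetical appeal records; the field names are illustrative stand-ins,
# not actual MAS or MODACTS field names.
appeals = [
    {"decision_date": date(2014, 5, 1), "deadline_date": date(2014, 1, 30), "appellant_type": "provider"},
    {"decision_date": date(2013, 11, 15), "deadline_date": date(2013, 12, 1), "appellant_type": "beneficiary"},
    {"decision_date": date(2014, 8, 20), "deadline_date": date(2014, 2, 10), "appellant_type": "provider"},
]

# A decision issued after its adjusted deadline counts as late.
late = [a for a in appeals if a["decision_date"] > a["deadline_date"]]
print(f"Issued after the deadline: {100 * len(late) / len(appeals):.0f}%")

# The same tally broken out by type of appellant.
by_appellant = {}
for a in late:
    by_appellant[a["appellant_type"]] = by_appellant.get(a["appellant_type"], 0) + 1
print(by_appellant)
```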
The deadline date is captured and adjusted in MAS (Levels 2 and 3) and MODACTS (Level 4) to reflect any reasonable changes to the deadline, such as if the appellant submitted additional documentation after the appeal was filed. We found that the deadline date was missing in MODACTS for over one-third of appeals that had been decided during the time frame of our analysis. As a result, we set the deadline date for these appeals to 90 days after the appeal start date. DAB officials indicated this approach was generally appropriate. For Levels 2 and 3, we limited this analysis to appeal decisions issued on the merits. As a result, appeals with the following appeal decisions are excluded from the calculation: dismissed or escalated at Level 2; and dismissed, escalated, remanded, or in which no decision on the denied claim was made at Level 3. For Level 4, we excluded appeals referred to the Council by CMS, as well as appeals that were dismissed by the Council under the circumstances set forth in 42 C.F.R. §405.1114 at Level 4. We also calculated the percentage of appeal decisions issued on the merit that were issued after at least twice the statutory time frame, which we report for Levels 3 and 4, by determining the number of appeals in which decisions were issued 90 or more days after the deadline date. We excluded from these calculations appeals that were put on hold during CMS’s global settlement process. We estimated the number of Levels 2, 3, and 4 appeals resolved through the global settlement based on information as provided to us by CMS, OMHA, and DAB. Specifically, for Levels 2 and 3 we used fields in MAS, and for Level 4 we used a file provided to us by the Council within DAB on November 6, 2015. HHS officials noted that as of February 2016, OMHA and DAB were still in the process of dismissing appeals included in CMS’s global settlement. The dismissal process includes a review by OMHA and DAB of each settlement agreement which could result in the identification of appealed claims that were inadvertently included in the settlement. Therefore, our estimates may differ from the number of appeals settled once the dismissal process is complete. For Levels 1 through 3, we determined the proportion of appeals in which the appeals body reversed a coverage denial. For Level 4, we separated appeal decisions into different categories to better understand how the Level 4 appeal decision affected the Level 3 appeal decision. Level 1, Level 2, and Level 3. We report the number of appeals in which a decision was issued on the merits and the percentage of appeals that fully reversed, partially reversed, or did not reverse the coverage denial. In calculating those percentages, we do not reflect decisions based on other grounds, such as dismissals. As noted above, for Levels 1 through 3, we categorized appeals of Part B of A services as Part B services. As a result, the reversal rates we present may differ from reversal rates that categorize appeals of Part B of A services as Part A services. We calculated reversal rates overall, by type of service, and by whether or not the appeal was RA-related. For comparison purposes, we also report the total number of appeals in which a decision was issued, which is not limited to decisions issued on the merit. Level 4. We report the number of appeals that affirmed, reversed, dismissed, or remanded Level 3 decisions as well as the percentage of those appeals in each category. 
We calculated these percentages overall and by whether an appellant filed an appeal or whether CMS referred the appeal. For comparison purposes, we also report the total number of appeals in which a decision was issued, which is not limited to the four final action categories. Because the following Level 4 decisions do not comment on the Level 3 decision, we excluded them from our analysis: appeal decisions of other, special disposition, and dismiss request for review. Similarly, we excluded appeals that were escalated from OMHA because Level 3 did not issue a timely appeal decision and appeals in which the Council was asked to reopen an appeal it already decided. Our categorization of Level 4 decisions is as follows. Affirmed the Level 3 decision: (a) a final action of affirm; (b) a final action of modify; (c) if CMS did not refer the appeal, a final action of denial of request for review; and (d) if CMS referred the appeal, decline protest. Reversed the Level 3 decision: a final action of reverse decision. Dismissed the Level 3 decision: a final action of dismiss request for hearing. Remanded appeal to Level 3: a final action of remand to the Administrative Law Judge (ALJ), which has the effect of vacating the Level 3 decision and generating a new Level 3 appeal, according to Council officials. To report on the effect of CMS’s global settlement on the number of appeals pending decisions at Levels 3 and 4, we determined the number of appeals pending a decision as of the dates of our extract files. To determine the number of those appeals pending after the global settlement, we subtracted the number of appeals we estimated to be included in the global settlement from the number of pending appeals. To better understand the effect of late appeal decisions on the amount of interest paid by CMS to certain providers who have their postpayment claim denials reversed upon appeal, we asked CMS for (a) the amount of interest CMS paid to providers on the overpayments the agency initially collected and then returned after the appellant won a Level 3 appeal; and (b) the amount of interest that CMS would have paid to those providers if Level 3 had adhered to the 90-day statutory time frame for issuing appeal decisions. CMS officials told us that their data system did not enable them to create similar estimates for appeals reversed at Levels 4 or 5. To report on the amount of interest that CMS would not have paid if Level 3 had issued decisions within the statutory time frame, we subtracted estimate (b) from (a). To respond to our inquiry, CMS developed an estimate, which is based on several assumptions and is subject to certain limitations. To report on the amount of interest paid, CMS identified transactions in its Healthcare Integrated General Ledger Accounting System (HIGLAS) that were categorized as related to this type of interest payment. To report on the amount of interest that CMS would have paid if Level 3 had adhered to a 90-day time frame for issuing appeal decisions, CMS created an estimated date whereby the ALJ would have issued the appeal decision, because this information is not calculated in HIGLAS. This date was set equal to 180 days after HIGLAS indicated the overpayment collection was initiated by the MAC, and accounts for the following: the time it would take the MAC to collect the overpayment after the appellant lost the Level 2 appeal, the time it took for the appellant to file a Level 3 appeal, and the 90-day time frame for Level 3 to issue an appeal decision. 
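The timing logic behind this estimate can be sketched in a few lines. The dollar amounts and the flat interest rate below are invented for illustration, and a single recoupment date is assumed per overpayment; as described next, CMS actually used the median of multiple recoupment dates, and its HIGLAS-based calculation is more involved.

```python
from datetime import date, timedelta

DAILY_RATE = 0.10 / 365  # invented flat annual rate of 10 percent, for illustration only

def interest_for_period(amount, start, end):
    """Simple interest on a recouped overpayment for the days CMS held the funds."""
    return amount * DAILY_RATE * (end - start).days

amount = 50_000.00                  # invented recouped overpayment
recouped = date(2013, 1, 15)        # assumed single recoupment date
actual_decision = date(2014, 6, 1)  # actual (late) Level 3 decision

# Estimate (b): the decision is assumed issued 180 days after recoupment began,
# covering collection time, time to file the Level 3 appeal, and the 90-day time frame.
timely_decision = recouped + timedelta(days=180)

excess = (interest_for_period(amount, recouped, actual_decision)
          - interest_for_period(amount, recouped, timely_decision))
print(f"Interest attributable to the late decision: ${excess:,.2f}")
```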
Because overpayments can be collected over multiple dates, CMS set the date of the recoupment equal to the median date of all recoupment dates. CMS officials acknowledged that their estimate has limitations. First, CMS officials told us that the recording of overpayment adjustments, the assignment of codes which categorize types of accounts receivable, and the determination of interest payments are manual processes conducted by the MACs and that MACs may not, for example, be using the appropriate codes. Second, use of a median date to estimate the interest that would have been payable results in different estimates than if CMS were able to apply the interest rate separately for each recoupment made based on the actual date the overpayment was recouped. Third, CMS’s estimate is limited to Part A and Part B appeals and excludes any interest associated with DMEPOS appeals because CMS officials told us they did not have the necessary data in-house to conduct the analysis and obtaining access to the necessary data would have been administratively burdensome. To assess the reliability of the data used in this report, we performed manual and electronic testing to identify and correct for missing data and other anomalies; interviewed or obtained written information from CMS, OMHA, and DAB officials to confirm our understanding of the data; and reviewed related documentation. We determined that the data were sufficiently reliable for our purposes. Tables 11 through 14 provide information on the number and characteristics of appeals filed and decided at Levels 1 through 4 of the Medicare fee-for-service appeals process from fiscal year 2010 to fiscal year 2014. Tables 15 through 18 provide information on appeal reversal rates by type of service and whether the appeal was Recovery Auditor-related at Levels 1 through 4 of the Medicare fee-for-service appeals process for appeals decided from fiscal year 2010 to fiscal year 2014. This proposal would expand the authority of the Secretary of Health and Human Services to retain a portion of RA recoveries for the purpose of administering the RA program and allows RA program recoveries to fully fund RA-related appeals at OMHA and DAB. This proposal would institute a refundable filing fee for Medicare Parts A and B appeals for providers, suppliers, and state Medicaid agencies, including those acting as a representative of a beneficiary, and requires these entities to pay a per-claim filing fee at each level of appeal. According to the Department of Health and Human Services’ (HHS) budget justification materials, this filing fee will allow HHS to invest in the appeals system to improve responsiveness and efficiency. Fees will be returned to appellants who receive a fully favorable appeal determination. This proposal would extend the Centers for Medicare & Medicaid Services’ (CMS) prior authorization authority to all Medicare fee-for-service items and services, in particular those that are at the highest risk for improper payment. The proposal observes that CMS currently has authority to require prior authorization for only specified Medicare fee-for-service items and services. This proposal would allow the Secretary to withhold payment to an RA if a provider has filed an appeal at Level 2 and a decision is pending. According to HHS’s budget justification materials, aligning the RA contingency fee payments with the outcome of the appeal will ensure that CMS has assurance of the RA’s determination before making payment.
This proposal would allow the Secretary to adjudicate appeals through the use of sampling and extrapolation techniques. Additionally, this proposal authorizes the Secretary to consolidate appeals into a single administrative appeal at all levels of the appeals process. Parties who are appealing claims included within an extrapolated overpayment or consolidated previously will be required to file one appeal request for any such claims in dispute. This proposal would remand an appeal to Level 1 when new documentary evidence is submitted into the administrative record at Level 2 or above. Exceptions may be made if evidence was provided to the lower level adjudicator but erroneously omitted from the record, or an adjudicator denies an appeal on a new and different basis than earlier determinations. According to HHS’s budget justification materials, this proposal incentivizes appellants to include all evidence early in the appeals process and ensures the same record is reviewed and considered at subsequent levels of appeal. This proposal would increase the minimum amount in controversy for ALJ adjudication to the federal court (Level 5) amount in controversy requirement ($1,500 in calendar year 2016). According to HHS’s budget justification materials, this will allow the amount at issue to better align with the amount spent to adjudicate the claim. This proposal would allow OMHA to use attorneys called Medicare magistrates to adjudicate appealed claims of lower amounts in controversy—specifically, amounts that fall below the federal district court amount in controversy threshold. This proposal would allow OMHA to issue decisions without holding a hearing if there is no material fact in dispute. These cases include appeals, for example, in which Medicare does not cover the cost of a particular drug. Kathleen M. King, (202) 512-7114 or kingk@gao.gov. In addition to the contact named above, Lori Achman, (Assistant Director), Todd Anderson, Susan Anthony, Christine Davis, Julianne Flowers, Krister Friday, Shannon Legeer, Amanda Pusey, Lisa Rogers, Cherie’ Starck, and Jennifer Whitworth made key contributions to this report.
In fiscal year 2014, Medicare processed 1.2 billion FFS claims submitted by providers on behalf of beneficiaries. When Medicare denies or reduces payment for a claim or a portion of a claim, providers, beneficiaries, and others may appeal these decisions through Medicare's appeals process. In recent years there have been increases in the number of filed and backlogged appeals (i.e., pending appeals that remain undecided after statutory time frames). GAO was asked to examine Levels 1 through 4 of Medicare's appeals process. This report examines (1) trends in appeals for fiscal years 2010 through 2014, (2) data HHS uses to monitor the appeals process, and (3) HHS efforts to reduce the number of appeals filed and backlogged. GAO analyzed data from the three data systems used to monitor appeals, reviewed relevant HHS agency documentation, policies, and federal internal control standards, and interviewed HHS agency officials and others. The appeals process for Medicare fee-for-service (FFS) claims consists of four administrative levels of review within the Department of Health and Human Services (HHS), and a fifth level in which appeals are reviewed by federal courts. Appeals are generally reviewed by each level sequentially, as appellants may appeal a decision to the next level depending on the prior outcome. Under the administrative process, separate appeals bodies review appeals and issue decisions under time limits established by law, which can vary by level. From fiscal years 2010 through 2014, the total number of filed appeals at Levels 1 through 4 of Medicare's FFS appeals process increased significantly but varied by level. Level 3 experienced the largest rate of increase in appeals—from 41,733 to 432,534 appeals (936 percent)—during this period. A significant portion of the increase was driven by appeals of hospital and other inpatient stays, which increased from 12,938 to 275,791 appeals (over 2,000 percent) at Level 3. HHS attributed the growth in appeals to its increased program integrity efforts and a greater propensity of providers to appeal claims, among other things. GAO also found that the number of appeal decisions issued after statutory time frames generally increased during this time, with the largest increase in and largest proportion of late decisions occurring at appeal Levels 3 and 4. For example, in fiscal year 2014, 96 percent of Level 3 decisions were issued after the general 90-day statutory time frame for Level 3. The Centers for Medicare & Medicaid Services (CMS) and two other components within HHS that are part of the Medicare appeals process use data collected in three appeal data systems—such as the date when the appeal was filed, the type of service or claim appealed, and the length of time taken to issue appeal decisions—to monitor the Medicare appeals process. However, these systems do not collect other data that HHS agencies could use to monitor important appeal trends, such as information related to the reasons for Level 3 decisions and the actual amount of Medicare reimbursement at issue. GAO also found variation in how appeals bodies record decisions across the three systems, including the use of different categories to track the type of Medicare service at issue in the appeal. Absent more complete and consistent appeals data, HHS's ability to monitor emerging trends in appeals is limited, which is inconsistent with federal internal control standards that require agencies to run and control agency operations using relevant, reliable, and timely information.
HHS agencies have taken several actions aimed at reducing the total number of Medicare appeals filed and the current appeals backlog. For example, in 2014, CMS agreed to pay a portion of the payable amount for certain denied hospital claims on the condition that pending appeals associated with those claims were withdrawn and rights to future appeals of them waived. However, despite this and other actions taken by HHS agencies, the Medicare appeals backlog continues to grow at a rate that outpaces the adjudication process and will likely persist. Further, HHS efforts do not address inefficiencies regarding the way appeals of certain repetitious claims—such as claims for monthly oxygen equipment rentals—are adjudicated, which is inconsistent with federal internal control standards. Under the current process, if the initial claim is reversed in favor of the appellant, the decision generally cannot be applied to the other related claims. As a result, more appeals must go through the appeals process. GAO recommends that HHS take four actions, including improving the completeness and consistency of the data used by HHS to monitor appeals and implementing a more efficient method of handling appeals associated with repetitious claims. HHS generally agreed with four of GAO's recommendations, and disagreed with a fifth recommendation, citing potential unintended consequences. GAO agrees and has dropped that recommendation.
Federal funding for highways is provided to the states mostly through a series of grant programs collectively known as the Federal-Aid Highway Program. Periodically, Congress enacts multiyear legislation that authorizes funding for the nation’s surface transportation programs. In 2005, Congress enacted SAFETEA-LU, which authorized $197.5 billion for the Federal-Aid Highway Program for fiscal years 2005 through 2009. In a joint federal-state partnership, FHWA, within DOT, administers the Federal-Aid Highway Program and distributes most funds to the states through annual apportionments established by statutory formulas. Once FHWA apportions these funds, the funds are available for obligation for construction, reconstruction, and improvement of highways and bridges on eligible federal-aid highway routes, as well as for other authorized purposes. The amount of federal funding made available for highways has been substantial—from $34.4 billion to $43 billion per year for fiscal years 2005 through 2009. The Highway Trust Fund was established by Congress in 1956 to fund construction of the Interstate Highway System. The Highway Trust Fund receives excise taxes collected on motor fuels and truck-related taxes, including taxes on gasoline, diesel fuel, gasohol, and other fuels; truck tires and truck sales; and heavy vehicle use. The Department of the Treasury collects fuel taxes from a small number of corporations (i.e., oil refineries or fuel tank farms) located in a relatively small number of places and not directly from states. As such, FHWA calculates motor fuel-related contributions based on estimates of the gallons of fuel used on highways in each state by relying on data gathered from state revenue agencies and summary tax data available from Treasury as part of the estimation process (see app. II). Most of the funds from the Highway Account of the Highway Trust Fund (about 83.3 percent) were apportioned to states across 13 formula programs during the 5-year SAFETEA-LU period. Included among these are 6 “core” highway programs (see table 1). In addition to formula programs, Congress also directly allocated about 7.4 percent of Highway Account funds to state departments of transportation through congressionally directed High Priority Projects, which provided funding for a total of 5,091 specific projects identified in SAFETEA-LU. The remaining funds, about 9.3 percent of the total, represent dozens of other authorized programs allocated to state departments of transportation, congressionally directed projects other than High Priority Projects, administrative expenses, and funding provided to states by other DOT agencies, such as the National Highway Traffic Safety Administration and Federal Motor Carrier Safety Administration (see fig. 1). Some of the apportioned programs use states’ contributions to the Highway Account of the Highway Trust Fund as a factor in determining funding levels for each state. As previously mentioned, FHWA has to estimate the fuel tax contributions made to the fund by users in each state. The collection and estimation process takes place over several years (see fig. 2), and thus, the data used to calculate the formula are two years old. For example, the data used to apportion funding to states in fiscal year 2009 were based on estimated collections attributable to each state in fiscal year 2007. 
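A trivial sketch of the pairing, assuming the 2-year lag is constant across the period as described above:

```python
# Apportionments for fiscal year t rely on contributions estimated for fiscal year t - 2.
LAG_YEARS = 2

for fy in range(2005, 2010):  # the initial SAFETEA-LU period
    print(f"FY {fy} apportionments are based on FY {fy - LAG_YEARS} estimated contributions")
```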
By the early 1980s, construction of the Interstate Highway System was nearing completion, and a larger portion of the funds authorized from the Highway Trust Fund were for non-Interstate programs. The Surface Transportation Assistance Act of 1982 provided, for the first time, that each state would for certain programs receive a “minimum allocation” of 85 percent of its share of estimated tax payments to the Highway Account of the Highway Trust Fund. This approach was largely retained when Congress reauthorized the program in 1987. Since then, each state has received a specific share of the total program (defined as all apportioned programs plus High Priority Projects), and rate-of-return considerations have been incorporated into the funds states received for the Interstate Maintenance, National Highway System, and Surface Transportation programs. In 2005, through SAFETEA-LU, Congress implemented the Equity Bonus Program, which was designed to give states a guaranteed rate of return of 92 percent by fiscal year 2008. In June 2010, we reported that every state but one received more funding for highway programs than users contributed to the Highway Account. The only exception, Texas, received about $1.00 (99.7 cents) for each dollar contributed. Among other states, this ranged from a low of $1.02 for both Arizona and Indiana to a high of $5.63 for the District of Columbia. In addition, all states, including Texas, received more funding than was contributed during both fiscal years 2007 and 2008. In effect, almost every state was a donee state during the first 4 years of SAFETEA-LU. This occurred because overall, more funding was authorized and apportioned than was collected from highway users, as the account was supplemented by general funds from Treasury. We also reported that whether a state is viewed as a “donor” or “donee” state can depend on which of the four following methods of calculating states’ rate of return is used: 1. States’ rate of return per dollar contributed to the Highway Account, using same-year comparison data. This analysis compares funding states received from the Highway Trust Fund Highway Account with the dollars estimated to have been collected in each state into the Highway Account in that same year. For example, for fiscal year 2007, it compares the highway funds states received in fiscal year 2007 with the amount collected and contributed in that fiscal year. This analysis includes all funding provided to the states from the Highway Account, including (1) funds apportioned by formula, (2) High Priority Projects, and (3) other authorized programs, including safety program funding provided to states by other DOT agencies, such as the National Highway Traffic Safety Administration and Federal Motor Carrier Safety Administration (see fig. 1 for a breakdown of these funds). This was the method used to calculate the results reported in our previous report that every state but one received more funding for highway programs than users contributed. 2. States’ rate of return per dollar contributed to the Highway Account, using time-lagged data, apportioned programs, and High Priority Projects. This method applies the same dollar return calculation but uses contribution data that were available at the time funds were apportioned to the states—the lagged-time data were 2 years old, due to the time lag between when Treasury collects fuel and truck excise taxes and funds are apportioned.
It also uses a subset of Federal-Aid Highway programs including both programs apportioned to states by formula and High Priority Projects; it does not include other allocated highway programs or other funding states receive from other DOT agencies, such as the National Highway Traffic Safety Administration and Federal Motor Carrier Safety Administration. 3. States’ relative share rate of return from the Highway Account, using time-lagged comparison data, apportioned programs, and High Priority Projects. This third calculation method is based on a state’s “relative share”—that is, the amount a state receives relative to other states instead of an absolute, dollar rate of return. In order to calculate this rate of return, FHWA must determine what proportion of the total national contributions came from highway users in each state. Each state’s share of contributions into the Highway Account of the Highway Trust Fund is then used to calculate a relative rate of return—how the proportion of each state’s contribution compares to the proportion of funds the state received. 4. States’ relative share rate of return from the Highway Account, using same-year comparison data. This fourth method for calculating a state’s rate of return involves evaluating the relative share as described in method 3, but using the same-year comparison data. Our analysis of the entire 5-year period of SAFETEA-LU shows that every state was a donee state, receiving more funding for highway programs than its users contributed to the Highway Account (see fig. 3). Funding received for each dollar contributed ranged from about $1.03 for Texas to about $5.85 for the District of Columbia. Every state was a donee state during the 5-year SAFETEA-LU period because overall, more funding was authorized and apportioned than was collected from highway users, since the account was supplemented by general funds from Treasury. Since our June 2010 report, nearly every state increased its rate of return by at least 1 cent over what we originally reported. Hawaii was the only state that remained constant and did not have an increase in its rate of return in fiscal year 2009. Although our analysis shows that states received more than was contributed, other calculations provide different results. Because there are different methods of calculating a rate of return, and the method used affects the results, confusion can result over whether a state is a donor or donee. A state can appear to be a donor using one type of calculation and a donee using a different type. We found that each state received more in funding than its users contributed during the SAFETEA-LU period when using time-lagged comparison data to calculate states’ rate of return per dollar. The rate of return ranged from a low of $1.04 per dollar for 16 states to a high of $5.26 per dollar for the District of Columbia, as shown in figure 4. This methodology results in states generally having a lower dollar rate of return than our calculations using same-year data and differs in that we use a subset of Federal-Aid Highway programs including both programs apportioned to states by formula and High Priority Projects. The results from our June 2010 report have not changed because fiscal year 2009 data were included in our 2010 analysis, and all states are donee states. A third calculation, based on a state’s “relative share”—the amount a state receives relative to other states using time-lagged data—results in both donor and donee states.
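The per-dollar and relative share computations at the core of these four methods reduce to simple ratios. The following sketch uses invented contribution and funding figures for two hypothetical states; it shows how the same data can make a state a donee by one measure and a donor by the other.

```python
# Invented contributions to and funding from the Highway Account (in billions)
# for two hypothetical states that together make up the whole program.
contributions = {"State A": 2.0, "State B": 1.0}
funding = {"State A": 2.2, "State B": 1.4}

total_contrib = sum(contributions.values())
total_funding = sum(funding.values())

for state in contributions:
    # Dollar rate of return (methods 1 and 2): funding received per dollar contributed.
    per_dollar = funding[state] / contributions[state]
    # Relative share rate of return (methods 3 and 4): share of total funding
    # received divided by share of total contributions.
    relative = (funding[state] / total_funding) / (contributions[state] / total_contrib)
    print(f"{state}: ${per_dollar:.2f} per dollar contributed; relative share {relative:.0%}")
```

With these invented numbers, State A receives more than a dollar back for each dollar contributed (a donee per dollar) yet a smaller share of total funding than its share of contributions (a donor by relative share), the same pattern the California example below illustrates with actual data.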
Congress defined this method in SAFETEA-LU as the one FHWA uses for calculating rates of return for the purpose of apportioning highway funding to the states. A comparison of the relative rate of return on states’ contributions showed 28 donor states, receiving less than 100 percent relative rate of return, and 23 states as donees receiving more than a 100 percent relative rate of return (see fig. 5). States’ relative rates of return ranged from a low of 91.3 percent for 12 states to a high of 461 percent for the District of Columbia. Similar to the return per dollar analysis in figure 4, this calculation includes only apportioned funds and High Priority Projects allocated to states, and excludes other DOT authorized programs allocated to states (see fig. 1). The difference between a state’s absolute per dollar return and relative rate of return can create confusion because the relative share calculation is sometimes mistakenly referred to as “cents on the dollar.” As we previously reported, using the relative share method of calculation will result in some states being “winners” and other states being “losers.” If one state receives a higher proportion of highway funds than it is viewed as having contributed, another state must receive a lower proportion than it contributed. However, because more funding has recently been apportioned and allocated from the Highway Account than is being contributed by highway users, a state can receive more than it contributes to the Highway Trust Fund Highway Account, making it a donee under its rate of return per dollar, but a donor under its relative share rate of return. California provides a useful example. From fiscal years 2005 through 2009, using same-year contributions and funding across all Highway Trust Fund Highway Account allocations and apportionments, California received $1.19 for each dollar contributed. This analysis shows California as a donee state (see table 2). Alternatively, when calculating a dollar rate of return using state contribution estimates available at the time of apportionment (as shown in fig. 4) and including only programs covered in rate-of-return adjustments, California remains a donee state, receiving $1.04 for each dollar contributed. In contrast, using the relative share approach for the fiscal year 2005 through 2009 period, California received 91 percent of the share it contributed in federal highway-related taxes, which would make it a donor state. The fourth method of calculating a state’s rate of return is not one FHWA normally uses. It involves evaluating the relative share as just described, but using the same-year comparison data. Again, because of the time lag required to estimate state highway user contributions to the Highway Account, such analysis is possible only 2 years after FHWA calculates apportionments for states. Our analysis using this approach results in yet another set of rate-of-return answers. When this analysis is applied to all states, approximately half of the states are donor states and the other half are donee states. Specifically, under this methodology, 25 states received less than 100 percent relative rate of return, and 26 states received more than a 100 percent relative rate of return. Compared with the results we reported in our 2010 report, our recent analysis with fiscal year 2009 data resulted in Louisiana, Maine, and Washington changing from a donor to a donee state using the relative share method for a state’s rate of return in the same year.
For instance, in our June 2010 report, the relative share for Louisiana was 97.97 percent, but it increased to 102.98 percent with the inclusion of fiscal year 2009 data. Conversely, Minnesota shifted from a donee to a donor state. In our June 2010 report, the relative share for Minnesota was 101.26 percent but it changed to 99.46 percent with the inclusion of fiscal year 2009 data, again using the relative share method for a state’s rate of return in the same year. Table 3 shows the results of all states for the entire 5-year SAFETEA-LU period using the four analysis methods. The Equity Bonus provisions addressed states’ concerns about the rate of return and provided more funds to states than any other individual Federal-Aid Highway formula program. Over SAFETEA-LU’s 5-year authorization period, the Equity Bonus provisions provided $44 billion in funding authority to the states, while the second-largest formula program, the Surface Transportation Program, provided $32.5 billion. Each year since fiscal year 2005, about $2.6 billion stayed as Equity Bonus program funds and could be used for any purpose eligible under the Surface Transportation Program. Any additional Equity Bonus funds are added to the apportionments of the six “core” Federal-Aid Highway formula programs: Interstate Maintenance, National Highway System, Surface Transportation, Congestion Mitigation and Air Quality, Highway Bridge, and Highway Safety Improvement. Because states can transfer funds among the core programs, the specific core program to which Equity Bonus funds are apportioned is not critical. States may qualify for Equity Bonus funding by meeting criteria contained in one of three provisions (see fig. 6). A state that meets the criteria for more than one of the provisions receives funding under the provision providing the greatest amount of funding. FHWA conducts Equity Bonus calculations annually. However, with the extension of SAFETEA-LU authorization for 2 years, the Equity Bonus Program has not been recalculated. According to FHWA officials, since fiscal years 2010 and 2011 were funded at the fiscal year 2009 level, states simply received the same amount for fiscal years 2010 and 2011 as they did in fiscal year 2009. For the first criterion—the guaranteed relative rate of return—for fiscal year 2005, all states were guaranteed at least 90.5 percent of their share of estimated contributions. The guaranteed rate of return increased in steps, rising to 92 percent in fiscal year 2009. The second criterion—the guaranteed increase over average annual apportionments authorized by the Transportation Equity Act for the 21st Century (TEA-21)—also varied by year, rising from 117 percent in fiscal year 2005 to 121 percent in fiscal year 2009. The number of states qualifying under the first two provisions varied from year to year. For the third criterion, a guarantee to “hold harmless” states that had certain qualifying characteristics at the time SAFETEA-LU was enacted, 27 states had at least one of these characteristics. A number of states had more than one of these characteristics. Forty-seven states received Equity Bonus funding every year during the SAFETEA-LU period. However, the District of Columbia, Rhode Island, and Vermont each had at least 1 year where they did not receive Equity Bonus funding because they did not need it to reach the funding level specified under the three provisions. Maine was the only state that did not receive an Equity Bonus in any year.
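Because a state is funded under whichever provision yields the most, the qualification logic amounts to taking the maximum of three floors. The sketch below uses the fiscal year 2009 percentages cited above and invented dollar figures; the actual FHWA calculations apply statutory formulas well beyond this simplification, and the hold-harmless provision is modeled here as a simple funding floor.

```python
def equity_bonus_floor(contribution_share, total_program, tea21_average, hold_harmless_level):
    """Guaranteed fiscal year 2009 funding level: the greatest of the three provisions."""
    relative_return_floor = 0.92 * contribution_share * total_program  # provision 1: 92 percent relative rate of return
    growth_floor = 1.21 * tea21_average  # provision 2: 121 percent of average annual TEA-21 apportionments
    return max(relative_return_floor, growth_floor, hold_harmless_level)  # provision 3: hold harmless

# Invented example: a state contributing 2 percent of national highway user taxes.
floor = equity_bonus_floor(
    contribution_share=0.02,
    total_program=40e9,       # hypothetical total of apportioned programs plus High Priority Projects
    tea21_average=700e6,      # hypothetical average annual TEA-21 apportionment
    hold_harmless_level=0.0,  # assume the state has no qualifying hold-harmless characteristic
)
pre_bonus_funding = 800e6     # hypothetical funding before the Equity Bonus
bonus = max(0.0, floor - pre_bonus_funding)
print(f"Equity Bonus required: ${bonus / 1e6:.0f} million")
```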
As a result, half of all states received at least a 25 percent increase in their overall Federal-Aid Highway Program funding over their core funding. Each state’s percentage increase in its overall funding total for apportioned programs and High Priority Projects for fiscal years 2005 through 2009 due to Equity Bonus funding is shown in figure 7. Additional factors further complicate the relationship between states’ contributions to the Highway Trust Fund and the funding states receive. These include (1) the infusion of significant amounts of general revenues into the Highway Trust Fund, (2) the challenge of factoring performance and accountability for results into transportation investment decisions, and (3) the long-term sustainability of existing mechanisms and the challenges associated with developing new approaches to funding the nation’s transportation system. The infusion of significant amounts of general revenues into the Highway Trust Fund Highway Account breaks the link between highway taxes and highway funding. In fiscal year 2008, the Highway Trust Fund held insufficient amounts to sustain the authorized level of funding and, partly as a result, we placed it on our list of high-risk programs. To cover the shortfall, from fiscal years 2008 through 2010 Congress approved the transfer of about $29.7 billion in additional general revenues into the Highway Account of the Highway Trust Fund. This transfer affected each state’s rate of return and resulted in all states being donees—receiving more funds than they contributed to the Highway Account. Taken as a whole, for fiscal year 2009, the general fund transfers had a significant impact on the states’ rate of return, with the federal government paying about $42.4 billion to the states, while highway user fees paid into the Highway Account were $30.1 billion. This means that, to a large extent, funding has shifted away from the contributions of highway users, breaking the link between highway taxes paid and benefits received by users. Furthermore, the infusion of a significant amount of general fund revenues complicates rate-of-return analysis because the current method of calculating contributions does not account for states’ general revenue contributions. Because for many states the share of Highway Trust Fund contributions and general revenue contributions are different, state-based contributions to all the funding in the Trust Fund are no longer clear. In addition, since March 2009, the American Recovery and Reinvestment Act of 2009 apportioned an additional $26.7 billion to the states for highways—a significant augmentation of federal highway spending that was funded with general revenues. Using rate of return as a major factor in determining federal highway funding levels is at odds with re-examining and restructuring federal surface transportation programs so that performance and accountability for results are factored into transportation investment decisions. As we have reported, for many surface transportation programs, goals are numerous and conflicting, and the federal role in achieving the goals is not clear. Many of these programs have no relationship to the performance of either the transportation system or the grantees receiving federal funds and do not use the best tools and approaches to ensure effective investment decisions. Our previous work has outlined the need to create well-defined goals based on identified areas of federal interest and a clearly defined federal role in relation to other levels of government.
We have suggested that where the federal interest is less evident, state and local governments could assume more responsibility, and some functions could potentially be assumed by the states or other levels of government. Furthermore, incorporating performance and accountability for results into transportation funding decisions is critical to improving results. However, the current approach presents challenges. The Federal-Aid Highway Program, in particular, distributes funding through a complicated process in which the underlying data and factors are ultimately not meaningful because they are overridden by other provisions designed to yield a largely predetermined outcome—that of returning revenues to their attributed state of origin. Moreover, once the funds are apportioned, states have considerable flexibility to reallocate them among highway and transit programs. As we have reported, this flexibility, coupled with a rate-of-return orientation, essentially means that the Federal-Aid Highway Program functions, to some extent, as a cash transfer, general purpose grant program. This approach poses considerable challenges to introducing performance orientation and accountability for results into highway investment decisions. For three highway programs that were designed to meet national and regional transportation priorities, we have recommended that Congress consider a competitive, criteria-based process for distributing federal funds. Finally, using rate of return as a major factor in determining federal highway funding levels poses problems because funding the nation’s transportation system through taxes on motor vehicle fuels is likely unsustainable in the long term. Receipts for the Highway Trust Fund derived from motor fuel taxes have declined in purchasing power, in part because the federal gasoline tax rate has not increased since 1993. In fiscal year 2008, total contributions to the Highway Account of the Highway Trust Fund decreased by more than $3.5 billion from fiscal year 2007, the first year of decrease during the SAFETEA-LU period. The Congressional Budget Office forecasts another revenue shortfall in the Highway Account of the Highway Trust Fund by the end of fiscal year 2012. Over the long term, vehicles will become more fuel efficient and increasingly run on alternative fuels. As such, fuel taxes may not be a long-term source of transportation funding. Furthermore, transportation experts have noted that transportation policy needs to recognize emerging national and global challenges, such as reducing the nation’s dependence on imported fuel and minimizing the effect of transportation systems on the global climate. A fund that relies on increasing the use of motor fuels to remain solvent might not be compatible with the strategies that may be required to address these challenges. In the future, policy discussions will need to consider what the most adequate and appropriate transportation financing systems will be, and whether the current system continues to make sense. The National Surface Transportation Infrastructure Financing Commission—created by SAFETEA-LU to, among other things, explore alternative funding mechanisms for surface transportation—identified and evaluated numerous revenue sources for surface transportation programs in its February 2009 report, including alternative approaches to the fuel tax, mileage-based user fees, and freight-related charges. 
The report also discussed using general revenues to finance transportation investment but concluded that it was a weak option in terms of economic efficiency and other factors and recommended that new sources of revenue to support transportation be explored. These new sources of revenue may or may not lend themselves to using a rate-of-return approach. We provided a draft of this report to DOT for review and comment. DOT provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees and the Secretary of Transportation. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or herrp@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO Staff who made major contributions to this report are listed in appendix III. This report addresses the following objectives: (1) the amount of revenue contributed to the Highway Trust Fund Highway Account compared with the funding states received during the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU) period; (2) the provisions in place during the SAFETEA-LU period to address rate-of-return issues across states, and how they affected the highway funding states received; and (3) additional factors that affected the relationship between contributions to the Highway Trust Fund and the funding states receive. This report updates and includes additional information for a related report issued in June 2010. The main update to the prior report is the inclusion and additional analysis of fiscal year 2009 data on the states’ contributions to the Highway Trust Fund that was made publicly available by the Federal Highway Administration (FHWA). As a result, the analyses in this update are inclusive of the entire initial 5-year SAFETEA-LU period. Any analyses from the 2010 report that were not impacted by new data were included in this update as context. To determine the amount of revenue states contributed to the Highway Trust Fund Highway Account compared with the funding they received during the SAFETEA-LU period, we completed four analyses using FHWA data. We met with FHWA and other Department of Transportation (DOT) officials to discuss availability of data and appropriate methodologies. We used FHWA estimates of payments made into the Highway Account of the Highway Trust Fund, by state, and the actual total apportionments and allocations made from the fund, by state. This is sometimes referred to as a “dollar-in, dollar-out” analysis. The source data are published annually in Highway Statistics as Table FE-221, titled “Comparison of Federal Highway Trust Fund Highway Account Receipts Attributable to the States and Federal-Aid Apportionments and Allocations from the Highway Account.” FHWA officials confirmed that it contains the best estimate of state contributions and actual total apportionments and allocations received by states from the Highway Account of the fund. We did not independently review FHWA’s process for estimating states’ contributions into the Highway Trust Fund.
However, we have reviewed this process in the past, and FHWA made changes to the process as a result of our review. In addition, we did not attribute any prior balances in the Highway Trust Fund back to states of origin because these funds are not directly tied to any specific year or state. We performed alternative analyses to demonstrate that different methodologies provide different answers to the question of how the contributions of states’ highway users compared with the funding states received. Using the same data as just described, we performed a “relative share” analysis, which compared each state’s estimated proportion of the total contributions to the Highway Account with each state’s proportion of total Federal-Aid Highway funding. We also examined how states fared using FHWA’s approach for determining the Equity Bonus Program funding apportionments. We performed this analysis to show the outcomes for states based on the information available at the time the Equity Bonus program apportionments are made. The Equity Bonus program amounts are calculated using the statutory formulas for a subset of Federal-Aid Highway Programs. These include all programs apportioned by formula plus the allocated High Priority Projects. FHWA uses the most current contribution data available at the time it does its estimates. However, as explained, the time lag for developing these data is about 2 years. Therefore, we applied the contribution data for 2003 through 2007 to the funding data for 2005 through 2009, the full SAFETEA-LU period. For these data, we analyzed (1) the total estimated contributions by state divided into the total funding received by state—the dollar-in, dollar-out methodology—and (2) a comparison of the share of contributions to share of payments received for each state. We obtained data from the FHWA Office of Budget for the analysis of state dollar-in, dollar-out outcomes and state relative share data for the Equity Bonus Program. We completed our analyses across the total years of the SAFETEA-LU period, 2005 through 2009. We interviewed FHWA officials and obtained additional information from FHWA on the steps taken to ensure data reliability and determined the data were sufficiently reliable for the purposes of this report. To determine the provisions in place during the SAFETEA-LU period to address rate-of-return issues across states, and how they affected the highway funding states received, we reviewed SAFETEA-LU legislation and reports by the Congressional Research Service (CRS), FHWA, and others as applicable. We conducted an analysis of the FHWA data on the Equity Bonus Program provisions, which were created explicitly to address the rate-of-return issues across states. Our analysis compared funding levels distributed to states through apportionment programs and High Priority Projects before and after Equity Bonus Program provisions were applied, and calculated the percentage increase each state received as a result of the Equity Bonus. We also spoke with FHWA officials to get their perspectives. To determine what additional factors affected the relationship between contributions to the Highway Trust Fund compared with the funding states receive, we reviewed GAO reports on the Highway Trust Fund and federal surface transportation programs, CRS and FHWA reports, and the National Surface Transportation Infrastructure Financing Commission report. In addition, we reviewed existing FHWA data on the status of the Highway Account of the Highway Trust Fund.
We also met with officials from DOT’s Office of Budget and Programs and FHWA to obtain their perspectives on the issue. Currently, the Federal Highway Administration (FHWA) estimates state-based contributions to the Highway Account of the Highway Trust Fund through a process that includes data collection, adjustment, verification, and final calculation of the states’ highway users’ contributions. First, FHWA collects monthly motor fuel use data and related annual state tax data from state departments of revenue. FHWA then adjusts states’ data by applying its own models using federal and other data to establish data consistency among the states. FHWA provides feedback to the states on these adjustments and estimates through FHWA Division Offices. Finally, FHWA applies each state’s estimated share of highway fuel usage to total taxes collected nationally to arrive at a state’s contribution to the Highway Trust Fund. We did not assess the effectiveness of FHWA’s process for estimating the amount of tax funds attributed to each state for this report. According to FHWA officials, because data from state revenue agencies are more reliable and comprehensive than vehicle miles traveled data, FHWA uses state tax information to calculate state contributions. States submit regular reports to FHWA, including a monthly report on motor fuel consumption due 90 days after month’s end and an annual report on motor fuel tax receipts due 90 days after calendar year’s end. Because states have a wide variety of fuel tracking and reporting methods, FHWA must adjust the data to achieve uniformity. FHWA analyzes and adjusts fuel usage data, such as off-highway use related to agriculture, construction, industrial, marine, rail, aviation, and off-road recreational usage. It also analyzes and adjusts use data based on public sector use, including federal civilian, state, county, and municipal use. FHWA headquarters and division offices also work together to communicate with state departments of revenue during the attribution estimation process. According to FHWA officials, each year FHWA headquarters issues a memo prompting its division offices to have each state conduct a final review of the motor fuel gallons reported by their respective states. FHWA division offices also are required to assess their state’s motor fuel use and highway tax receipt process at least once every 3 years to determine if states are complying with FHWA guidance on motor fuel data collection. Once the data are finalized, FHWA applies each state’s estimated share of taxed highway fuel use to the total taxes collected to arrive at a state’s contribution in the following manner. Finalized estimations of gallons of fuel used on highways in two categories—gasoline and special fuels—allow FHWA to calculate each state’s share of the total on-highway fuel usage. The shares of fuel use for each state are applied to the total amount of taxes collected by the Department of the Treasury in each of the 10 categories of highway excise tax. The state’s gasoline share is applied to the gasoline and gasohol taxes, and the state’s special fuels share, which includes diesel fuel, is applied to all other taxes, including truck taxes. In addition to the contact named above, Steve Cohen (Assistant Director), Jennifer Kim (Analyst-in-Charge), Brian Hartman, Bert Japikse, Delwen Jones, Max Sawicky, Josh Ormond, and Tim Schindler made key contributions to this report.
Federal funding for highways is provided to the states mostly through a series of grant programs known as the Federal-Aid Highway Program, administered by the Department of Transportation's (DOT) Federal Highway Administration (FHWA). In 2005, the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU) authorized $197.5 billion for the Federal-Aid Highway Program for fiscal years 2005 through 2009. The program operates on a "user pay" system, wherein users contribute to the Highway Trust Fund through fuel taxes and other fees. The distribution of funding among the states has been a contentious issue. States that receive less than highway users contribute are known as "donor" states and states that receive more than users contribute are known as "donee" states. GAO was asked to examine for the SAFETEA-LU period (1) how contributions to the Highway Trust Fund compared with the funding states received, (2) what provisions were used to address rate-of-return issues across states, and (3) what additional factors affect the relationship between contributions to the Highway Trust Fund and the funding states receive. To conduct this review, GAO obtained and analyzed data from FHWA, reviewed FHWA and other reports, and interviewed FHWA and DOT officials. From 2005 to 2009, every state received more funding for highway programs than its highway users contributed to the Highway Account of the Highway Trust Fund. This was possible because more funding was authorized and apportioned than was collected from the states, and the fund was augmented with about $30 billion in general revenues since fiscal year 2008. If the percentage of funds states contributed to the total is compared with the percentage of funds states received (i.e., relative share), then 28 states received a relatively lower share and 23 states received a relatively higher share than they contributed. Thus, depending on the method of calculation, the same state can appear to be either a donor or donee state. The Equity Bonus Program was used to address rate-of-return issues. It guaranteed a minimum return to states, providing them with about $44 billion. Nearly all states received Equity Bonus funding, and about half received a significant increase—at least 25 percent—over their core funding. The infusion of general revenues into the Highway Trust Fund affects the relationship between funding and contributions, as a significant amount of highway funding is no longer provided by highway users. Additionally, using rate of return as a major factor in determining highway funding poses challenges related to performance and accountability in the highway program; in effect, rate-of-return calculations override other considerations to yield a largely predetermined outcome—that of returning revenues to their state of origin. Because of these and other challenges, funding surface transportation programs remains on GAO's High-Risk list. GAO is not making any recommendations. DOT reviewed a draft of this report and provided technical comments, which GAO incorporated as appropriate.
China's December 2001 accession to the WTO resulted in commitments to open and liberalize its economy and offer a more predictable environment for trade and foreign investment in accordance with WTO rules. The U.S. government's efforts to ensure China's compliance with its trade commitments under the WTO are part of an overall U.S. structure, led by USTR, to monitor and enforce foreign governments' compliance with existing trade agreements. Among other things, USTR is required by law to identify any foreign policies and practices that constitute significant barriers to U.S. goods and services, including those that are covered by international agreements to which the United States is a party. At least 16 other agencies are involved, but USTR and the Departments of Commerce, State, and Agriculture have the primary responsibilities for trade agreement monitoring and enforcement. Each of these four key agencies has within its organizational structure a unit that focuses on China or the greater Asian region. These units have primary responsibility for coordinating the agencies' China-WTO compliance activities, although numerous other units within the agencies are also involved. The units routinely draw on assistance from experts in these other units to obtain information and expertise, as needed. Additionally, the key agencies have units in China or at the WTO, and staff in those overseas units are also involved in the agencies' compliance activities. USTR's annual compliance reports examine nine broad categories of WTO commitments undertaken by China and include a detailed narrative outlining China's compliance with these commitments. USTR is required to report annually to Congress on China's compliance with commitments made in connection with its accession to the WTO, including both multilateral commitments and bilateral commitments made to the United States. The reports, which are submitted to Congress every year by December 11, the anniversary of China's accession to the WTO, are consistent in format and language. They are approximately 100 pages in length and divided into nine broad sections based on categories of WTO commitments. These sections are (1) trading rights and distribution services, (2) import regulation, (3) export regulation, (4) internal policies affecting trade, (5) investment, (6) agriculture, (7) intellectual property rights, (8) services, and (9) legal framework. In each of these sections, USTR identifies areas where China has made progress in meeting its trade commitments and describes at length both the specific and the broader compliance issues faced by U.S. industry. As USTR notes, the report does not provide an exhaustive analysis of China's implementation of the particular commitments made in China's WTO accession agreement. The report incorporates a broad range of input from key federal agencies and U.S. industry. USTR bases the reports on its own experiences as well as on information it collects from federal agencies such as the Departments of Commerce, State, Agriculture, and the Treasury, both through an interagency process and by working with officers from these agencies at the U.S. embassy and consulates general in China. In addition, USTR seeks public participation by publishing a notice in the Federal Register, holding a public hearing, and incorporating written comments and testimony. Industry associations we interviewed confirmed that USTR fairly represents the concerns and interests of U.S.
business in its annual narrative reports on China's compliance. Since GAO last reported on China's compliance with its trade commitments in 2004, USTR has undertaken an interagency top-to-bottom review of U.S.-China trade relations over the past 25 years and issued a report in February 2006. USTR's report noted that earlier U.S. trade policy with China focused on bringing China into the international trading system and urging China to implement its new WTO commitments. Its report focused on (1) identifying core principles and key objectives of U.S. trade policy with China; (2) assessing the current status and establishing priority goals for each key objective; and (3) identifying the specific action items that will help the United States achieve its priority goals. The report stated that the United States has entered an important new phase of accountability and enforcement in its trade relationship with China and will expect China to play a greater role in strengthening the global trading system. USTR stated in the report that, given the importance of U.S. trade with China and the challenges that continually confront the United States as it enters this new period, the United States should readjust its trade resources and priorities. USTR's annual reports to Congress do not provide the systematic analysis needed to clearly understand China's compliance situation. While the reports describe many issues with China's compliance and progress on resolving such issues, they lack summary analysis about the number, scope, and disposition of reported problems that would facilitate understanding of key China trade issues and developments and allow the agency to track its effectiveness in monitoring and enforcing China's trade compliance. Therefore, we conducted a systematic content analysis of USTR's reports in order to quantify the number, type, and disposition of trade issues. We identified 180 compliance issues from 2002 to 2007, spanning nine trade areas and ranging from very specific issues to broader, more complex concerns. Our analysis further revealed that while China has resolved some issues, most issues have persisted without resolution. In addition, our analysis showed that China's progress in resolving issues varies by trade area. More detailed information on China's slowed progress in certain areas and faster progress in others might help Congress better understand China's compliance. USTR also reported continuous engagement with China through multiple avenues in order to solve compliance issues but has not mentioned taking any action on one-quarter of outstanding compliance issues. Additionally, since USTR's latest report, China made further progress on various compliance issues. While the lengthy, detailed narratives in USTR's reports describe many issues with China's compliance, as well as China's successes and progress on resolving such issues, more systematic analysis is needed to clearly understand the overall compliance situation. It is difficult to get a sense of the relative progress being made in each of the nine areas from reading the narrative descriptions. For instance, the reports do not describe how much progress is being made in the area of agriculture relative to the progress being made in intellectual property rights or services. In addition, USTR does not quantify the number of compliance issues or clearly describe the disposition of such issues. USTR also does not clearly identify priority areas or rank the issues in order of importance.
While USTR highlights five or six areas of particular concern in the executive summary, some of these areas are crosscutting issues that involve more than one specific trade area. The reports also do not give a clear indication of the level of progress being made overall in each year or show the relative progress made in one year versus other years. While USTR noted that progress has slowed in recent years, the reports contain no further information about the degree of this slowdown. Moreover, USTR's narrative reports lack the kind of high-level analysis that might facilitate better monitoring and enforcement and raise important questions that might prompt agencies to adjust their tactics and approaches. Therefore, more specific information on China's slowed progress in certain areas and faster progress in others might help Congress better understand the trade compliance situation in China in a given year. In its reports, USTR highlighted numerous areas in which China has successfully implemented its commitments since joining the WTO in December 2001. China's WTO commitments are broad in scope and range from general pledges for how China will reform its trade regime to specific market access commitments for goods and services. In 2006, when deadlines for almost all of China's commitments had passed and China's transition period as a new WTO member was essentially over, USTR reported that China had taken significant and impressive steps to reform its economy. In 2007, USTR also reported that China made noteworthy progress in adopting economic reforms that facilitated its transition toward a market economy. According to USTR, these actions include repealing, rewriting, or enacting more than 1,000 laws, regulations, and other measures; implementing annual reductions in tariff rates; eliminating nontariff barriers; expanding market access for foreign services providers; and improving transparency. Table 1 provides some examples of China's successful implementation of its WTO commitments from each of USTR's annual reports from 2002 through 2007. Our analysis of USTR's reports to Congress from 2002 to 2007 identified 180 compliance issues mentioned in the reports, spanning all nine areas of China's WTO commitments. The greatest numbers of compliance issues mentioned were in the areas of import regulation and services, and relatively few issues were mentioned in legal framework and export regulation (see table 2). China's WTO commitments are broad and complex. Some require a specific action from China, such as to reduce or eliminate certain tariffs. Others are less specific, such as those that require China to adhere to WTO principles of nondiscriminatory treatment of foreign and domestic enterprises. Compliance issues also ranged in scope from specific, relatively straightforward issues, such as the late issuance of regulations, to broader and more crosscutting concerns, such as questionable judicial independence, which are more difficult to resolve and assess. The compliance issues can be the result of a range of factors, from political resistance, to lack of technical capacity, to issues of resources and coordination among Chinese ministries. It is important to note that not all compliance issues mentioned in USTR's reports equally affect U.S. exports to China and that some issues are more easily resolved than others. Thus, while USTR's reports identify key areas of concern, the economic importance of many individual issues cannot be easily quantified.
USTR does not assign economic value to these concerns in its reports, and we did not attempt to calculate the importance of the issues or otherwise prioritize or rank them in our analysis. Our analysis revealed that over 60 percent of the compliance issues USTR reported to Congress were either resolved or saw some progress from 2003 to 2007. A compliance issue is considered resolved if USTR reported that actions were taken by China that settled the specific issue mentioned. Our analysis shows that almost one-quarter of all compliance issues mentioned between 2002 and 2007 were ultimately resolved (see fig. 1). See table 3 for examples of compliance issues that have been resolved. In addition, none of the issues that were reportedly resolved resurfaced in later reports. Furthermore, according to our analysis, USTR indicated that China made progress on, but did not resolve, about 40 percent of the compliance issues reported. An issue was considered to be one in which China made some progress if in any year USTR reported some type of improvement in the situation, or if action taken by the Chinese improved but did not completely resolve the issue. For example, if USTR reported that China announced a commitment to take a certain action, such as revising a law, that would eventually resolve the issue, then this was counted as progress made in the year in which the commitment was made. Progress can range in magnitude from small to substantial on a particular issue, as well as in frequency of occurrence, with some issues making progress in only 1 year and others in many years. For example, China made progress in improving its inconsistent application and duplication in certification requirements related to standards and technical regulations in only 1 of the 6 years the issue was reported. In contrast, USTR reported that China made progress toward improving transparency related to the administration of its tariff rate quota system for bulk agricultural commodities in 4 of the 6 years the issue was reported. Additionally, our analysis of USTR's reports showed that 37 percent of all compliance issues mentioned from 2002 to 2007 saw neither resolution nor progress over the entire period. An issue was considered to have made no progress if the reports either explicitly noted that no progress had been made on that particular issue or did not indicate that China took any action to address the issue in the given year. See table 4 for examples of issues that made no progress over the period 2003 to 2007. In addition, our analysis showed that most compliance issues reported over this period have persisted for many years. For instance, over 30 percent of all issues were mentioned in USTR's annual reports for at least 5 of the 6 years. In addition, less than 40 percent of all issues were present in USTR's reports for only 1 or 2 years; the remaining issues, over 60 percent, were mentioned in the reports for 3 years or more (see fig. 2). In addition to the issues that were resolved over the period 2002 to 2007, we found that a number of the issues mentioned in the reports were not explicitly resolved but were nevertheless dropped from the report. An issue is considered dropped if it was mentioned in 1 or more years of USTR's report and not mentioned in a later year, without any discussion about its resolution.
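The disposition shares above can be checked with simple arithmetic. The counts in the sketch below are hypothetical values chosen only to be consistent with the stated percentages; the report gives the 180-issue total and the shares, not the underlying counts.

```python
# Back-of-the-envelope check of the disposition shares. The counts are
# hypothetical, chosen to match the stated percentages out of 180 issues:
# almost one-quarter resolved, about 40 percent some progress,
# 37 percent no progress.
TOTAL_ISSUES = 180

hypothetical_counts = {
    "resolved": 41,        # ~23 percent, "almost one-quarter"
    "some progress": 72,   # ~40 percent
    "no progress": 67,     # ~37 percent
}

assert sum(hypothetical_counts.values()) == TOTAL_ISSUES

for disposition, count in hypothetical_counts.items():
    print(f"{disposition}: {count} issues ({count / TOTAL_ISSUES:.0%})")

# resolved + some progress together account for just over 60 percent,
# matching the "over 60 percent" figure cited above.
```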
In total, 15 percent, or 27 issues, were not explicitly resolved according to USTR's reports but were dropped from subsequent years, with the ultimate status of such issues remaining unknown. Some of these issues might remain outstanding but USTR chose not to include them in the report for a particular reason, or the issues no longer present concerns for U.S. industry and, therefore, were excluded from the report. A USTR official noted that issues disappear from the report for various reasons, such as the business community no longer considering something an issue, or the Chinese having offered a suitable explanation, ultimately settling the issue. While 37 percent of all issues mentioned in USTR's reports from 2002 to 2007 were either resolved or dropped, the number of issues mentioned in each annual report remained fairly stable over the period 2003 to 2007 (see fig. 2). This suggests that, as compliance issues were resolved or dropped from the report, a similar number of new compliance issues arose and were included. USTR reported 15 to 27 new issues in its report each year, with a decreasing number of new issues added over time from 2003 to 2007. While USTR noted generally that China's progress in resolving compliance issues has slowed, our analysis provides information about the degree to which progress has slowed in recent years. In its 2007 annual report, USTR stated that beginning in 2006 and continuing throughout 2007, China's progress toward further market liberalization began to slow. Consistent with USTR's characterization, our analysis showed that, while there have been variations over time, the proportion of issues being resolved or making progress has declined, from just under 50 percent of issues in 2003 down to about 30 percent of issues in 2007. For instance, the number of issues resolved in each year has declined since 2004. In addition, the number and proportion of issues that achieved some progress in each year peaked in 2003, declined steadily through 2006, and improved in 2007 (see fig. 3). In addition to China's slowed progress over the period, our analysis found an increasing number and proportion of compliance issues on which USTR reported no progress, which suggests that issues persist for several years without resolution as new compliance issues continue to arise. According to our analysis, the proportion of issues making no progress rose from just over 50 percent in 2003 to about 70 percent in 2007, with the number of issues making no progress peaking in 2006. USTR explained in its 2007 report that U.S. industry is less focused on China's willingness to implement the specific commitments of its entry agreement than on Chinese policies and practices that undermine previously implemented commitments. According to the testimony submitted to USTR by one major trade association, the current concerns lie with more complicated issues such as deviation from the WTO's national treatment principle, inadequate protection of intellectual property rights, nontransparent legal and regulatory processes, and the development of technical and product standards that may favor local companies. Thus, while USTR reported that China has implemented many of its WTO commitments, many of the outstanding and new issues are broader, more complex issues that undermine the commitments and reforms already implemented. USTR noted that China's record on implementing its WTO commitments is decidedly mixed, but it did not present detailed summary information.
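The persistence figures (over 30 percent of issues appearing in at least 5 of the 6 reports, over 60 percent in 3 or more) come from tallying, for each issue, how many annual reports mention it. A minimal sketch of that tally, with hypothetical issues:

```python
# Sketch of the persistence tally: bucket issues by how many of the six
# annual reports (2002-2007) mention them. Issue names and years are
# hypothetical, not drawn from USTR's reports.
mentions = {
    "issue_1": {2002, 2003, 2004, 2005, 2006, 2007},  # persists all 6 years
    "issue_2": {2003, 2004},                          # dropped after 2 years
    "issue_3": {2004, 2005, 2006},                    # persists 3 years
    "issue_4": {2007},                                # new in the latest report
}

total = len(mentions)
long_running = sum(1 for years in mentions.values() if len(years) >= 5)
short_lived = sum(1 for years in mentions.values() if len(years) <= 2)

print(f"mentioned in at least 5 of 6 reports: {long_running / total:.0%}")
print(f"mentioned in only 1 or 2 reports: {short_lived / total:.0%}")
print(f"mentioned in 3 or more reports: {(total - short_lived) / total:.0%}")
```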
Through our analysis, we also found that the reported progress varies significantly by trade area (see fig. 4). China has made more progress in some commitment areas—such as trading rights and distribution services, agriculture, and internal policies—having resolved over 30 percent of all issues mentioned in each area, and less progress in other areas, such as services and intellectual property rights, where less than 10 percent of issues have been resolved. Overall, while most trade areas have a significant proportion of outstanding issues, the proportion of issues where China is making progress or reaching resolution varies. For instance, in the area of agriculture, the total number of compliance issues mentioned each year is declining slightly, with a large share of issues, about 85 percent, having either reached resolution or achieved some progress from 2003 to 2007. Also, similar to the overall compliance situation, the number of issues making progress or being resolved in the area of agriculture seems to be declining. In fact, USTR mentioned specific sticking points such as transparency and selective intervention in the market by China's regulatory authorities. USTR explained that, while U.S. exports of many agricultural commodities to China have reached record levels, the increases are largely the result of greater demand. Thus, while the results in the agricultural sector seem positive, some important compliance issues remain outstanding. Conversely, other trade areas, such as intellectual property rights, have seen less progress, with the smallest proportion of issues, less than 10 percent, reaching resolution and a sizable proportion of issues, over 30 percent, not making any progress from 2003 to 2007. In addition, an increasing number of compliance issues have been mentioned in this area, with a peak in 2006. USTR noted in its 2007 annual report that while China has put in place a relatively good set of laws and regulations aimed at protecting intellectual property rights, some critical measures still need to be revised, and China's overall enforcement of these laws has been ineffective. Thus, while many of the intellectual property laws have been rewritten, many issues remain outstanding, and more complex issues related to enforcement continue to arise. USTR engages with China through multiple avenues to solve compliance issues but has not mentioned taking action on several outstanding compliance issues. In its annual reports, USTR outlines various types of actions taken in order to resolve the compliance issues mentioned in the reports. These actions include raising the issue at multiple forums and dialogues with the Chinese, including the U.S.-China JCCT, the Strategic Economic Dialogue (SED), the Transitional Review Mechanism (TRM) or other forums at the WTO, or raising the issue bilaterally with the Chinese through another mechanism. For this analysis, we considered USTR to have taken action on a particular issue if USTR mentioned some type of activity, such as those listed above, in any of its annual reports. USTR reported taking at least some type of action on most compliance issues mentioned but did not mention taking any type of action on one-quarter of compliance issues mentioned (see fig. 5).
Specifically, USTR raised 32 percent of issues at the JCCT, 54 percent of issues at the TRM or another WTO forum, 13 percent of issues at the SED, and 57 percent of issues bilaterally with the Chinese through some other mechanism. Most of the issues where USTR did not report taking any type of action were in the areas of agriculture, import regulation, intellectual property rights, and internal policies affecting trade. USTR officials also highlighted that, among the actions they reported, they have taken the added step of filing WTO cases against China after bilateral negotiations made no progress. They noted that the United States has brought six such WTO cases against China (see table 5). Since USTR's latest report, gains were made at the December 2007 JCCT and SED meetings that are not mentioned in the 2007 annual report on China's compliance with the WTO. In December 2007, the United States and China participated in the third cabinet-level meeting of the SED and the 18th JCCT meeting; USTR and the Departments of Commerce and the Treasury have all cited numerous areas of progress resulting from those meetings. However, because the meetings took place late in 2007, the results were not included in USTR's 2007 annual report and, therefore, were also not included in our analysis of such reports. Specifically, the Department of Commerce cited several areas of progress as a result of the December JCCT meeting, including steps taken by China in the areas of intellectual property, product safety, and market access in several industries such as medical devices, agriculture, and telecommunications. In addition, the Department of the Treasury also noted many areas of progress resulting from the December SED meeting, including areas such as integrity of trade and product safety, financial sector reform, environmental sustainability, and transparency (see table 6). We were only able to partially determine the status of USTR's 2006 top-to-bottom report, which outlines objectives for U.S.-China trade relations and serves as a plan to focus U.S. trade resources and priorities in this regard. On one hand, we found that USTR and the other agencies have made considerable progress implementing planned action items listed in the report. The key U.S. trade agencies took steps to increase bilateral engagement with the Chinese and expand the U.S. government's capacity to enforce and negotiate by increasing staff levels in headquarters and overseas and improving training opportunities. However, we found that some previously identified management challenges—staffing gaps and limited Chinese language capacity—remain. On the other hand, we could not determine progress toward achieving the top-to-bottom report's broad objectives, which go beyond trade compliance. While this report lays out USTR's plans for U.S.-China trade relations, USTR does not formally assess its progress or measure its results, as we have recommended in our past reviews of USTR plans. The lack of clear linkages between U.S. objectives and planned action items, and vague language, make it difficult to determine whether the steps agencies reported taking were effective. Furthermore, the report has not been updated to reflect subsequent developments. We found that USTR and the key trade agencies have made considerable progress in implementing the planned action items listed in the top-to-bottom report.
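Because a single issue can be raised in several forums at once, the forum percentages above (32, 54, 13, and 57 percent) can sum to well over 100 percent, while the one-quarter of issues with no reported action carry no forum tag at all. A small sketch of this multi-label tally, with hypothetical issues and tags:

```python
# Sketch of the action-coding tally. Each issue can carry zero, one,
# or several forum tags, so forum shares need not sum to 100 percent.
# Issue names and tags are hypothetical, not USTR's actual coding.
FORUMS = ("JCCT", "TRM/WTO", "SED", "bilateral")

issue_actions = {
    "issue_1": {"JCCT", "bilateral"},             # raised in two forums
    "issue_2": {"TRM/WTO"},
    "issue_3": set(),                             # no reported action
    "issue_4": {"JCCT", "TRM/WTO", "bilateral"},  # raised in three forums
}

total = len(issue_actions)
for forum in FORUMS:
    share = sum(1 for tags in issue_actions.values() if forum in tags) / total
    print(f"raised at {forum}: {share:.0%}")

no_action = sum(1 for tags in issue_actions.values() if not tags) / total
print(f"no action reported: {no_action:.0%}")
```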
We learned that various agencies share responsibility for carrying out the activities planned in this report, either individually or collectively. USTR informed us that, of the 25 implementing steps, 10 were interagency; USTR was responsible for 6, the Department of Commerce for 5, other agencies for 3, and the Department of State for 1. After assessing the information provided by USTR and the other key trade agencies, we determined that 17 of the 25 steps were implemented or are in the process of being implemented; the status of 8 steps was unclear because the top-to-bottom review did not define terms such as "strengthen" and "effectiveness," nor did it provide baseline data from which to measure progress. For example, with regard to strengthening interagency coordination, the report says that export promotion activities will be increased, but without any baseline measurement information we could not determine if there had been an increase in these activities. (See app. II, table 10, which identifies the 10 action items and the accompanying 25 implementing steps, along with agency responsibilities and status.) We confirmed that key U.S. trade agencies took steps to increase bilateral engagement with the Chinese and expanded the U.S. government's capacity to enforce and negotiate by increasing staff levels in headquarters and overseas and by improving training opportunities. While assessing these agencies' implementation of the top-to-bottom report action items, we also followed up on progress made in addressing management challenges identified in our 2004 report on U.S. monitoring and enforcement activities related to China. We had recommended that the key agencies take various steps to improve performance management pertinent to China-WTO compliance efforts and that they undertake actions to mitigate the effects of staff turnover in the agencies' China-WTO compliance units. We found that some previously identified challenges—staffing gaps and limited Chinese language capacity—remained at some agencies. As a result of the top-to-bottom report, key trade agencies are undertaking several action items to improve and increase bilateral engagement with China. The U.S. government has utilized two formal consultative mechanisms to address commerce, trade, and financial issues, both of which demonstrate an emphasis on high-level, bilateral engagement. First, the United States uses the JCCT, a forum for dialogue on bilateral trade issues and a mechanism to promote commercial relations. This forum was elevated to a higher level after a 2003 meeting and refocused to give greater attention to outstanding trade disputes. Second, the United States and China created the SED in September 2006 as another bilateral high-level forum to address the most important, long-term, strategic issues in the U.S.-China economic relationship. The SED, which is convened every 6 months, is led by a U.S. cabinet official and a Chinese vice premier, and each dialogue session comprises U.S. cabinet officials and Chinese ministers. The SED allows both governments to communicate at the highest levels and with one voice on issues of long-term and strategic importance, including issues that extend across multiple departments and agencies.
The United States has three core objectives for the SED: (1) to advance the U.S.-China economic relationship by establishing new habits of cooperation; (2) to accelerate China's next wave of economic transition; and (3) to encourage China to act as a responsible global economic power. According to Department of the Treasury officials, there are no formal working groups associated with the SED. Rather, U.S. cabinet officials and Chinese ministers determine strategic areas of focus for the intervening 6 months between meetings of the SED. For example, at the first SED in December 2006, civil aviation was selected. At the second SED, product safety was identified, and at the third SED in December 2007, energy and environment was the strategic area of focus. According to some U.S. agency officials, there was confusion over the purpose of the SED when, at the May 2007 SED meeting, the United States used the meeting to discuss trade compliance issues. Officials told us that they have since clarified the issue. Department of the Treasury officials told us the JCCT focuses mostly on short-term trade issues, while the SED focuses on solutions to long-term, strategic, economic issues (see table 7 for a list of JCCT work areas). USTR planned to strengthen and expand bilateral dialogues on numerous current and potential problem areas, another key action item in the top-to-bottom report. The U.S. government held a number of bilateral dialogues covering 8 different subject areas to address trade issues with China, which demonstrated a continuing emphasis on bilateral engagement (see app. II, table 10, which lists these dialogues). Many U.S. government agencies engaged their Chinese counterparts on a multitude of topics, such as agricultural, environmental, labor, subsidies and standards, and telecommunications issues. While some of these dialogues are very active and have resulted in accomplishments such as China's accession to and ratification of the World Intellectual Property Organization Internet Treaties in 2007, other dialogues have not yet been implemented. For instance, both the Environmental Protection Agency and the Department of Labor indicate they have not established formal dialogues with their Chinese counterparts as planned. There are other means of bilateral engagement as well. For example, USDA officials told us they prefer to handle issues with their Chinese counterparts using science-based rationale. This often requires USDA's Foreign Agricultural Service (FAS) to engage Chinese officials in technical forums and through capacity-building initiatives, even though USDA participates in high-level JCCT working groups on agricultural and sanitary and phytosanitary issues. The number of high-level meetings between senior U.S. and Chinese officials has increased. For instance, the key economic and trade agencies sent more cabinet and subcabinet delegations to China to engage their Chinese counterparts on trade issues; senior-level delegations to China from various U.S. government agencies increased from 31 in 2006 to 63 in 2007, a level of more than one a week. Furthermore, this represents a substantial increase from 2002 and 2003, when there were 13 and 23 such meetings, respectively. Since the top-to-bottom report, the key trade agencies have increased staff in headquarters and overseas to expand the U.S. government's capacity to enforce China's trade compliance and to negotiate with China on trade issues. They also increased staff training opportunities.
GAO's prior work recommended that the key trade agencies better manage their human capital to enhance the U.S. government's China-WTO compliance efforts and mitigate the effects of staff turnover. Nevertheless, agency officials told us they still experienced staffing gaps and turnover in key overseas offices and shortfalls in language skills. Key trade agencies have continued to increase staff positions to meet the demands of the U.S.-China trade relationship (see table 8). Staff resources have more than doubled at headquarters and in Beijing since 2004. The estimated number of full-time equivalent staff in units most directly involved with China trade compliance efforts increased from 60 in fiscal year 2003 to 135 in fiscal year 2007. USTR doubled its staff positions in headquarters from 5 to 10 and established an internal China Enforcement Task Force that includes staff from USTR's Office of General Counsel and the China Affairs Office to prepare and handle potential WTO cases. USTR also added personnel in its China office to coordinate the collection and integration of information on current and potential China trade issues. In response to increased responsibilities arising from the new U.S.-China trade relationship, USTR, Treasury, and Commerce's U.S. Patent and Trademark Office added four new positions at the embassy in Beijing. The Department of Commerce's and USDA's Foreign Services in China are the largest overseas offices for each department. For example, 10 percent of Commerce's Foreign Commercial Service is in China. In addition, as a result of an increased focus on China, FAS has increased the number of staff who work in China, who now account for 10 percent of its overseas staff, according to USDA.

Staffing Gaps and Turnover Remain

Agencies have also experienced staffing gaps and shortages. In headquarters, USTR experienced staff turnover from fiscal year 2006 to 2007. USTR's China Affairs Office had four staff depart and hired five additional staff. As of November 2007, the office was authorized to have nine staff but had only eight. International Trade Administration officials in the Department of Commerce said that there is still a relatively high amount of staff turnover because employees in Market Access and Compliance acquire a skill set that is highly desirable and attractive to the private sector. Department of Commerce officials noted that one official in Market Access and Compliance's Office of Chinese Economic Area had moved from headquarters to Beijing since January 2007. Overseas, both the Departments of State and Commerce have experienced staffing challenges. For instance, a senior Department of State official told us there has been a high level of turnover in the economic section at the embassy, which has included curtailed Foreign Service rotations. These changes have resulted in significant gaps in filling positions and reorganizations to compensate for lost expertise. To maintain current staffing levels, the department has sometimes pulled staff from Chinese language training. Although State added seven positions in China as part of its Global Repositioning Initiative, only five were at the embassy in Beijing, and two staff still had not arrived at post as of the end of 2007. One of the five economic section positions at the Beijing embassy tasked to work on China trade compliance has been seconded to work with the senior Department of the Treasury official at post.
Similarly, the Department of Commerce's Trade Facilitation Office has been understaffed and has experienced high turnover in two staff positions, according to department officials. One Market Access and Compliance position was vacant for a year, and the office waited over 6 months for a director. We were told that the individual had been hired and assumed duty in late February 2008. A senior Department of Commerce official stated that one contributing factor to the high turnover in the Trade Facilitation Office is that the department hires experienced people with China business backgrounds in a highly competitive job market. These individuals are on a limited 2-year noncareer appointment (with the possibility of the appointment being extended to a maximum of 5 years) with no opportunities for promotion. In 2004, GAO reported that the four key trade agencies lacked specific training relevant to executing China-WTO compliance responsibilities, but since then the Departments of Commerce and State and USDA have offered staff opportunities for training on trade monitoring and compliance. Training opportunities for staff have increased, but most training is still ad hoc and does not apply specifically to China trade compliance. Department of State staff overseas stated they had sufficient funds for training. In addition, both departments offer online courses for staff. The Department of State offers about nine training courses related to WTO compliance issues to its employees, as well as to employees from other agencies. In fiscal year 2007, approximately 172 individuals took these courses. Since 2005, the Department of Commerce has offered several training courses related to compliance and market access. Commerce employees in the International Trade Administration participated in training on the compliance and market access database. In addition, to ensure data accuracy in the Department of Commerce's case database, about 195 employees have been trained on case procedures and received guidance on how to document their work in the database. USDA officials stated the agency has increased training opportunities for its China staff since 2005. Senior management from the Departments of Commerce and State expressed concerns about the language skills of China unit staff. For instance, newer staff often have insufficient language skills, according to a senior Commerce official. As of September 2007, the Department of Commerce's Office of Chinese Economic Area offers Mandarin language training and has five staff taking the course; however, Beijing staff confirmed that they were not fluent in Chinese and said they rely on Chinese Foreign Service Nationals to translate and conduct research to enhance the officers' abilities to perform their duties. Some Department of State staff told us that officers come to the embassy before they have finished their language training. According to a senior Department of State official, this limits them in their official capacities. Although senior department management and staff said they had funds for language training, the heavy visitor schedule and workload have made it difficult to consistently take advantage of the language instruction available at post. We could not determine agencies' progress toward achieving the plan's broader U.S.-China trade objectives for several reasons.
First, USTR officials said that while the top-to-bottom report is their planning tool, they have not formally assessed the progress they have made in implementing it, although USTR officials told GAO that USTR periodically reviewed its progress and made informal internal assessments. However, USTR did not provide GAO with any of these informal internal assessments. Second, assessing USTR's progress toward achieving its objectives and priority goals for U.S.-China trade is difficult since the objectives and priority goals are not clearly linked to the action items in the report. Furthermore, some of the action items use undefined terms such as "strengthen" and "effectiveness," and others do not include baseline information from which to measure progress. As a result, it is difficult to ascertain how the agency's action items and implementing steps contribute to achieving the larger U.S. trade objectives and priority goals with China. Third, USTR has not updated the report despite major changes in U.S.-China trade relations since conducting the top-to-bottom review, such as the establishment of the Department of the Treasury-led SED in September 2006 and the filing of several dispute settlement cases. USTR officials told us they use the top-to-bottom report as the planning tool for USTR's China Affairs Office, and it guides USTR's, as well as the U.S. government's, engagement with China on trade issues. Nevertheless, USTR officials told us they do not formally assess the progress they have made in implementing it. Rather, they said that in their regular discussions on China, they inevitably touch on the issues in the top-to-bottom report. In Washington, D.C., and overseas, managers and staff we interviewed at other agencies said they were aware of the report but that it was not used as a guide for planning their China trade compliance priorities. The top-to-bottom report indicated that the Trade Policy Review Group (TPRG) and Trade Policy Staff Committee (TPSC) were to conduct monthly reviews of the progress made in achieving the key objectives identified in the report to help ensure coordination of China trade policy formulation and implementation and appropriate focus among agencies on key U.S. trade objectives with China. However, USTR said that although these groups discuss key objectives and priority goals, they do not track progress made on achieving the action items. The TPSC Subcommittee on China met 5 times in 2007, between January and August, to discuss various issues such as WTO disputes, SED and JCCT dialogues, and coordination with U.S. trading partners. The TPRG met 10 times between March 2006 and June 2007 to discuss a variety of issues related to its strategy in WTO dispute settlement. In addition, no minutes are kept for either the TPRG or the TPSC, so we could not determine to what extent these objectives were informally discussed in these meetings. Furthermore, as discussed in the previous section of our report, USTR still does not attempt to measure the results of its efforts to resolve trade compliance problems with China, even though they are an integral part of many U.S.-China trade objectives. USTR's top-to-bottom review drew upon GAO's past reviews of monitoring and enforcement efforts. In GAO's 2004 report, for example, we found that the specific units within the agencies most directly involved with China compliance activities lacked specific strategies for ensuring that they supported their agency's goals, and they also did not assess their units' results.
We noted that planning and measuring results are important components of ensuring that government resources are used effectively to achieve the agencies' goals. In addition, we stated that good planning and management link overall agency goals to individual unit activities and priorities. We recommended that these agencies take steps to improve performance management pertinent to the agencies' China-WTO compliance efforts. Specifically, we said that USTR should set annual measurable targets related to its China compliance performance measures and assess the results in its annual performance plan.

Other Key Agencies Have Some Related Plans

We asked the other agencies to provide us with their China unit plans. However, the Department of State provided its information too late for us to assess it. The Department of Commerce's International Trade Administration has developed a strategic plan with China objectives and goals, but they are broad; there are no performance measures related to China, and the information provided on China is not very specific. The International Trade Administration's Office of China Economic Area, which has major responsibility for China compliance and trade issues, does not have a specific unit plan, although Department of Commerce officials told us that the activities undertaken by the office are fully consistent with the International Trade Administration's strategic plan. The Market Access and Compliance unit, which oversees the Office of China Economic Area, does have a draft plan, but it does not mention China specifically. USDA's Office of Country and Regional Areas and the Office on Negotiations and Agreements have developed unit plans for China, but the documents have not been officially approved by agency management. Assessing USTR's progress toward achieving its objectives for U.S.-China trade is difficult since the broad objectives and the more specific action items are not clearly linked in the top-to-bottom report. The top-to-bottom report sets forth the following six U.S.-China trade objectives:

Participation—integrate China more fully as a responsible stakeholder into the global rules-based system of international trade and secure its support for efforts to further open world markets;

Implementation and compliance—monitor China's adherence to international and bilateral trade obligations and secure full implementation and compliance;

Enforcement of U.S. trade laws—ensure that U.S. trade remedies and other import laws are enforced fully and transparently, so that Chinese imports are fairly traded, and U.S. and Chinese products are able to compete in the U.S. market on a level playing field;

Further market access and reform—secure further access to the Chinese market and greater economic reforms in China to ensure that U.S. companies and workers can compete on a level playing field;

Export promotion—pursue effective U.S. export promotion efforts with special attention to areas of particular U.S. export growth potential in China; and

Proactive identification and resolution of trade problems—identify mid- and long-term challenges that the trade relationship may encounter, and seek proactively to address those challenges.

However, these six objectives and 31 related priority goals are not linked to the 10 action items and 25 specific implementing steps. (See table 11 in app. II for a list of the six objectives and 31 related priority goals.)
As a result, it is difficult to ascertain how the agency's action items and implementing steps contribute to progress toward the larger U.S. trade objectives with China. Therefore, we asked USTR to identify which objective each action item and implementing step is supposed to help achieve. There was a wide range in the level of planned activity to achieve different objectives. Based on the information USTR provided, we found that 11 implementing steps focused on one objective, concerning implementation and compliance, while other objectives, concerning proactive identification and resolution of trade issues and export promotion, each had only 1 implementing step associated with them. Furthermore, the scope and specificity of some objectives and their related priority goals did not match the actions meant to implement them. Therefore, it is not clear that the planned actions, if implemented, would fully address all of USTR's objectives. For example, as part of planned export promotion efforts, USTR intended to give special attention to exports to noncoastal parts of China, from small and medium enterprises, from high-tech firms, and in sectors where the United States is competitive; in contrast, the planned action related to export promotion is very general in nature, is discussed in the context of strengthening interagency coordination, and does not mention any of these specifics. It was also difficult to assess progress because terms in the plan do not provide a means to understand how USTR or other government agencies might determine when an action item had been achieved. Several action items state that particular initiatives will be expanded, strengthened, or increased; however, no strategy or baseline information is provided to allow one to determine how this would be done or whether actions on the part of agencies have actually expanded, increased, or strengthened the program. For example, one action item is to "increase effectiveness of high-level meetings with China's leaders," but the implementing steps do not state how greater effectiveness will be accomplished; instead, the step is limited to "continue to hold high-level meetings." Similarly, with regard to strengthening interagency coordination, the report says that export promotion activities will be increased, but without any details or measurement information we could not determine if there had been an increase in activity or how this might lead to strengthened coordination.

Plans Have Not Been Updated

Finally, it is difficult to assess the status of U.S.-China trade objectives because the report does not reflect some important developments. USTR has not updated its plans. USTR stated in its report that these are "initial steps" and that additional action items would be developed and implemented in consultation with Congress and other stakeholders to ensure meaningful progress in achieving the report's key objectives. However, the action items in the report have not been updated since its issuance over 2 years ago in February 2006. There have been several important developments in U.S.-China trade since the top-to-bottom review occurred, which are not reflected in USTR's report. The creation of the SED, a new high-level forum, now involves the Department of the Treasury. The United States has filed five dispute settlement cases at the WTO against China since February 2006 (see table 5). Also, U.S. industries have filed numerous trade remedy petitions against Chinese imports under U.S.
trade laws, including requests for safeguard actions and antidumping and countervailing duty investigations. In 2007, Commerce made the determination to apply the countervailing duty law to Chinese imports, representing a major change from its long-standing policy of not applying this law to nonmarket economies. Clearer information on the number and disposition of trade issues with China and the trends over time helps Congress and the public understand the results of U.S. government monitoring and enforcement activities. It also better informs policymakers trying to adjust tactics in response to new developments and shift resources to where they can be the most effective. Measuring program results on an ongoing basis can be a powerful management tool. For example, analyses like the ones we conducted could prompt policymakers to shift priorities to focus on trade areas with the greatest number of unresolved issues or the most persistent ones. Also, it is possible that lessons can be learned from the tactics and approaches used in those areas where the most issues have been resolved. Similarly, USTR's top-to-bottom review produced a 2006 governmentwide plan for U.S.-China trade relations. Since then, the key trade agencies have taken steps to implement the various action items this plan laid out, including expanding U.S. monitoring and enforcement capacity, increasing the number of bilateral forums for U.S.-China dialogue about trade and economic issues, and proactively identifying and resolving trade issues with China. However, it is not always clear how these activities will achieve the many objectives the United States has regarding trade relations with China. A clearer linkage between planned activities and objectives, along with regular progress reviews, could help agencies adjust priorities, focus their efforts, and ensure that there is movement toward all objectives. Furthermore, this plan for engaging China would be strengthened if it reflected new developments, like the creation of the SED, and the results from ongoing U.S. government monitoring and enforcement activities described in USTR's annual trade compliance report to Congress. The upcoming change in administration, new Congress, expected changes in Chinese leadership, and 2-year anniversary of the top-to-bottom report provide USTR with an opportune time to update its plan. To improve policymakers' and the public's understanding of China's trade compliance situation, we recommend that USTR clearly and systematically identify the number, type, and disposition of the trade issues it is pursuing with China and report this and other useful trend information in its annual China trade compliance report to Congress. To help achieve U.S. trade objectives with China, we recommend that USTR update and improve the plans reported to Congress in its 2006 top-to-bottom report by considering recent developments and the results of ongoing U.S. monitoring and enforcement activities and by reviewing how specific implementing steps and action items align with broad objectives and priority goals. We also recommend that USTR take steps to formally monitor implementation of these plans over time. We provided a draft of this report to USTR and the Departments of Agriculture, Commerce, and State for their comment. USTR provided written comments, which are reprinted and evaluated in appendix III.
USTR officials said they appreciated our advice to ensure that USTR is doing the most effective job in reporting on results and that they would consider our insights and ideas, but they did not comment directly on our recommendations. USTR asked that we clarify our analysis of agency actions taken and the persistence of compliance issues; we made revisions where appropriate. USTR believed that we undervalued the systematic analysis of China's WTO compliance in USTR's annual reports. In addition, USTR believed that our quantitative analysis of progress made reveals inherent limitations and difficulties in developing meaningful quantitative compliance measurements. Moreover, USTR expressed concern about the advisability of providing quantitative analysis in USTR's annual reports. However, we disagree and still believe that providing summary analysis—both qualitative and quantitative—would enhance understanding of China's compliance situation and provide important information for Congress to conduct oversight and for senior policymakers to assess the success of USTR's and other key trade agencies' activities. USTR has many options for tailoring such analysis in order to address any concerns it might have. With regard to the top-to-bottom review, USTR officials stated that it was not a plan "in the narrow and specific sense" used in our analysis; instead, it was a one-time policy document that was not intended to be updated. USTR stated it does provide updates through its annual reports to Congress on China's WTO compliance. Furthermore, USTR stated that the report's action items were short-term steps and were not in themselves designed to achieve the objectives and priority goals. However, it is our understanding that USTR's report on the results of the top-to-bottom review was a plan, based on interviews with USTR staff and our reading of the document. USTR's report has many of the characteristics of a good plan and addresses our 2004 recommendation for a China unit plan in that it establishes goals and priorities for the various China Affairs Office activities. In addition, USTR's report implies that updates were going to be provided; however, we agree that USTR's report includes no requirement or explicit promise to present revised objectives, goals, subsequent actions, or the degree of progress in a new version of the report. While USTR provides numerous reports to Congress on its activities, USTR still has not updated the six objectives and 31 priority goals specified in the top-to-bottom report to reflect subsequent developments, nor has it formally assessed progress. GAO advocates agency strategic planning and using such plans on an ongoing basis as a management tool. We suggest that USTR reconsider its treatment of this report as a one-time policy statement and that it update and improve the report in order to enhance accountability and inform all stakeholders, including Congress and the public. In addition, we received technical and editorial comments from Department of Agriculture and Commerce officials that sought to clarify our description of information they provided about the departments' China-related activities, such as Commerce staffing information, and to correct minor errors. We revised our report, as appropriate, in response to these comments. The Department of State did not provide any comments on our draft report.
As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the U.S. Trade Representative; the Secretaries of Commerce, State, and Agriculture; and interested congressional committees. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions regarding this report, please call me at (202) 512-4128. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO contact and staff acknowledgments are listed in appendix IV. To assist Congress in better understanding the United States Trade Representative's (USTR) reporting on the U.S. government's progress in monitoring and enforcing China's trade commitments, we reviewed two key USTR reports: its annual December 11 report to Congress on China's WTO compliance and its February 2006 top-to-bottom report, U.S.-China Trade Relations: Entering a New Phase of Greater Accountability and Enforcement. We were asked to (1) evaluate the degree to which USTR's annual reports to Congress on China's World Trade Organization (WTO) compliance present the information necessary to clearly understand China's compliance situation and (2) examine the status of USTR's efforts to implement the action items and achieve the objectives presented in its February 2006 top-to-bottom report. To examine the scope and disposition of compliance issues, we reviewed USTR's Report to Congress on China's WTO Compliance from 2002 to 2007. These annual reports, mandated by Congress in conjunction with China's 2001 accession to the WTO, incorporate a broad range of input from key federal agencies, as well as the business community. To assure ourselves that the reports generally included the main compliance issues and concerns that had arisen, we interviewed three key industry associations, which together represent over 1,300 companies in over 40 industries, in Beijing, China, and Washington, D.C., about USTR's annual reports; these groups noted that they were generally satisfied with the reports' portrayal of the compliance situation in China. We identified each unique compliance issue that was reported by USTR in the narrative of each of its annual reports. Our identification was based on USTR's description and definition of problems in the narrative of the report. USTR's categorization of issues in the report, and the manner in which issues were grouped and presented, also guided the identification of individual issues. We did not include areas where China initially complied fully with its commitments and, therefore, no issues were raised; these were considered successes, as reported in table 2. In all, we identified 180 issues in the six annual reports. To analyze the disposition of the compliance issues, we reviewed the narrative descriptions provided in the reports and made determinations according to three broad categories: No Progress Noted, Some Progress Noted, and Resolved. We categorized an issue as "No Progress Noted" if the report text either explicitly stated that no progress had been made or did not indicate that China had undertaken any actions to address the issue.
We categorized an issue as "Some Progress Noted" if the report text indicated that China had undertaken any action to address the issue but had not completely resolved it. We categorized issues as "Resolved" if the report language clearly indicated that the compliance issue was resolved and the U.S. government was no longer pursuing a resolution of that particular issue (see table 9 for additional details). Two of our staff independently identified each compliance issue and made initial determinations of the dispositions. After those staff had reconciled differences in their initial identification and disposition of issues, additional staff reviewed issues and dispositions to ensure consistency and accuracy in the dispositions. We did not attempt to identify the relative importance of the compliance issues because the report text does not provide clear indications that would allow us to make that determination. However, we based our analysis on the premise that all these compliance issues had been considered serious enough by USTR to include in its annual reports. Indeed, USTR reported that it focused the report on trade concerns raised by U.S. stakeholders that merit attention within the WTO context. In some instances, we noted that after a compliance issue in a particular area had been resolved, other issues arose in the same area. For example, in some areas, after a particular commitment was implemented, other restrictions were imposed that made it difficult for U.S. companies to realize the full benefits of the commitment. In those instances, we identified two separate issues and noted their dispositions according to the evidence. As a result, our total count of issues includes several that are related but that were identified as separate problems in USTR's reports. In addition, for the 2004 through 2007 annual reports, we quantified the number of issues for which USTR, in the narrative of the report, mentioned taking various types of actions to resolve the issue. These actions include raising the issue at multiple forums and dialogues with the Chinese, including the U.S.-China Joint Commission on Commerce and Trade (JCCT), the Strategic Economic Dialogue (SED), the Transitional Review Mechanism (TRM), or other forums at the WTO, or raising the issue bilaterally with the Chinese through another mechanism. For example, regarding the concerns from the U.S. telecommunications industry about interference from Chinese regulators regarding standards and contract negotiations, USTR reported that it raised this issue during a 2004 JCCT meeting. Therefore, we noted that USTR took action toward resolving this issue at the JCCT. To assess USTR's progress in implementing the objectives and action items presented in its February 2006 top-to-bottom report, U.S.-China Trade Relations: Entering a New Phase of Greater Accountability and Enforcement, we analyzed the document by identifying the six objectives and each of the associated priority goals. After that, we delineated the 10 action items and each of their associated implementing steps. We created a chart and divided the implementing steps under each associated action item. Next, we asked USTR to (1) identify the agency responsible for implementing each action item, (2) complete the chart, (3) indicate whether the action item was implemented and, if so, how it was implemented, and (4) provide supporting documentation for each response. Since we had observed that the action items were not clearly linked to the report's objectives, we asked USTR to identify which objective each action item addressed.
We also asked the Departments of Commerce and Agriculture to complete a chart for their individual agencies: identify which action items they were responsible for implementing, indicate the status of those action items, and provide supporting documentation for their responses. We asked the Department of State to provide documentation for the one step that USTR said this department was solely responsible for implementing. We conducted this performance audit from March 2007 to April 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Table 10 identifies the 11 action items and the accompanying 25 implementing steps, along with agency responsibilities and status, in the top-to-bottom review. Table 11 is a list of the six objectives and 31 related priority goals in the top-to-bottom review. The following are GAO's comments on the United States Trade Representative's letter dated March 26, 2008. 1. We disagree that our report undervalues the systematic analysis of China's WTO compliance contained in USTR's annual reports. In our report, we acknowledge the detailed narrative information presented by USTR and the consistent format used to present this information in its annual reports on China's WTO compliance. Nevertheless, we believe that the lack of systematic analysis in these reports makes it difficult for Congress and the public to understand the overall compliance situation in China. USTR's annual reports lack not only quantitative measurement and analysis but qualitative summary analysis as well. While there is a significant amount of detailed information contained in USTR's reports, it is difficult to determine the overall progress being made in each of the nine areas. In addition, without systematic analysis of the lengthy narrative descriptions, it is difficult to assess the level of progress being made each year or the trends over time. While there are other ways to conduct such analysis, our work demonstrates that it is possible to provide both systematic qualitative and quantitative analysis to make the information in the report more meaningful and accessible. 2. We disagree with USTR's statement that the variety and nuances of individual compliance issues make summary analysis meaningless. In fact, we believe the opposite is true. As we noted in our report, we identified each unique compliance issue based on the description and definition of problems in the narrative of the report. The categorization of issues in USTR's reports and the manner in which issues were grouped and presented guided our identification of individual issues. We recognize that it is possible to group the issues differently, and if USTR finds another method of grouping and identifying issues, we encourage USTR to use such methods in providing summary analysis in future reports. In addition, we believe that this type of analysis is important to help Congress and the public better understand the overall compliance situation regarding China. Should USTR choose to define issues differently, we are confident that this would not change the overall results and patterns found through our analysis.
Our report discusses the fact that the compliance issues mentioned in USTR's reports may not all equally affect U.S. exports to China. In addition, we acknowledge that the level of progress made on one particular compliance issue might not be equal to the progress made on other issues. We agree with USTR that the relative importance of such issues and the relative progress made should guide decisions about how and when to devote resources to pursue particular compliance issues. That is precisely why we believe that more systematic information included in USTR's annual compliance reports would provide further understanding to Congress and other stakeholders (and therefore help improve decision making regarding China's trade compliance). To the extent that USTR believes it is important to give more weight to certain issues or explain other nuances that surround individual issues and the progress made on such issues, we encourage USTR to incorporate these variables when conducting any summary analysis for future reports. 3. We disagree with USTR that it would be ill-advised to provide a detailed quantitative analysis in USTR's annual reports. We believe that more transparency and clarity in USTR's reports would enhance understanding about China's compliance situation and provide important information for Congress to conduct oversight and for senior policymakers to assess the success of their activities. Such information would promote a more informed discussion about U.S. trade policy toward China among all stakeholders. While USTR could decide to more clearly prioritize the almost 180 issues it reports, we did not advocate ranking these issues. In addition, U.S. trade negotiators could use summary information on China's progress (or lack thereof) in resolving compliance issues to better argue that more needs to be done in certain areas and that the United States expects greater progress overall. Nevertheless, USTR would have many options for how to conduct and present such summary information, giving it the flexibility to mitigate any concerns that negotiators might have. 4. USTR states that our categorization of compliance issues according to "action taken" and "no action taken" is unclear. Our report states clearly that our analysis of actions taken is based solely on the information provided in USTR's annual reports. In addition, USTR stated that it was unclear how we categorized issues when USTR had brought a WTO case after bilateral negotiations. We considered any actions mentioned in USTR's annual compliance reports at any WTO forum, including dispute settlement cases, as "raised at the WTO." Furthermore, we added information to our report to clearly identify the cases filed by the United States against China. 5. USTR states that the "top-to-bottom" report is not a "plan" in the narrow and specific sense used in our report. However, this was not our understanding, based on interviews with USTR staff during our review and our reading of the document. USTR's report on the results of its "top-to-bottom" review addresses our 2004 recommendation for a China unit plan in that the report establishes goals and priorities for the various China Affairs Office activities. GAO advocates agency strategic planning and using such plans on an ongoing basis as a management tool.
USTR's report has many of the characteristics of a good plan: it clearly defines objectives, goals, and action items; provides a detailed discussion of the problems; delineates agency responsibilities; provides specific activities and programs; and identifies human resources needed to achieve action items. In a February 2006 news conference announcing the report on the top-to-bottom review, the then U.S. Trade Representative was asked, among other things, how he would measure USTR's progress on U.S.-China trade issues. He replied that "… the way to measure our performance is to go point by point through the report looking at the issues that I've talked about at the outset…." We believe that USTR should reconsider its treatment of this report only as a one-time policy statement. 6. USTR states that the "top-to-bottom" report did not promise or anticipate the drafting of updates. Based on our analysis of the report, it is implied that updates were going to be provided; however, we agree that the report includes no requirement or explicit promise to present revised objectives and goals, subsequent action items, and the degree of progress in a new version of the report. We therefore removed language indicating that there was an explicit requirement or promise. However, we suggest that USTR reconsider its treatment of this report as a one-time policy statement. Regular briefings are important, but USTR should have a written record that specifies revised objectives and goals, subsequent action items, and the degree of progress toward achieving them. USTR states that, through reports like its annual reports to Congress on China's WTO compliance, it already provides written updates on the "top-to-bottom" report. While USTR provides numerous reports to Congress on its activities, USTR still has not updated the six objectives and 31 priority goals specified in the top-to-bottom report to reflect subsequent developments, nor has it formally assessed progress. We still advocate that USTR update and improve this report, commensurate with its promised actions, to help ensure that it is best positioned to meet its key China trade objectives and to ensure meaningful progress toward achieving them. USTR states that we misunderstood the relationship between action items, on the one hand, and objectives and priority goals, on the other. We found that the relationship between action items and the objectives in the top-to-bottom report was unclear. Therefore, we went through an exercise with USTR staff to identify how and whether short-term action items link to specific long-term objectives and priority goals, and we report on the outcome. In the future, USTR should formally identify these linkages in an updated plan that includes subsequent action items to ensure that it is taking all the steps necessary to achieve its stated objectives. Our recommendations would enhance USTR's accountability and inform all stakeholders, including Congress and the public, about the status of the U.S. objectives and priority goals for trade relations with China. In addition to the individual named above, Adam Cowles, Assistant Director; Diana Blumenfeld; Martin de Alteriis; Rhonda Horried; and Paul Revesz made key contributions to this report. Karen Deans and Marc Molino provided technical assistance.
Congress mandated that the United States Trade Representative (USTR) annually assess China's trade compliance and report its findings to Congress. In addition, USTR conducted an interagency "top-to-bottom review" of U.S. trade policies toward China. USTR's resulting February 2006 report outlined U.S. objectives and action items. GAO was asked to (1) evaluate USTR's annual China trade compliance reports to Congress and the degree to which they present information necessary to fully understand China's compliance situation and (2) examine the status of the plans presented in USTR's February 2006 top-to-bottom report. GAO systematically analyzed the contents of USTR's compliance reports from 2002 to 2007 and reviewed information on the status of agencies' monitoring and enforcement activities. USTR's annual reports to Congress, which detail U.S. industry concerns with China's compliance and progress on resolving such concerns, are very consistent in format and language. However, they lack any summary analysis of the number, scope, and disposition of reported issues that would facilitate understanding of developments in China's trade compliance and better tracking of the effectiveness of U.S. monitoring and enforcement efforts with China. For example, USTR's narrative reports make it difficult to understand the relative level of progress China made in each trade area in a given year. USTR reported issues that spanned nine trade areas and ranged from very specific issues to broader concerns; however, USTR's narrative reports make it difficult to ascertain specific changes or trends. GAO's systematic content analysis quantified the number, type, and disposition of trade issues and identified 180 individual compliance issues from 2002 to 2007. GAO's analysis showed that China resolved a quarter of these issues but made no progress on one-third of them. Also, GAO's analysis revealed that China's progress in resolving compliance issues varied by trade area and has been slowing over time, especially since 2004, when most progress was made. GAO could only partially determine the status of U.S. agencies' implementation of USTR's 2006 top-to-bottom report, which outlines broad objectives and priority goals for U.S.-China trade relations as well as specific action items. GAO found that key trade agencies made considerable progress implementing planned action items. They increased bilateral engagement with the Chinese and expanded monitoring and enforcement capacity by increasing staffing levels and training opportunities, but staffing gaps and limited Chinese language capacity remain challenges at some agencies. However, GAO could not determine agencies' progress toward achieving some U.S. objectives and goals identified in the report. USTR does not formally assess its progress or measure program results. The lack of linkages between U.S. objectives and planned action items, along with undefined terms, makes it difficult to assess whether the steps agencies described taking were effective. Furthermore, the report has not been updated to reflect recent developments.
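GAO's content analysis lends itself to straightforward tallying. The following minimal sketch is illustrative only; the record structure, field names, and sample entries are hypothetical stand-ins rather than GAO's actual data or tools. It shows how issue records, each coded with one of the three dispositions after independent coding and reconciliation, could be summarized overall and by trade area.

    from collections import Counter

    # Hypothetical issue records: one entry per unique compliance issue
    # identified in the narrative of USTR's annual reports, with the
    # disposition assigned after two coders reconciled their initial
    # determinations. The areas, years, and dispositions are examples.
    issues = [
        {"area": "Intellectual property rights", "year": 2004, "disposition": "No Progress Noted"},
        {"area": "Agriculture", "year": 2005, "disposition": "Some Progress Noted"},
        {"area": "Services", "year": 2006, "disposition": "Resolved"},
        # ... one record for each of the 180 issues identified ...
    ]

    def tally(records, key):
        """Count dispositions for each value of the given key (e.g., area or year)."""
        counts = {}
        for record in records:
            counts.setdefault(record[key], Counter())[record["disposition"]] += 1
        return counts

    overall = Counter(record["disposition"] for record in issues)
    total = sum(overall.values())
    for disposition, count in sorted(overall.items()):
        print(f"{disposition}: {count} of {total} issues ({count / total:.0%})")
    for area, counts in sorted(tally(issues, "area").items()):
        print(area, dict(counts))

Tallying by year rather than by area would produce the trend analysis described above, in which most progress occurred by 2004 and slowed thereafter.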
Federal funding for highways is provided to the states mostly through a series of formula grant programs collectively known as the federal-aid highway program. Periodically, Congress enacts multiyear legislation that authorizes the Nation's surface transportation programs, including highway, transit, highway safety, and motor carrier programs. This legislation authorizes the federal-aid highway program and the individual grant programs that comprise it, and it sets overall funding for these and other surface transportation programs. In 1991, for example, Congress enacted ISTEA, which authorized $121 billion for highways for the 6-year period from fiscal years 1992 through 1997, and in 1998 Congress enacted TEA-21, which authorized $171 billion for the federal-aid highway program for fiscal years 1998 through 2003. In 2004, the House and Senate each approved separate legislation to reauthorize the federal-aid highway program, with the House authorizing $226.3 billion and the Senate authorizing $256.4 billion for fiscal years 2004 through 2009. These authorizations provide multiyear "contract authority" that gives the states notice several years in advance of the size of the federal-aid program and the approximate amount of federal funding they may expect to receive. Funding for the federal-aid highway program is provided through the Highway Trust Fund. Established by the Highway Revenue Act of 1956, the Highway Trust Fund is a dedicated source of revenues generated by highway user fees such as taxes on motor fuels, tires, and trucks. TEA-21 established two additional mechanisms to support the dedication of highway user fees to highways. First, the act established guaranteed funding for certain highway, transit, and highway safety programs, including the federal-aid highway program, by using "firewalls" to protect them from competing for funding with other domestic discretionary programs through the congressional budget process. Second, the act provided that the highway program funding authorizations would be adjusted to reflect changes in estimates of Highway Trust Fund revenue, ensuring that funding available for the federal-aid highway program reflected the revenue taken in by the Highway Trust Fund. The Senate and the House have each approved separate legislation to extend the collection of fuel taxes for the Highway Trust Fund, the Senate through 2009 and the House through 2011. Amid concerns that the introduction of more fuel-efficient vehicles and clean fuels may undermine the sustainability of financing the Highway Trust Fund through fuel taxes in the future, both houses also included provisions to create a National Commission to examine future revenue sources to support the Highway Trust Fund and to consider, among other things, the roles of the various levels of government and the private sector in meeting future surface transportation financing needs. Once Congress authorizes funding, the Federal Highway Administration (FHWA) makes federal funding available to the states annually at the start of each fiscal year through apportionments based on formulas specified in law for each of the several formula grant programs that make up the federal-aid highway program. Ninety-two percent of the funds apportioned to the states in fiscal year 2003 were apportioned by formula. The remaining highway program funds were distributed through allocations to states with qualifying projects. The highway programs with apportionments based on formulas are shown in table 1.
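The mechanics of formula apportionment can be sketched compactly. In the illustration below, each state's share of a program's funds is a weighted average of its shares of national totals for several factors; the factor names, weights, and state data are hypothetical stand-ins, not the statutory values for any program in table 1.

    def apportion(program_funds, state_factors, weights):
        """Apportion a program's funds among states as a weighted average of
        each state's share of the national total for each factor. The factor
        names and weights here are illustrative, not statutory."""
        national = {f: sum(s[f] for s in state_factors.values()) for f in weights}
        return {
            state: program_funds * sum(w * factors[f] / national[f]
                                       for f, w in weights.items())
            for state, factors in state_factors.items()
        }

    # Illustrative two-state example with hypothetical data.
    states = {
        "State A": {"lane_miles": 60_000, "vmt": 90e9, "payments": 1.2e9},
        "State B": {"lane_miles": 40_000, "vmt": 60e9, "payments": 0.8e9},
    }
    weights = {"lane_miles": 0.25, "vmt": 0.40, "payments": 0.35}
    print(apportion(1_000_000_000, states, weights))

As discussed next, however, the formula outputs are largely overridden by equity provisions that guarantee each state a minimum share.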
As we reported in 1995, the federal funding formula derives from a complicated set of calculations, a complex process in which the underlying data and factors are ultimately not meaningful because they are overridden by other provisions that yield a predetermined outcome. One reason is the presence of "equity provisions" that ensure that states receive set amounts based on historic funding levels and other considerations. These equity provisions were strengthened after our 1995 report. For example, as table 1 shows, TEA-21's Minimum Guarantee Program ensures that each state's share of apportionments from nearly all federal-aid highway funds is not less than 90.5 percent of that state's percentage share of contributions to the Highway Account of the Highway Trust Fund. Funds from this program accounted for nearly a quarter of all highway funding in fiscal year 2003. Under separate legislation approved by both the House and the Senate, each state's share of apportionments could rise to 95 percent by 2009. Furthermore, as table 1 shows, states receive minimum apportionments regardless of the formula for several grant programs. States have broad flexibility to transfer funds between the various grant programs. For example, states may transfer up to 50 percent of their Interstate Maintenance and National Highway System Program funds to other programs, including the Surface Transportation Program, which, as table 1 shows, has broad eligibility rules. In addition, ISTEA and TEA-21 provided the states broad authority to transfer federal-aid highway funds to transit projects and vice versa. Between fiscal years 1992 and 2002, 47 states and the District of Columbia transferred about $8.8 billion from federal-aid highway funds to transit programs to fund rail line improvements, motor vehicle purchases, new or improved passenger facilities, and other projects. During that same time, about $40 million was transferred from the Federal Transit Administration (FTA) to FHWA for highway projects. Once FHWA apportions funds to the states, the funds are available to be obligated by the states for construction, reconstruction, and improvement of highways and bridges on eligible federal-aid highway routes and for other purposes authorized in law. About 1 million of the Nation's 4 million miles of roads are eligible for federal aid; however, these roads accounted for 85 percent of the vehicle miles traveled on the Nation's roadways in 2001. The roads that are generally ineligible are functionally classified as local roads or minor collectors. About 161,000 miles of federally eligible roadways are on the National Highway System, of which about 47,000 belong to the Interstate Highway System. With few exceptions, federal funds for highways must be matched by funds from other sources, usually state and local governments. The matching requirement on most projects is 80 percent federal and 20 percent state or local funding. In addition to matching federal funds, states and localities spend funds to finance highway capital projects and to maintain existing roadways. The federal-aid highway program is administered by FHWA, whose responsibilities include reviewing periodic transportation improvement plans prepared by state and local governments, approving projects for federal aid, apportioning grant funding to the states, providing technical support, and overseeing federally funded projects. In fiscal year 2004, FHWA received $334 million to provide these services, with an authorized staff level of 2,931 positions.
FHWA personnel are located in Washington, D.C., and in 52 field offices located in each state, the District of Columbia, and Puerto Rico, as well as a regional "resource center" with four offices across the country that provide specialized technical assistance to the field offices and the states. The federal-aid highway program has a considerable regulatory component. As a condition of receiving federal aid, states agree to apply and enforce certain federal laws on federally aided projects, such as the environmental assessment provisions in the National Environmental Policy Act, the Americans With Disabilities Act, the nondiscrimination protections found in the Civil Rights Act of 1964, and others. In addition, states are required to establish goals and to award a set percentage of contracts (the national goal is 10 percent) on federally aided projects to small businesses owned and controlled by socially and economically disadvantaged individuals, including minority- and women-owned businesses. Furthermore, in accepting federal-aid highway funds, states must enact certain laws to improve highway safety or face penalties in the form of either withholdings or transfers in their federal grants. In addition to these penalties, states may apply for and receive highway safety incentive grants through programs administered outside the federal-aid highway program by the National Highway Traffic Safety Administration (NHTSA). For example, states in which the use of seat belts exceeds the national average or improves over time are eligible for incentive grants based on NHTSA's calculation of the annual savings to the federal government in medical costs that resulted from the increased use. In general, there are three possible ways that federal grant funding can influence state spending for a program, as illustrated in figure 1. First, increased federal funding may stimulate, or leverage, additional spending from state resources. For example, a state may have to increase its own spending in order to meet federal matching requirements and obtain federal funds, thus increasing the overall level of spending by more than the amount of the federal grant. Because the federal-aid highway program in most cases requires that states contribute 20 percent of the total cost of a project in order to receive federal matching funds covering 80 percent of the total cost, this suggests that every $1.00 increase in federal funds would support a total spending increase of $1.25 ($1.00 is 80 percent of $1.25), $0.25 of which would be funded with state and local government funds ($0.25 is 20 percent of $1.25). The result of a stimulative effect of federal grant funding is illustrated in the first panel of figure 1, in which an additional $1.00 of federal aid increases spending from state resources by 25 cents, increasing the overall level of highway spending by $1.25. Alternatively, increased federal funding may supplement state spending by adding to what states would otherwise have spent, increasing the overall level of spending by the amount of the federal grant, as illustrated in the second panel of figure 1. To the extent that states maintain their own spending when they receive additional federal funding, either because federal policy requires that they do so or because they do so voluntarily, the additional federal aid supplements state spending.
Finally, states may use increased federal funding to substitute for, or replace, what they would otherwise have spent from state resources, so that the overall level of spending increases by less than the amount of the federal grant. This substitution of federal funds for state funds is illustrated in the third panel of figure 1, in which an additional $1.00 in federal funding results in only a 50-cent increase in total spending because, in response to the influx of federal funds, the state withdraws 50 cents of its own spending on the program and uses these funds for other purposes. The Nation's capital investment in its highway system has more than doubled in the last 20 years, and during that time period as a whole, state and local investment in highways outstripped federal investment in highways, both in the amount of and growth in spending. Between 1982 and 2002, state and local capital investment in highways increased 150 percent, from $14.1 billion to $35.7 billion in real terms, whereas the federal investment increased 98 percent, from $15.5 billion to $30.7 billion in real terms. For every year after 1986, states and localities invested more in the Nation's highways than did the federal government. (See fig. 2.) Most recently, in 2002, states and localities contributed 54 percent of the Nation's capital investment in highways, spending $35.7 billion, while the federal government contributed 46 percent, or $30.7 billion, in real terms. In addition to the billions of dollars states and localities invest in capital highway projects to expand highway capacity or rehabilitate existing highways, states and localities spend additional funds maintaining and policing their roadways. For example, in 2001, states and localities spent about 27 percent of their total capital and maintenance funding on maintenance activities, including fixing potholes, sealing cracks in bridge decks, and fixing highway lighting. Although states and localities still spend more on highway capital investment than the federal government, recently, state and local highway investment has increased at a slower pace than federal highway investment. In addition, state and local investment has decreased in real terms three times since 1996: between 1996 and 1997, between 1999 and 2000, and between 2001 and 2002. Last year, we reported that since TEA-21 was passed, from 1998 through 2001, federal investment increased faster than state and local investment. In real terms, federal investment increased 29 percent, while state and local investment increased 2 percent. This trend of federal investment increasing more quickly than state and local investment continued in 2002. From 2001 through 2002, federal investment increased 8.5 percent, while state and local investment decreased 5 percent in real terms. Thus, from 1998 through 2002, federal investment increased 40 percent, while state and local investment decreased by 4 percent. Figure 3 shows the annual federal and state and local capital expenditures on highways during these years. The general trend of federal investment in highways increasing at a faster pace than state and local investment in highways holds over a longer period of time as well, including the period following the passage of ISTEA in 1991. Although there was some variation on a year-by-year basis, from 1991, when ISTEA was enacted, through 2002, state and local investment increased 23 percent, from $29.0 billion to $35.7 billion in real terms.
During that same time period, federal investment increased 47 percent, from $20.9 billion to $30.7 billion in real terms, as shown in figure 4. Although the reasons for this change in spending patterns by level of government are unclear, tough economic times, in which a majority of states needed to reduce spending to avoid budget deficits, along with large increases in federal funds for highways, may have influenced these spending patterns. For example, a recent survey of states by the National Conference of State Legislatures found that even after the economy began growing following the March 2001 national recession, 36 states still had budget shortfalls, with a cumulative gap of about $25.7 billion. The preponderance of evidence suggests that increases in federal-aid highway grants influence state and local governments to substitute federal funds for funding they would have otherwise spent on highway projects from their own resources. We built on earlier studies to develop a model that analyzed data from 1982 through 2000 to examine whether and to what extent states have substituted increases in federal highway funds for state highway funds. Our preferred model analyzes data from 1983 through 2000 because of the statistical techniques we used. Our analysis suggests that significant substitution has occurred and that the rate of grant substitution increased significantly over the past two decades, rising from 18 percent in the early 1980s to about 60 percent during the 1990s, the periods in which ISTEA and TEA-21 were in effect. Three previous studies of this issue also found that substitution existed, although their estimates of levels of substitution varied. The structure of the federal grant system as a whole may encourage substitution. Specifically, the structure of the federal-aid highway program creates an opportunity for substitution because states typically spend substantially more in state and local funds than is required to meet current federal matching requirements. As a consequence, when federal funding increases, states are able to reduce their own highway spending and yet obtain the increased federal funds. If states substitute some of the increase in federal funds for their own funds, then total highway spending may increase, but not by as much as it would have had substitution not occurred. Our statistical model, which we developed from previous models, estimates that states have used a significant portion of increases in federal highway funding to substitute for state and local funding for highways and that the rate of substitution increased during the 1990s. According to our preferred model, for the entire period from 1983 through 2000, state governments used roughly half of the increases in federal highway grants to substitute for funding they would have otherwise spent from their own resources on highways. When our model examined four separate time periods from 1983 through 2000 that corresponded to the four authorization periods for the federal-aid highway program, the results suggest that the rate of grant substitution increased in the 1990s, during the periods in which ISTEA and TEA-21 were in effect, in comparison to the early 1980s. Specifically, our model suggests that states substituted approximately 18 cents (not statistically significant) of every dollar increase in federal aid from 1983 to 1986 for funds they would have spent on highways from their own resources.
Our model suggests that the substitution rate rose to approximately 36 cents of every dollar increase in federal aid for the period from 1987 to 1991, and that the rate then rose again to approximately 60 cents for every dollar increase in federal aid for the two periods examined in the 1990s: 1992 through 1997 and 1998 through 2000. (See fig. 5.) The rates of grant substitution for the time periods reported in figure 5 are derived from our statistical model of state spending choices and are subject to some uncertainty. While these estimates represent our most likely estimates of the rate at which states substituted federal funds for state and local funds, the actual substitution may be larger or smaller than these estimates. The uncertainty surrounding our estimates can be expressed in terms of a level of confidence that a given range of values encompasses the actual substitution rate. The range of values surrounding each of our estimates is shown in table 2 at a 95 percent level of confidence. The size of each interval provides a sense of the uncertainty associated with our estimates. The intervals associated with the two time periods during the 1980s contain possible values of zero, meaning that we cannot be 95 percent confident that substitution occurred during these periods. In contrast, the range of estimates for both time periods in the 1990s does not encompass zero; these estimates are therefore statistically different from zero, which means that our results imply at least a 95 percent level of confidence that substitution occurred. Our most likely estimates for the two periods we looked at in the 1990s are in both cases just under 60 percent, and we can be 95 percent confident that the actual substitution rate was between 21 percent and 97 percent. These results are roughly consistent with previous studies that, when taken together, also seem to suggest increasing substitution rates over time. In developing our model, we made four primary enhancements to the models used in previous studies. First, we used more recent data on highway expenditures than were available for previous studies. Second, we used a conservative definition of substitution. Our model defined substitution as occurring only when, in response to increased federal highway funds, state and local funds were moved out of highway-related projects altogether. We did not consider it substitution if, in response to increased federal highway funds, state and local funds were moved from highway projects that were eligible for federal aid to highway projects that were not eligible for federal aid. Third, our model is structured to examine substitution rates over time, rather than being limited to one estimate covering all the years included in our study. Finally, compared to previous studies, we employed a more comprehensive collection of factors related to state spending decisions. Combined, we believe these enhancements increase the ability of our model to provide a conservative and more reliable estimate of the extent to which states substitute federal highway aid for spending that would otherwise have come from state and local resources. However, all estimates that are based on statistical models, particularly of complex processes such as the determination of states' budget choices, are subject to uncertainty.
This uncertainty can derive both from choices about what factors to include in a model and from the inherent impreciseness in estimating relationships between one factor (in this case, federal highway grants) and another (state and local highway spending). While we have attempted to take many factors affecting state spending decisions into account, there may be other factors that are not subject to precise measurement, such as the influence of citizen and interest groups on states' funding decisions, that could not be included in our analysis. As a result of the uncertainty in both the data and the statistical formulation of our model, the precision of our estimate, or any other estimate, is limited, and our estimate should be considered one point in a range within which the actual extent of substitution falls, and one piece of a body of evidence on the existence of substitution. (See app. II for additional details on our statistical model.) In commenting on a draft of this report, DOT officials said that to the extent substitution occurred and increased during the 1990s, it was likely due to a number of factors, including changes in states' revenues and priorities. While our analysis specifically took changing economic conditions into account when assessing state spending choices, determining specific causes is beyond the scope of our statistical model. For example, states faced rising demands for health care and education during the 1980s and early 1990s that they may have funded, in part, by reducing their own levels of highway funding effort when federal highway funding increased. Accordingly, our model establishes an association between substitution and increases in federal highway grants; it does not identify the specific causes responsible for these rising rates. Three other studies, including two published in the past 3 years, have reported that states substituted additional federal highway spending for state spending. These studies reported a wide range of estimates for the percentage of federal funds that has been used as a substitute for state and local funds, from zero to nearly 100 percent. The wide range of estimates is the result of different time periods examined, different definitions of substitution, and differences in the statistical methods employed. A study by Brian Knight, which, of the three studies, included the most recent data, found that from 1983 through 1997, roughly 90 percent of increased federal aid was substituted for state highway spending. Knight used a different definition of substitution than we used in our study. Knight defined substitution as occurring when, in response to increased federal highway funds, state funds were moved out of highway-related projects. He did not take into account local spending on highways, which might have mitigated the reduction in state funds. Another study, by Shama Gamkhar, analyzed data from 1976 through 1990 using two different measures of federal grants. Gamkhar reported an average substitution rate of 63 percent when measuring federal grants through grant expenditures (the same measure of federal grants used by the other studies, including our model) and an average substitution rate of 22 percent when measuring federal grants through grant obligations. Gamkhar defined substitution the same way our model did: as occurring when, in response to increased federal highway funds, state and local funds were moved out of highway-related projects altogether. A study by Harry G.
Meyers examined data from 1976 through 1982 and modeled substitution based on two different definitions of substitution. Using a definition of substitution similar to the definition employed in our model, the study found no evidence of substitution during this period. Meyers also modeled the substitution rate based on a different definition, under which substitution occurs when state funds are moved out of federal-aid highway projects, even if those funds are used for highway projects that are ineligible for federal aid. Using this definition of substitution, the study found a substitution rate of 63 percent. The findings of these studies and GAO's results are summarized in figure 6. In this figure, we placed next to our finding the findings of the three models that used the same measure of federal grants and the same or a similar definition of substitution that we did, organizing these chronologically. As can be seen from this figure, generally, those studies with the same or similar definitions of substitution as our model also suggest that substitution rates may have increased over time. Specifically, Meyers reported no evidence of substitution into nonhighway spending from 1976 through 1982; Gamkhar, based on data through 1990, reported higher rates of substitution; and Knight, based on data through 1997, reported even higher rates of substitution, although using a somewhat different definition of substitution. Our model also found evidence of such a trend. In 1996, we reported that the federal grant system as a whole does not encourage states to use federal dollars to supplement their own spending but rather results in states using federal grants to substitute for their own spending. In summarizing research over the past 30 years for a wide variety of federal grant programs, we reported that each additional dollar of federal grant funding substitutes for between 11 and 74 cents of funding states otherwise would have spent. On balance, we found that for every dollar of additional federal aid, states have withdrawn about 60 cents of their own funding. Our 1996 study found that federal grant programs produced a variety of fiscal effects, in part depending on the grant program's structure. For example, grants are considered "open-ended" when there is no limit on federal matching and "closed-ended" when total federal matching funds are capped. The influence of federal matching is essentially the same for both types of grants until a state obtains the maximum federal contribution for a closed-ended grant. After this point, closed-ended grants no longer provide additional matching funds in response to additional state spending. This lack of additional federal matching funds reduces the incentive for states to increase their own spending on aided activities. As a result, we found that open-ended grant programs, for example, Foster Care, Adoption Assistance, and Medicaid, generally stimulated additional spending from state resources because the more states spent of their own resources, the more federal resources they would obtain. In contrast, closed-ended matching grant programs, such as the federal-aid highway program, which place a limit on the total amount of federal funds that states can receive through meeting matching requirements, as well as programs that do not require states to contribute matching funds to receive federal funds, were associated with higher rates of grant substitution and stimulated less additional spending on the aided activity.
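The difference between these grant structures, and the opening it creates for substitution, can be stated arithmetically. The sketch below is our own simplified illustration, not a calculation drawn from the studies discussed above: with an 80 percent federal share, each state dollar draws 4 federal dollars under an open-ended grant, but once a closed-ended grant's cap is reached, additional state spending draws no additional federal money.

    def federal_funds(state_spending, federal_share=0.8, cap=None):
        """Federal money drawn by a given level of state spending under a
        matching grant in which the federal government pays `federal_share`
        of total project cost. cap=None models an open-ended grant; a dollar
        cap models a closed-ended grant such as the federal-aid highway
        program."""
        # State spending of S at a (1 - federal_share) state share supports
        # total spending of S / (1 - federal_share).
        uncapped = state_spending * federal_share / (1 - federal_share)
        return uncapped if cap is None else min(uncapped, cap)

    # Open-ended: more state spending always draws more federal money.
    print(federal_funds(100.0))              # 400.0
    print(federal_funds(150.0))              # 600.0
    # Closed-ended: once the cap binds, extra state spending draws nothing;
    # state spending could fall to 75.0 (the minimum match for a 300.0
    # federal grant) with no loss of federal funds.
    print(federal_funds(150.0, cap=300.0))   # 300.0
    print(federal_funds(75.0, cap=300.0))    # 300.0

Once the cap binds, a state spending above the minimum match can redirect its own funds without forfeiting any federal aid, which is the opening for substitution discussed below.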
The federal-aid highway program is particularly susceptible to substitution because, in general, the current matching requirement for states is not high enough to require states to maintain or increase their spending in order to receive increases in federal funds. In most cases, the federal-aid highway program requires that the federal contribution be no more than 80 percent of the total cost of the project, while the state's matching contribution must be at least 20 percent. If the federal highway program worked to stimulate state spending, this might suggest that every $1.00 increase in federal funds would result in a total spending increase of $1.25 ($1.00 is 80 percent of $1.25), $0.25 of which would be funded with state and local government funds ($0.25 is 20 percent of $1.25). However, because in most cases state funding already exceeds the required state matching contribution, often by large amounts, states are not required to increase or even maintain their level of funding for projects in order to receive increases in federal funds. Several studies have demonstrated that state highway spending substantially exceeds federal matching requirements. The earliest study we reviewed found that, during the 1960s, 38 percent of aggregate state capital spending for noninterstate federal-aid highways was in excess of federal matching requirements. This study found that for the large majority of states, state spending on federal-aid highway system projects exceeded federal matching requirements by more than 10 percent. Another study found that in 1982, state spending on federal-aid highway system projects exceeded the required federal match by more than 19 percent. Other studies that have analyzed the fiscal effects of federal highway aid have also reported that state spending typically exceeds federal matching requirements. In general, states continue to spend more than their required match on federal-aid highway projects. In 2000, the most recent year for which data are available for federal-aid highways, states accounted for approximately 49 percent of all federal-aid-eligible highway capital spending, which is over twice the required 20 percent match on most federal-aid highway projects. Figure 7 shows the variation among states in their highway capital spending as a percentage of total (federal plus state and local) highway capital spending during the period from 1997 through 2000. Although these data include spending on nonfederal-aid-eligible highways and therefore cannot be used to determine precisely to what extent states are exceeding federal matching requirements, they show that in the majority of states, state and local spending accounts for over half of total capital highway spending. The trends in funding and probable substitution described in this report imply that substitution may be limiting the effectiveness of strategies Congress has put into place to help the federal-aid highway program accomplish its overall goals. Congress and DOT have at various times enumerated goals for the federal-aid highway program, and, to meet these goals, Congress has put in place a number of strategies, including increasing its investment in highways and giving states wide latitude in deciding how to use and administer federal grants to best meet their transportation needs. However, because of substitution, the sizable increases Congress provided in federal funding for highways have not translated into commensurate increases in the Nation's overall spending on its highway system.
In part, this is because, while Congress can dedicate federal funds to highways, it cannot prevent state highway funds from being used for other purposes. Congress has also sought to meet the goals of the program through a strategy of emphasizing states' priorities and decision-making. However, substitution may be limiting the effectiveness of this strategy. Although the federal-aid highway program has a considerable regulatory component, from a funding standpoint, the program is to some extent functioning as a cash transfer, general purpose grant program. This raises broader questions about the effectiveness of the federal investment in highways in accomplishing the program's goals and outcomes, for although DOT has created performance measures and outcomes under the Government Performance and Results Act (GPRA), currently there is no link between the achievement of these measures and outcomes and the federal funding provided to the states. Congress and DOT have at various times enumerated goals for the federal-aid highway program to, among other things, enhance safe and reliable travel, promote economic growth, enhance mobility, support interstate and international commerce, and meet national security needs. According to DOT's 2003-08 Strategic Plan, the department's mission is enumerated in 49 U.S.C. 101, which states that "the national objectives of general welfare, economic growth and stability, and the security of the United States require the development of transportation policies and programs that contribute to providing fast, safe, efficient, and convenient transportation…". In establishing the Interstate Highway System, Congress, in the Federal-Aid Highway Act of 1956, stated that the Interstate system was to serve principal metropolitan areas and industrial centers, support the national defense, and connect with routes of continental importance in Canada and Mexico. Current law defines the primary focus of the federal-aid highway program as completion and expansion of the National Highway System, of which the Interstate is a part, to provide interconnected routes that serve, among other things, major population centers, international border crossings, commercial ports, airports, and major travel destinations. Federal law further declares that "…among the foremost needs that the surface transportation system must meet to provide for a strong and vigorous national economy are safe, efficient, and reliable (i) national and interregional personal mobility (including personal mobility in rural and urban areas) and reduced congestion; (ii) flow of interstate and international commerce and freight transportation; and (iii) travel movements essential for national security." To meet the program's goals, Congress has set out a number of strategies, including increasing investment in highways and providing states flexibility to best meet their transportation needs. Furthermore, under Congress' direction, DOT has established strategic goals and performance measures and outcomes for the federal-aid highway program to enhance mobility and economic growth. Among these goals are reducing the growth of congestion on the Nation's highways and improving the condition of the National Highway System. Since the Federal-Aid Highway Act was enacted in 1956, every time Congress has reauthorized the highway program, it has expanded the size or scope of the federal-aid highway program, or both. Since 1991, Congress has provided significant increases in federal spending on highways.
ISTEA's authorization of $121 billion for highways for the 6-year period from fiscal years 1992 through 1997 was a 73 percent increase over the $70 billion authorized in the prior 6-year bill, and TEA-21's authorization of $171 billion for the federal-aid highway program for fiscal years 1998 through 2003 represented an increase of 41 percent over ISTEA's authorization level. In 2004, the House and Senate each approved separate legislation to reauthorize the federal-aid highway program, increases of 32 percent and 50 percent over TEA-21, respectively. Even so, numerous congressional transportation leaders stated that these increases were not enough and that further spending was required to meet the country's needs. Congress has also included features in the design of the federal-aid highway program to attempt to ensure that funds collected by the federal government for highways are used for that purpose. Prior to 1956, federal fuel and motor vehicle taxes were directed to the General Fund of the U.S. Treasury, and there was no relationship between the receipts from these taxes and federal funding for highways. Amid concerns that federal taxes on motor fuel were being used for nontransportation purposes, Congress established the Highway Trust Fund in 1956 and specifically provided that revenues from most highway user taxes would be used to finance the greatly expanded highway program enacted by the Federal-Aid Highway Act of 1956. Despite having a dedicated source of funding, highways competed for federal funding with other forms of domestic discretionary spending through the appropriations process over the years. As a result, Congress often appropriated less money than was authorized, even though sufficient funds were being collected in the Highway Trust Fund to support the authorized levels. Congress therefore took further action in TEA-21, establishing guaranteed spending levels for highway programs that protected them from having to compete for funding through the congressional budget and appropriations process. It also established "Revenue Aligned Budget Authority," directly linking highway revenues collected into the Highway Trust Fund with the apportionments provided annually to the states for their highway programs. Despite congressional efforts to increase the federal investment in the highway system and to ensure that funds collected by the federal government for highways are used for that purpose, due to probable substitution, the sizable increases in dedicated federal funding that Congress has provided for highways have not translated into commensurate increases in the Nation's overall investment in its highway system. Moreover, the effectiveness of Congress' strategy of dedicating federal funds to highways is limited because Congress has no similar ability to prevent state and local highway funds, where most of the investment occurs, from being used for other purposes. Therefore, while Congress can ensure that certain federal moneys are dedicated to highways and given to the states for that purpose, it cannot ensure that state and local highway funds are not used for other purposes. When substitution occurs, some dedicated federal highway funds replace state highway funds, and those state highway funds are then used for other purposes. Congress has also sought to meet the goals of the program by emphasizing the importance of states' priorities and decision-making regarding how to meet their most pressing transportation needs.
One way it has done so is by incorporating return-to-origin features into the program, returning to the states more of the money collected in fuel taxes. TEA-21's Minimum Guarantee provisions ensure that each state receives back from most highway programs 90.5 percent of the total estimated percentage share of contributions to the Highway Account of the Highway Trust Fund from motor fuel and other taxes collected in that state. Under separate legislation passed by both the House and the Senate in 2004, this amount could rise to 95 percent by 2009. (A brief illustration of this computation follows below.) In addition, Congress has given the states broad flexibility in the use of their federal-aid grant funds by providing states significant discretion to use these funds flexibly across highway, bridge, transit, and other transportation projects. States have, if they choose, broad flexibility in the use of slightly more than half of their federal-aid highway funds. For example, the Surface Transportation grant program has broad eligibility rules, and states can use those funds for highways, bridges, transit capital projects, bus terminals, and many other uses. States may use some of their Minimum Guarantee Program grant funds under the same rules; in fiscal year 2003, the funds apportioned under these two programs accounted for one-third of all federal-aid highway funds apportioned nationwide. For eight states that receive higher levels of Minimum Guarantee grant funds, these two programs account for more than 40 percent of their funding, and in one of these eight states, for just over 50 percent. While other federal-aid highway grant funds have more limited uses, states have the authority to transfer funds from these limited programs to more flexible programs and uses. For example, states may transfer up to 50 percent of their National Highway System and Interstate Maintenance program funds to the Surface Transportation Program or certain other grant programs, and, in the case of the National Highway System program, 100 percent under certain conditions. Furthermore, states have broad flexibility in deciding which projects to pick and how to implement them. The projects for which states use federal funding must be for construction, reconstruction, and improvement on eligible federal-aid highway routes. Nevertheless, federal law (23 U.S.C. §145) provides that the authorization or appropriation of federal funds "shall in no way infringe on the sovereign rights of the States to determine which projects shall be federally financed." Moreover, FHWA's role in overseeing the design and construction of most projects is limited. Specifically, only high-cost construction or reconstruction projects on the Interstate Highway System are always subject to "full" oversight, in which FHWA prescribes design and construction standards, approves design plans and estimates, approves contract awards, inspects construction progress, and renders final acceptance when projects are completed. For projects that are not located on the National Highway System, states are required to assume oversight responsibility for the design and construction of projects unless a state determines that it is not appropriate for it to do so. As figure 8 shows, in 2002, about $1 out of every $5 obligated for federal-aid projects occurred on the Interstate system, while projects off the National Highway System accounted for about 57 percent, nearly 3 times as much.
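The return-to-origin computation referenced above is itself simple. The sketch below uses hypothetical shares rather than actual state data: it raises a state's share of total apportionments, if necessary, to the guaranteed floor.

    def guaranteed_share(formula_share, contribution_share, floor=0.905):
        """Apply a Minimum Guarantee-style floor: a state's share of total
        apportionments must be at least `floor` times its share of
        contributions to the Highway Account (90.5 percent under TEA-21;
        95 percent under the 2004 House and Senate proposals)."""
        return max(formula_share, floor * contribution_share)

    # Illustrative: a state contributing 3.0 percent of Highway Account
    # revenue but apportioned only 2.5 percent under the formulas would be
    # raised to 2.715 percent (0.905 * 3.0 percent).
    print(guaranteed_share(0.025, 0.030))  # 0.02715
    # A state already above the floor keeps its formula share.
    print(guaranteed_share(0.035, 0.030))  # 0.035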
Substitution may be limiting the effectiveness of Congress' strategy of emphasizing the role of states' priorities and decision-making regarding how to meet their most pressing transportation needs. The program does have a substantial regulatory component that requires states to enact and follow certain laws as a condition of receiving federal funds; for example, states are required to enact drunk-driving laws, such as .08 blood alcohol laws, and to contract with disadvantaged business enterprises. However, from a funding standpoint, the federal-aid highway program's return-to-origin features and flexibility, combined with substitution and the use of state and local highway funds for other purposes, means that the program is, to some extent, functioning as a cash transfer, general purpose grant program. This raises broader questions about the effectiveness of the federal investment in highways in accomplishing the program's goals and outcomes.

Our findings on substitution lead to broader questions about whether the federal-aid highway program is effective in meeting its goals. As required by the Government Performance and Results Act (GPRA), DOT has articulated goals for the department's programs, including the federal-aid highway program, and has established measurable performance goals, measures, and outcomes for achieving them. One of the purposes of GPRA is to provide decisionmakers a means of allocating resources to achieve desired results. Linking resources and results will become even more important in the years ahead, as the Nation faces a fiscal crisis in which mandatory commitments to Social Security and Medicare will consume a greater share of the Nation's resources, squeezing the funding available for discretionary programs, potentially including highways. These challenges require the Nation to think critically about all existing government programs and commitments.

Among its performance goals, DOT has articulated goals for mobility and economic growth, including to improve the condition of the transportation system, reduce travel times, and increase access to and reliability of the transportation system. Two major performance measures related to the federal-aid highway program are to (1) improve the percentage of travel on the National Highway System meeting pavement performance standards for acceptable ride and (2) slow the growth of congestion--in particular, to limit the annual growth of urban area travel time under congested conditions to one-fifth of 1 percent below the growth that has been projected. These goals are shown in figure 9.

Although DOT has articulated performance measures, the federal-aid highway program does not have the mechanisms to link funding levels with the accomplishment of specific performance-related goals and outcomes. In contrast, NHTSA has some incentive grant programs that link funding to particular outcomes, such as increasing the use of seat belts within states. As we have reported, although a variety of tools are available to measure the costs and benefits of transportation projects, they often do not drive investment decisions, and many political and other factors influence project selections. For example, the law in one state requires that most highway funds, including federal funds, be distributed equally across all the state's congressional districts.
Consequently, there is currently no way to measure how funding provided to the states is being used to accomplish particular performance-related results such as reducing congestion or improving conditions.

We identified several options for the design and structure of the federal-aid highway program that could be considered in light of the issues raised by our findings. One set of options, drawn from other federal programs, could limit substitution. Another option would be to move the program toward a simpler, more flexible approach. A third option would be to consider whether a different program structure and different financing mechanisms could be used to target funding and more closely align resources with desired results.

To increase the extent to which federal-aid highway program funds are used to supplement state highway funds rather than substitute for them, several options exist to redesign the program to limit substitution. These include:

Revising federal matching requirements to increase the percentage of projects' costs that must be paid for with state and local funds.

Instituting the use of funding formulas that reward states that increase state and local highway funding by increasing their federal funding, while reducing the federal funding of those states that do not.

Adding a requirement that states maintain their own level of highway spending effort over time in order to receive additional federal funds.

All three options are designed to reduce or eliminate substitution. The first two options are designed to stimulate additional state spending on highways, while the third option is designed so that increased federal funding will supplement state spending rather than replace it. These objectives may not be perfectly achieved because models of substitution, like any models, produce estimates that are subject to uncertainty. As such, there is no way to objectively determine with certainty what states would have spent in the absence of increased federal funding. Table 3 summarizes the options, along with possible approaches that could be taken in implementing them. Each of these options and approaches would be likely to have somewhat different effects and would require careful consideration of various factors. Some possible effects are summarized below; see appendix IV for additional discussion of these options.

The likely effect of revising the matching requirement would depend on the magnitude of the change. For example, if the requirement were changed so that states generally had to provide 60 percent of the total funding for eligible projects, states currently spending less than 60 percent of total highway funds for eligible projects would have an incentive to increase their spending in order to obtain the maximum federal match, while those spending more than 60 percent would not have an incentive to increase their spending. A few states with a low state/federal spending ratio might have to more than double their current spending in order to receive additional federal funds. Setting the required match at 40 percent would give fewer states an incentive to increase their spending and would generally require less of an increase in spending from those states with low state/federal spending ratios.
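The incentive created by a higher matching requirement reduces to a small calculation. The sketch below, using hypothetical figures, shows the maximum federal funds a state's own spending can support under a required state share, and the state spending needed to absorb a given federal apportionment.

```python
def max_federal_match(state_spending: float, required_state_share: float) -> float:
    """Maximum federal funds a state can draw given a required state share.

    With a required state share of 0.60, every dollar of state spending
    supports 1/0.60 dollars of total project cost, of which the federal
    government may pay the remainder. All figures are hypothetical.
    """
    total_supported = state_spending / required_state_share
    return total_supported - state_spending

# A state spending $300M of its own funds under a 60 percent requirement
# could draw at most $200M in federal funds; to absorb a $500M federal
# apportionment, it would need to raise its own spending to $750M.
print(max_federal_match(300e6, 0.60))  # 200,000,000.0
print(500e6 * 0.60 / 0.40)             # 750,000,000.0
```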
An advantage of continuing to set the state match at 20 percent, but counting toward the match only state spending in excess of what each state spent during a base time period, is that it would stimulate state spending in all states to a similar degree.

Using funding formulas that link federal funds to states' highway funding effort could also be achieved through various approaches. For example, providing federal funds to states proportionally based on their effort in comparison to the average effort of all states would put states in competition with each other, rewarding states whose funding effort is already high and penalizing states whose funding effort is currently low. On the other hand, providing federal funds to states proportionally based on each state's own effort during an initial base time period would put each state in competition with the funding effort it made in the base period, rewarding states whose spending grew more quickly in comparison to their spending during the base period and penalizing states whose spending stayed the same or dropped when compared to their spending during the base period. Such provisions could be designed so they could be suspended in a recession or severe economic downturn in order to prevent states from having to make disproportionate reductions in other state services to maintain highway funding.

Instituting a maintenance of effort provision would require each state to continue to spend what it spent in a defined base period, plus inflation, in order to obtain increased federal funds. Therefore, it would not stimulate state spending, but it would attempt to ensure that states used federal funds to supplement rather than replace state and local funds. In previous work, we concluded that, to be effective, maintenance of effort provisions need to define a minimum level of state spending effort that can be objectively quantified and updated to keep pace with inflation in program costs, so that the provision ensures a continued level of activity when measured in inflation-adjusted dollars. This could be achieved by defining a state's base spending level as the amount spent per year during a recent historical period and then adjusting that base spending level for inflation.

Another potential option would be to build on trends giving states greater flexibility and discretion with their federal-aid highway program funds. In contrast to changes in program designs that would limit substitution, adopting such an option could be seen as recognizing substitution as an appropriate response on the part of states to increasing fiscal challenges and competing demands. Adopting such an option could also be seen as recognizing that the ability of states to meet a variety of needs and fiscal pressures might be better accomplished by providing states with federal funding for highways through a more flexible federal program.

Such an option would also recognize the changing nature of FHWA's role and the federal-aid highway program. Currently, FHWA reviews and approves transportation plans and environmental reviews, and--on some projects--designs, plans, specifications, estimates, and contract awards. FHWA also has duties related to the program's considerable regulatory component. To carry out these responsibilities, FHWA has one of the largest field office structures in DOT, and a larger field structure than many other federal agencies.
FHWA has personnel in over 50 field offices, including one office in each state, and has had a field office in each state since 1944. However, the federal-aid highway program has changed considerably in 60 years. In 2004, the program's return-to-origin features and flexibility, combined with substitution and the use of state and local highway funds for other purposes, means that from a funding standpoint, the federal-aid highway program is, to some extent, functioning as a cash transfer, general purpose grant program. Devolving funding responsibilities to the states in a manner consistent with that function would build on the flexibilities already present and obviate much of the need for FHWA's extensive field organization, allowing it to be greatly reduced in scope. This could produce budgetary savings of some portion of FHWA's $334 million annual budget. Adopting such an option would involve weighing numerous factors, including FHWA's role and value.

But devolving funding responsibilities to the states would not require abandoning the program's regulatory component. Some federal laws and requirements in place originated outside the transportation program and would doubtless remain in force, such as civil rights compliance. Others that are currently part of the transportation program could also remain in effect. Depending on priorities, these could continue to be overseen by FHWA directly, or a process could be established through which states certify their compliance with the requirements, as is done in other programs. In this manner, it would be possible to enforce these laws and requirements without an extensive field structure, as other federal agencies and programs do.

Devolving authority to the states could also take the form of devolving not only the federal programs, but also the revenue sources that support them. Considerable federal effort goes into collecting and accounting for motor fuel taxes and other highway user fees. One argument for maintaining a federal fuel tax is that it may be a useful public policy tool for preventing tax competition between states and thereby avoiding the disinvestment in the highway system that such competition could produce. Such a "turnback" provision was considered in the form of an amendment to TEA-21 in the House of Representatives in 1998, but it did not pass.

Devolving federal responsibilities to the states is not dissimilar to the Surface Transportation System Performance Pilot Program that was proposed in the administration's reauthorization proposal, but which was not included in either the House or Senate version of the bill. Up to five states could have participated in the program, which would have allowed a state to assume some or all of FHWA's authorities and responsibilities under most federal law or regulations. Once approved to participate, a state would have had to identify annually what goals it wanted to achieve with its federal funds and what performance measures it would use to gauge success. A state would also have had to agree to a maintenance of effort requirement that it maintain its total combined state and federal highway program expenditures at no less than the average level of the three previous years. A state's participation in the pilot program would have been terminated if that state did not achieve the agreed performance for two consecutive years.
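As a rough illustration of how such a maintenance of effort floor might be computed, the sketch below implements the pilot program's three-prior-year average. The optional inflation adjustment reflects the approach discussed earlier in this report and is not part of the pilot proposal; all figures are hypothetical.

```python
def moe_floor(prior_three_years: list[float], inflation: float = 0.0) -> float:
    """Maintenance-of-effort floor as the average of the three prior years'
    combined state and federal highway expenditures. The inflation argument
    (cumulative, e.g. 0.03 for 3 percent) is an optional adjustment in the
    spirit of our earlier work, not part of the pilot proposal."""
    base = sum(prior_three_years) / 3.0
    return base * (1.0 + inflation)

# Hypothetical: a state spent $1.1B, $1.2B, and $1.3B in the three prior
# years; the unadjusted floor is $1.2B, or $1.236B with 3 percent inflation.
print(moe_floor([1.1e9, 1.2e9, 1.3e9]))        # 1,200,000,000.0
print(moe_floor([1.1e9, 1.2e9, 1.3e9], 0.03))  # 1,236,000,000.0
```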
Another option could be to consider whether a different program structure and different financing mechanisms could be used to target funding and more closely align resources with desired results. Restructuring the program in this way could take several forms. For example, the program could be reoriented to function more like a competitive discretionary grant program, in which program sponsors justify projects seeking federal aid based on an assessment of their potential benefits. This is not dissimilar to the program used by DOT to fund large transit capital projects. The program could also be revised to include the use of incentive grant programs similar to those that NHTSA has to link funding to particular outcomes, such as increasing the use of seat belts within states. Adopting such an option would require asking the following questions:

What policy goals have been established by Congress for the performance of the federal-aid highway program, what outcomes and results have been articulated in DOT's strategic plans to fulfill those goals, and are they the right goals and outcomes?

What is the appropriate role of each level of government? Would the roles need to be redefined in order to align federal spending more closely with a greater performance and outcome orientation? In particular, what refocusing of federal involvement (e.g., interstate commerce, homeland security, national defense) would need to occur?

How could the design of the federal-aid highway program's grants and funding mechanisms best support accomplishment of agreed-upon performance goals and outcomes? What funding incentives are needed to introduce a greater performance and outcome orientation?

What type of departmental administrative structure for the federal-aid highway program would best ensure that the performance goals established by Congress and articulated in DOT's strategic plans and outcomes are measured and accomplished?

Can a greater performance and outcome orientation to the federal-aid highway program be reconciled with congressional and state legislative policies and preferences toward providing at least some transportation funding in the form of specific project earmarks?

Addressing the issues raised in this report would require weighing competing and sometimes conflicting options and strategies. If, for example, reducing the level of grant substitution is an important concern, then design changes in the current program, including adopting features that have been used in other federal programs, may be warranted. If, on the other hand, preserving states' flexibility, including their ability to meet a variety of needs and fiscal pressures, is a higher priority, then design changes in the direction of a different, more flexible program may be warranted. While some options are mutually exclusive, others could be enacted in concert. For instance, an option to limit substitution could be combined with efforts to align resources with desired results, and returning program authorities and resources to the states could be accompanied by adding performance measures. Beyond these options, our work raises broader and more fundamental issues given the challenges the Nation faces in the 21st Century.
The fact that both the federal and state governments face budget deficits totaling hundreds of billions of dollars and a growing fiscal crisis requires policymakers to think critically about existing government programs and commitments and make tough choices in setting priorities and linking resources to results to ensure that every federal dollar is wisely and effectively spent. The opportunity to better align the federal-aid highway program with performance goals and outcomes comes at a time when both houses of Congress have already approved separate legislation to create a National Commission to examine future revenue sources to support the Highway Trust Fund and to consider the roles of the various levels of government and the private sector in meeting future surface transportation financing needs. The proposed commission is to consider how the program is financed and the roles of the federal and state governments and other stakeholders in financing it; the appropriate program structure and mechanisms for delivering that funding are important components of making these decisions. Thus, this commission may be an appropriate vehicle through which to examine these options for the future structure and design of the federal-aid highway program.

In light of the issues raised in this report and the fiscal challenges the Nation faces in the 21st Century, Congress may wish to consider expanding the proposed mandate of the National Commission to assess possible changes to the federal-aid highway program to maximize the effectiveness of federal funding and promote national goals and strategies. Consideration could be given to the program's design, structure, and funding formulas; the roles of the various levels of government; and the inclusion of greater performance and outcome-oriented features.

We provided DOT a draft of this report for review and obtained comments from departmental officials, including FHWA's Director of Legislation and Strategic Planning. These officials said that our analysis raised interesting and important issues regarding state funding flexibility and the federal-aid highway program that merit further study. DOT officials also stated that while they recognize that federal-aid highway grants can influence state and local governments to substitute federal funds for state and local funds that otherwise might have been spent on highways, they believe that this substitution is likely due to numerous factors. Specifically, the officials said that to the extent substitution occurred and increased during the 1990s, it was also likely due to changes in states' revenues and priorities. DOT officials also emphasized that regardless of changes in the availability of state funds for highway programs, the overall federal share of capital spending on highways declined during the period we studied, from over 55 percent in the early 1980s to around 45 percent today. DOT officials also emphasized that there is no evidence that the substitution discussed in our report resulted in the diversion of federal-aid highway funds apportioned to the states. They further stated that substitution may reflect appropriate resource allocations by states and that preserving states' flexibility has been a priority of the federal-aid highway program and is a goal of DOT's reauthorization proposal.
Finally, regarding options for changes in the design of the federal-aid highway program, officials emphasized that FHWA adds considerable value to the federal-aid highway program by providing program oversight and sharing its expertise with states to ensure that states uniformly address key areas of national concern, including safety and environmental protection.

We agree with DOT's characterization of the importance of the issues raised in this report, including the effect that federal-aid highway grants have on state spending decisions and states' funding flexibility. We also agree with DOT officials that many factors influence state budgetary decisions, including changing state budget priorities and the availability of state revenues. It was for this reason that we used a statistical model that specifically took changing economic conditions and revenues into account in order to better isolate the effect of federal grants on state spending choices. We believe that our model has reasonably distinguished between the effects of changing economic conditions and revenues and the effect of federal grants, and, consistent with earlier models and studies, we found the relationship between federal grants and state spending, indicating substitution, to be statistically significant, particularly during the 1990s. However, determining specific causes of substitution is beyond the scope of our statistical model. For example, while states faced rising demands for health care and education during the 1980s and 1990s that could have resulted in states reducing their highway spending when federal highway funding increased, our model does not identify the specific causes responsible for rising substitution rates.

Although DOT officials said that the overall federal share of capital spending on highways declined during the period we studied, these relative shares do not affect our findings on substitution, since substitution can occur when the federal share of funding is either rising or falling; if substitution occurs when state funding is rising, it simply means that state spending increased less than it might have had there been no substitution.

While DOT officials stated that there is no evidence that substitution resulted in the diversion of federal-aid highway funds, there are important differences between diversion and substitution. In the context in which DOT officials raised it, diversion is the transfer of federal funds for purposes other than those authorized by law, while substitution, as we have reported it, is the transfer of state funds that would have otherwise been spent on highways. States can both use federal funds for the purposes authorized by law and at the same time substitute federal funds for state funds. Thus, while we agree that there is no evidence that substitution resulted in the diversion of federal-aid highway funds, we do not believe our report suggests the existence of such evidence.

Finally, we agree with DOT officials that states' flexibility and FHWA's role are important factors in the federal-aid highway program; however, we believe that options for changing the design, structure, and funding mechanisms of the federal-aid highway program should be considered in light of substitution and the issues raised in this report, and that a variety of factors, including but not limited to these two, should be weighed when considering such changes.
While the department took no position on the matter for congressional consideration regarding expansion of the proposed National Commission's mandate, officials did state that they believe these issues merit further study. We continue to believe that Congress has the opportunity to maximize the effectiveness of federal funding and promote national goals and strategies by expanding the proposed mandate of the National Commission to consider these issues.

We are sending copies of this report to the Honorable Norman Mineta, Secretary of Transportation. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at heckerj@gao.gov, or (202) 512-2834, or contact Jerry Fastrup at fastrupj@gao.gov or (202) 512-7211, or Steve Cohen at cohens@gao.gov or (202) 512-4864. GAO contacts and acknowledgments are listed in appendix V.

In light of the increasing federal-aid highway program funding and concerns over future federal revenues for highways, you asked us to provide information on past trends in the federal, state, and local capital investment in highways, and how federal-aid highway program grants influence the level of state and local highway spending. We responded to the first part of your request in June 2003. This report (1) updates information on trends in federal, state, and local capital investment in highways; (2) assesses the influence that federal-aid highway grants have had on state and local highway spending; (3) discusses the implications of these issues for the federal-aid highway program; and (4) discusses options for the federal-aid highway program that could be considered in light of these issues. In addition, this report identifies characteristics associated with differences among states' levels of effort for highways (see app. III).

To update information on federal, state, and local capital investment in highways, we obtained 2002 (the most recent year available) expenditure data from the Federal Highway Administration. We converted these expenditure data to 2001-year dollars to coincide with the data in our previous report, which presented data from 1982 through 2001.

To assess the influence that federal-aid highway grants have had on all state and local highway spending, we reviewed and synthesized the research literature on this issue. Our literature review revealed a number of studies that used statistical models to estimate the influence of federal funding on state spending. These models examined different time periods, employed different statistical methods, and considered different potential social, demographic, economic, and political factors that may affect state highway spending decisions. None of the models used in the studies we reviewed included the most recent data now available on highway funding, and none examined whether the effect of federal grants on state spending changed during the time period covered in the study. Therefore, based on the models used in the earlier studies, we developed our own statistical model of state highway capital and maintenance outcomes to estimate the fiscal effects of federal highway funding on state highway spending. The purpose of our statistical model was to isolate the effect of federal grants on highway spending in states by controlling for other factors that affect state spending decisions.
Our model therefore considered a wide range of potential factors, such as economic conditions and the size of a state's highway system, that may affect state spending choices. In addition, our model included the most recent data available and examined whether the effect of federal grants on state spending changed during the time period. A more detailed description of the literature and our statistical model is contained in appendix II. Finally, our model was reviewed by experts in the Department of Transportation (DOT) and peer reviewed by three authors of the earlier studies on the fiscal effects of federal highway grants. These experts and authors generally agreed with our methods, and we made revisions based on their comments as appropriate.

To address the implications of the effect of federal highway grants on state and local highway spending and options raised by these implications, we reviewed pertinent legislation and congressional actions affecting the federal-aid highway program, including goals, funding trends, program features, and financing mechanisms. We reviewed the Government Performance and Results Act and DOT's strategic and performance plans and reports for 2003 and 2004. We then evaluated how our model results and other analysis on the existence of substitution affect the design and performance of the federal-aid highway program.

Finally, to identify state characteristics associated with their effort to fund highways from state resources, we defined a state's level of effort broadly to include both a state's and its local governments' spending for highway maintenance and capital construction relative to the personal income of state residents. We determined that a multivariate analysis was required so that other factors, in addition to the state characteristic under consideration, could be taken into account and held constant. (See app. III for results.) To perform this multivariate analysis, we used the same statistical model of state highway spending used to analyze the fiscal effect of federal highway grants (see app. II). The variables expected to affect state highway spending fall into four broad categories: (1) fiscal capacity, (2) the cost of transportation services to the representative voter/consumer (tax price), (3) federal grants, and (4) indicators of state preferences for highway spending. The specific variables we considered are listed in table 6. We conducted our work from August 2003 through July 2004 in accordance with generally accepted government auditing standards.

This appendix presents a thorough description of the statistical analysis that we conducted to estimate the extent to which states substitute federal highway grants for funds that would have been spent on highways from their own resources. The first section summarizes the literature on this topic because we built upon models from previous studies in developing our model. The next section describes the model that we developed. The final section describes the statistical tests that we used and presents the results of those tests.

We reviewed a number of studies on substitution and relied most heavily on the models used in three of them in developing our statistical model. The three studies are similar in that each draws upon economic models that explain states' highway spending in terms of the demand for mobility that flows from the construction and maintenance of a highway network.
Within the context of these models, the potential for grant substitution arises in the response of state highway spending to changes in federal grant funding. However, these models differ in key details, such as the statistical methods used to estimate the extent of substitution, the definition of state highway expenditures, and the control variables used in the model. They also differ in their estimates of substitution rates.

The models in each of the key studies are built upon the premise that the political process responds to the preferences of voters/consumers for highway transportation services. As a result, the models characterize the demand for and supply of highway spending as depending on four types of factors:

1. Fiscal capacity (FC), which is the ability of states to fund services using their own resources;

2. The tax price (TP) faced by the typical voter/consumer of highway services, which can be thought of as the cost of an additional unit of mobility;

3. Intergovernmental grant funding (G), including both grants intended for highways and grants for other public services; and

4. Differences in voter/consumer preferences (P) for highway transportation services.

This relationship can be summarized as follows:

State Highway Expenditure = f(FC, TP, G, P)

In these models, greater tax-paying capacity is expected to result in a higher demand for mobility that in turn increases the demand for a larger highway network. Similarly, more grant funding (both for highways as well as for other public services) increases the resources available to states and is expected to increase total highway spending. Differences in political culture are also expected to result in different preferences for transportation services relative to other public services, such as health and education. Finally, if the typical voter/consumer faces a higher unit cost of transportation services, also called the tax price of highway services, the demand for transportation services is likely to be lower. The tax price of highway services is, in turn, dependent upon several factors:

1. A higher cost of inputs (labor, building materials, supplies, etc.) used to build and maintain highways results in more expensive transportation services. A higher unit cost of mobility is expected to reduce the demand for transportation services, but will increase highway spending as long as the demand for transportation services is price inelastic.

2. Economies and/or diseconomies of scale may also affect the unit cost of mobility. A required minimum facility size may result in more lane miles per resident in smaller states, which may result in a higher unit cost for the typical voter/consumer. Similarly, very low lane miles per resident may be associated with more intensive usage, which may also result in a higher unit cost. Thus, unit cost may be U-shaped.

3. A greater number of voters/consumers with whom the cost of highway services may be shared is expected to reduce the unit cost to the typical state voter, increasing the demand for transportation services. This will result in higher total highway spending and lower spending per voter so long as demand is price inelastic.

4. More highway users may lead to greater deterioration in the quality of highways and greater congestion, raising the unit cost of transportation services to the typical voter/consumer and reducing the demand for highway services. The effect on spending is expected to be positive if demand is price inelastic.
In addition to cost considerations, more users could also be thought of as reflecting a stronger preference for highway services relative to other goods and services.

5. Matching grants on the marginal dollar of highway spending reduce the unit cost of services to the typical voter/consumer. To the extent that matching requirements apply to additional state spending, the typical voter/consumer pays a smaller share of additional spending, lowering the cost of additional spending to the typical voter/consumer and raising the demand for highway services.

In table 4, we summarize three studies that are representative of the variety of models that have been considered in the literature and upon which we base our analysis. The three studies employ a variety of statistical methods in estimating the substitution effect of federal highway grants. All use simultaneous equations estimators, but they treat different variables as endogenous. Knight and Gamkhar treat federal grant expenditures and state own-source highway expenditures as jointly determined and therefore use an instrumental variable estimator for their per capita federal grant variable to remove the endogenous component associated with this variable. In contrast, Meyers does not treat per capita federal highway grants as an endogenous variable and may have a biased estimate of the substitution rate. He does, however, treat the effective matching rate associated with highway grants (i.e., the ratio of highway grants to total highway spending) as endogenous and uses an instrumental variable procedure to correct for potential bias in that variable.

Both Gamkhar and Meyers find autocorrelation in their error terms and, therefore, make an adjustment for autocorrelation. The Knight study does not correct for autocorrelation. Finally, both Knight and Gamkhar use a fixed effects estimating procedure to control for unique circumstances across states that are not captured by the other control variables included in their models. Neither study reports the significance of fixed effects in its model. In addition, Gamkhar also includes time dummy variables to capture systematic effects over time that the other control variables do not capture. Knight does not include a time adjustment in his model. Meyers includes neither a fixed effects nor a time adjustment.

Each of the three studies defines state highway spending differently, which has important implications regarding how grant substitution is measured and influences the interpretation of the studies' results. The earliest study, by Meyers, includes state capital spending only for projects eligible under the federal-aid highway program, excluding spending for interstate highways. Measuring the dependent variable in this way means the highway grant coefficient measures only the response of state capital spending on federal-aid highway projects to changes in federal funding. As a consequence, Meyers counts increased state or local spending for maintenance on federal-aid highway projects, or increased state or local capital and maintenance spending on nonfederal-aid highways, as grant substitution in the same way that increased spending for other state services, such as education and health, or increased state taxpayer relief would be counted as substitution. In contrast, Knight defines state highway spending more broadly to include all highway spending by state governments, whether for federal-aid highways or for other state highway projects.
However, Knight does not include local spending on highways in his definition of state highway spending. As a consequence, increased state maintenance spending on federal-aid highway projects or spending on state government highway projects that are not part of the federal-aid system is not considered grant substitution in his study, even though such spending is not eligible for federal assistance. However, increased highway spending by local governments is considered to be grant substitution, in the same way that increased state or local spending for other state services and increased tax relief are considered substitution. Finally, the Gamkhar study defines highway spending to include both capital and maintenance spending by both state and local governments. This study, therefore, counts only increased state or local spending for nonhighway purposes, including increased tax relief, as representing grant substitution.

All three studies use federal grant expenditures to measure federal grants received by states. This variable is statistically significant in all studies. In addition to grant expenditures, Gamkhar also considers grant obligations as an alternative measure. Since obligated funds are available for expenditure for several years, she included this variable with lagged values.

The reported estimates of substitution rates associated with federal highway grants vary across the three studies. These differences are, in part, due to differences in the time periods studied, the definitions of state highway spending, and the statistical methods employed. Among the highlights of the studies were the following:

Knight's study reports a grant substitution rate of over 90 percent for the period from 1983 to 1997. Knight defines substitution as the reduction in state (but not local) government spending on all highway-related projects.

Gamkhar reports a substitution rate of 63 percent for the period 1976 through 1990. Gamkhar defines substitution as the reduction in state and local government spending on all highway-related projects; Gamkhar measured federal grants using grant expenditures. When grants were measured using obligations rather than actual grant expenditures, a lower substitution rate of 22 percent is reported.

Meyers also reports a 63 percent substitution rate for the period 1976 through 1982. Meyers defines the substitution rate as the reduction in state and local government spending on federal-aid eligible highway projects net of spending on the Interstate Highway System; federal grants are measured using grant expenditures. However, when he defined substitution as the increase in state and local government nonhighway spending, he reports no substitution.

Table 5 summarizes the definitions used and findings of these three studies. To isolate the effect of federal highway grants on state highway spending, these studies include additional variables in their models to control for other factors also related to state spending. Some of the control variables are similar across the studies, but others differ. All three studies use per capita personal income to represent states' funding capacity, and in each study the variable is found to be statistically significant. All three studies include a wide variety of variables that are intended to capture various components of the tax price faced by the typical voter/consumer.
All three studies measure financial variables in real dollars by adjusting for price level differences over time but otherwise do not explicitly include an input cost adjustment as a tax price proxy, except to the extent that the fixed effects procedures employed by Knight and Gamkhar capture these differences. Only Meyers uses an indicator of highway system size: lane miles on federal-aid highways. While this variable has the expected positive sign, it is statistically insignificant. However, a quadratic term to capture a possible U-shaped functional form was not used.

All studies use the number of registered vehicles as a measure of highway usage. In addition, Knight uses the number of drivers, whereas Meyers includes vehicle miles traveled. Gamkhar includes several additional proxies for highway use that are not included in the other studies: the percentage of light motor vehicles, population density, and the percentage of population living in metropolitan areas. However, none of these factors was statistically significant. In general, only one of the use variables is statistically significant in each study, and no one measure is statistically significant across studies. In several instances the coefficient has a negative sign, although a positive relationship between highway usage and state spending would be expected.

Although highway grants require state matching, Knight and Gamkhar do not include highway matching rates as part of their models because they found that states' highway spending exceeds the amounts required for their federal grant allotments and that the grants, therefore, have only an income effect and no price effect. Meyers, in contrast, does include the effective matching rate (highway grants as a percentage of highway expenditures) and reports a price elasticity of one. Other grants may also have a price effect because programs such as Medicaid, Foster Care, and Adoption Assistance are all open-ended matching grants. Including the effective matching rate associated with other grant spending (i.e., other grants as a percentage of nonhighway spending) captures the potential price effect of other grants. The sign on the effective matching rate is expected to be negative because higher demand for other state services would reduce the demand for highway spending. These variables are statistically significant in both the Gamkhar and Meyers studies. The Knight study does not consider the tax price effect of nonhighway grant funding.

Only Knight, by including population in his model, includes a factor that could be interpreted as reflecting the cost-reducing effect of having more taxpayers sharing the cost of highway services. Neither Gamkhar nor Meyers includes such a factor.

In addition to a tax price effect, nonhighway grant funding may also have income effects. Both Meyers and Gamkhar include other nonhighway grants per capita in their models to capture the income effect of these grants. The income effect is expected to be negative on own-source spending, as some of these grants may be substituted into highway spending and supplant funding from state resources. The Knight study does not consider either price or income effects associated with nonhighway grant funding.

Only Knight includes variables that are intended to reflect differences in state preferences for highway spending that may be associated with the political party of the state governor and the partisan representation in the state legislature.
He finds the party of the state governor to be statistically significant at the 10 percent level, while the other political variables are not statistically significant.

Consistent with previous studies, we model state spending choices as being conditioned on states' fiscal capacities, the tax price faced by state voters, federal grant funding for highways and for other state services, and the preferences of state voters for highway spending. Because both theory and the results of previous studies suggest that federal grants and state spending decisions are jointly determined, we use an instrumental variables (IV) approach to estimate the fiscal effect of federal grants. To capture other factors that may be systematically associated with differences in state spending choices, we estimate the model using a fixed effects estimating procedure. The fixed effects procedure is intended to capture factors such as topographical differences and weather conditions across states that do not change over time and to capture other unmeasured factors with large cross-state variation that exhibit relatively little change over time. In addition, we include a time trend to capture trend changes in state spending that may not be captured by the other variables included in our model. The specific variables considered for our model are listed in table 6.

From one time period to another, various rules and regulations change in ways that may affect the ability of states to substitute federal grants for state spending. Given the range of estimates over different time periods reported in past research, we also want to test whether the rate of grant substitution, if found, systematically differs across the time periods included in our data. To see if the substitution rate differs over time, we introduce dummy variables for each of the time periods covered in our study into our model. We then multiply these dummy variables by the grants variables and include these interaction variables in the model. If statistically significant, these variables would provide evidence that substitution has varied from one time period to another.

The estimated effect of federal highway grants on state highway spending is measured by the regression coefficient associated with federal highway grants. As a consequence, the interpretation of that coefficient is directly affected by how the dependent variable, state expenditures, is defined. If we defined state highway spending narrowly as only capital expenditures on federal-aid highway projects, the federal grants coefficient in our model would be interpreted as the response of state capital spending to changes in federal highway aid. This approach, taken by Meyers, represents a definition of state spending that is consistent with the requirements of the federal-aid highway program, which restricts federal grants to authorized uses, such as capital investment on eligible federal-aid highway routes. Under this approach, grant funds that are used for purposes that are not eligible for federal aid would represent grant substitution, in the same way that increased spending for health and education and for state tax relief would represent grant substitution. Some policymakers may not view this as substitution, perhaps arguing that state transportation officials are better positioned to determine the best use of available funding for highway-related projects. Our analysis therefore uses a broader definition of state highway spending that includes all capital and maintenance spending on highways by state and local governments.
Thus, our measure of grant substitution counts as substitution only grant funds that are effectively used for nonhighway purposes. We adopt this approach for two reasons. First, we want to be conservative in our definition of grant substitution. A broader definition of state highway spending that includes state and local spending on highway projects not eligible for federal funding would yield a lower estimate for the substitution rate because some types of adjustments would not be treated as grant substitution. Second, an estimate of grant substitution that is based only on state (but not local) government spending would be affected by cross-state differences in the extent to which highway spending is centralized at the state level. Since there are large differences across states in the extent to which highway spending is centralized, we include local as well as state government spending so that our measure of highway spending would be comparable across states.

Because federal highway grants are provided on a reimbursement basis, we obtained from FHWA federal highway grant expenditures that are contemporaneous with states' reported own-source highway spending. As with state spending, we express federal grants in real per capita dollars, using the BEA chain-price index for state and local government streets and roads. Because federal grant expenditures, by definition, represent formula grant allotments from current and prior years, any lagged response in state spending to federal highway grant funds is already included in our grants variable. We therefore do not include lagged values of federal highway grants in our model.

Knight provides an economic argument explaining that state highway spending and federal grant funding are jointly determined because elected officials reflect the preferences of state voters/consumers both in state legislatures and in Congress. His study tests for and finds confirming evidence for his theoretical argument. Based on these findings, we also employ an IV estimator that provides a consistent estimate of the federal grant coefficient to measure the fiscal effect of federal grants. Using this approach, we estimate a first-stage instrumental variable equation that models federal highway funding in terms of exogenous variables that are expected to influence the distribution of federal grants. The instrumental variables include the exogenous variables from the state expenditure equation (e.g., fiscal capacity, the individual components of tax price, and preferences) and variables that are highly correlated with federal grants but uncorrelated with state highway spending (e.g., variables included in federal grant formulas and those that may affect the distribution of discretionary grants). Predicted values of federal grants, derived from the instrumental variables (highway grants) equation, are then used in lieu of actual grant values to correct for the bias in ordinary least squares (OLS) estimates of the federal grants coefficient in the state expenditure equation. The excluded exogenous variables we consider include state contributions to the highway trust fund and variables that are intended to reflect the influence of state representatives on the distribution of federal highway grants: tenure in Congress, state representation on transportation committees, and state representation in the majority party. The exogenous variables we consider are summarized in table 7.
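The mechanics of this two-stage approach can be illustrated with a small simulation. The sketch below uses simulated data and generic variable names, not the report's actual panel; it shows how predicted grant values from a first-stage regression are used in lieu of actual grants in the second stage.

```python
import numpy as np
import statsmodels.api as sm

# Illustrative two-stage least squares on simulated data. The grants
# variable is constructed to be correlated with the spending equation's
# error term (via u), so OLS on actual grants would be biased, while the
# two-stage estimate is consistent.
rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=(n, 2))        # instruments (e.g., trust fund contributions,
                                   # congressional committee representation)
x_exog = rng.normal(size=(n, 2))   # exogenous controls (e.g., income, tax price)
u = rng.normal(size=n)             # shared unobserved component
grants = z @ [1.0, 0.5] + x_exog @ [0.3, 0.2] + 0.8 * u + rng.normal(size=n)
spending = -0.5 * grants + x_exog @ [1.0, -0.4] + u + rng.normal(size=n)

# Stage 1: regress the endogenous grants variable on instruments and controls.
Z = sm.add_constant(np.column_stack([z, x_exog]))
grants_hat = sm.OLS(grants, Z).fit().fittedvalues

# Stage 2: use predicted grants in lieu of actual grants in the spending equation.
X = sm.add_constant(np.column_stack([grants_hat, x_exog]))
iv_fit = sm.OLS(spending, X).fit()
print(iv_fit.params[1])  # consistent estimate of the grants coefficient (~ -0.5)
```

In this construction the true grants coefficient is -0.5, which in the report's terms corresponds to a 50 percent substitution rate; the two-stage estimate recovers a value near it, whereas regressing spending on actual grants directly would not.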
Consistent with the state highway spending equation, we include real per capita income, real nonhighway grant funding, registered vehicles, licensed drivers, and vehicle miles traveled--including 1- and 2-year lagged values for each of these variables--and use a fixed effects estimating procedure. Fixed effects are intended to capture factors that have substantial variation across states with little variation over time. Examples would be factors such as state land area--a factor that has been part of highway funding formulas and that does not change over time--and constraints that are applied to funding formulas, such as the one-half of 1 percent minimum state grant that is included in highway funding formulas (see table 1).

Consistent with previous studies, we use real per capita personal income to measure states' taxing capacities. Unlike previous studies, we also include the squared value of per capita income to capture the possibility that demand for highways does not increase in proportion to increases in income, perhaps signifying that as basic transportation needs are met, increases in income are increasingly allocated to other uses such as health and education. Personal income is published by the BEA in the Department of Commerce. We include 1- and 2-year lagged values of real per capita income in the model to allow for lagged responses to changes in income and also to reflect cyclical changes affecting the level of state revenues.

The tax price faced by state voters/consumers is reflected in a number of variables included in the model. Highway usage is reflected by vehicle miles traveled on state highways and by registered vehicles and licensed drivers in the state, as reported by FHWA. We include 1- and 2-year lagged values of each of these variables to allow for lagged responses in spending to changes in highway usage. Consistent with prior studies, we do not include the matching rate on highway grants, because states spend more than the required federal match--and therefore pay 100 percent of the cost of funding additional highway projects--and because highway matching rates vary little both over time and across states. However, we do include the effective matching rate on other grant funding to capture the price effect of other grant funding. Medicaid, Foster Care, and Adoption Assistance, for example, are open-ended matching programs with price effects that may encourage states to spend less on highways in order to provide matching funds for these and possibly other matching programs. We include 1- and 2-year lagged values to capture these effects. Using data from the Census Bureau, we measure the effective matching rate for nonhighway spending by deducting states' federal highway grants from their total federal grants and expressing the net amount (nonhighway grants) as a proportion of each state's nonhighway spending, which is also calculated by deducting highway spending from total spending (a small computational sketch of this measure follows below).

Although previous studies do not include the size of the highway network to be maintained, we expect the per capita cost of maintaining an existing highway network to be higher in states with more miles of road per capita. Therefore, we include this variable in our model along with its squared value to test for evidence of per capita costs varying with the scale of the road network--that is, economies or diseconomies of scale. We obtained data on total lane miles of state highways from FHWA.
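The effective matching rate just described reduces to simple arithmetic on four reported aggregates. A minimal sketch, with hypothetical dollar amounts in place of actual Census Bureau data:

```python
import pandas as pd

# Effective matching rate for nonhighway spending, computed as described in
# the text; the figures below are hypothetical, not actual state data.
df = pd.DataFrame({
    "total_federal_grants": [8.0, 12.0],    # $ billions
    "federal_highway_grants": [1.0, 1.5],
    "total_spending": [40.0, 55.0],
    "highway_spending": [3.0, 4.0],
})
df["nonhighway_grants"] = df["total_federal_grants"] - df["federal_highway_grants"]
df["nonhighway_spending"] = df["total_spending"] - df["highway_spending"]
df["effective_match_rate"] = df["nonhighway_grants"] / df["nonhighway_spending"]
print(df["effective_match_rate"])  # e.g., 7/37 ≈ 0.189 and 10.5/51 ≈ 0.206
```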
In addition to federal highway grants, states receive federal grants for a variety of other purposes, including health, education, and welfare. While it is possible that state highway funds may be substituted into spending for other state services, it is also possible that some state funds that would otherwise have been used for other purposes may be redirected into highways. For this reason, we also include other federal grant funding in our model to capture the income effect of these grants and their potential substitution into highway spending. While some of this aid is provided on a reimbursement basis (Medicaid, for example), other grants can remain eligible for expenditure in subsequent years. For this reason, we include 1- and 2-year lagged values of other federal grants to capture these potential effects. Other federal grants are also expressed in real per capita dollars.

The political culture of states may affect both the overall level of spending on public services and spending priorities for different types of services, such as highways versus education and health care. Differences in political culture and spending priorities may be relatively stable over time, in which case the fixed effects adjustment may adequately control for cross-state differences in these spending preferences. Nonetheless, in addition to including fixed effects, we have also included variables that may be associated with differences in political culture. For this purpose, we have included dummy variables that are equal to one if the state governor is Democratic and zero otherwise, and the percentage of the state Senate and state House that is represented by the Democratic Party. With the exception of some independents, office holders are either Democratic or Republican. Therefore, the choice of using the percentage of Democrats or Republicans is arbitrary and has no effect on the statistical results except to change the sign of the regression coefficient. We obtained these data from the Elections section of the Census Bureau's Statistical Abstract.

To capture trend changes in state spending that cannot be captured by the other variables included in our model, while allowing for a possible curvilinear trend, we have also included time, time squared, and the inverse of time. Finally, we include a dummy variable for the state of Utah that was equal to 1 during the years 1997 through 2000 and zero otherwise to account for the unusually large increase in highway spending in that state just prior to the 2002 Winter Olympics. The means and standard deviations for the variables included in our statistical model are shown in table 8.

Because we use time series and cross-section data to estimate the model, we expect autocorrelation to bias the estimates of the standard errors associated with variables in our model. To reduce the problem of heteroscedasticity, we normalize variables by expressing them on a per capita basis (except for those already expressed in ratio or percentage terms). We conducted statistical tests to determine if our data are affected by autocorrelation and found statistical evidence of its presence. Therefore, we estimate all our models using a correction for autocorrelation. As noted above, we use a fixed effects procedure that allows for a separate constant term associated with each state to represent differences in state funding that are unique to each state and independent of the other variables included in the model.
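A compact way to express such a specification is with dummy-variable (least squares) fixed effects. The sketch below is a simplified rendering with hypothetical variable names; the actual model includes many more controls and lags than shown, and the period interaction terms correspond to the test, described earlier, of whether the substitution rate varies across time periods.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Simplified rendering of the fixed-effects specification described above.
# `panel` is a hypothetical DataFrame with one row per state-year; variable
# names are illustrative, and the real model includes many more controls
# (lags, tax price components, other grants, and so on).
def fit_fixed_effects(panel: pd.DataFrame):
    panel = panel.copy()
    panel["time2"] = panel["time"] ** 2
    # Utah dummy equal to 1 for 1997-2000, as in the text, to capture the
    # unusually large pre-Olympics spending increase.
    panel["utah_olympics"] = ((panel["state"] == "UT")
                              & panel["year"].between(1997, 2000)).astype(int)
    # C(state) adds a separate constant for each state (the fixed effects);
    # grants * C(period) adds period-specific grant coefficients to test
    # whether the substitution rate varies across time periods.
    formula = ("own_source_spending ~ grants * C(period) + income "
               "+ C(state) + time + time2 + utah_olympics")
    return smf.ols(formula, data=panel).fit()
```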
The fixed effects coefficients of our model represent state differences in highway spending, after controlling for the other explanatory variables in our model. They are intended to capture the effect of variables that have comparatively little variation over time but are systematically associated with differences in spending across states. To identify those state characteristics that are systematically related to the fixed effects associated with state highway expenditures, we perform an additional stepwise regression analysis that regresses our estimated fixed effects on the following explanatory variables:
Lane miles per capita;
Percentage of Democrats in the state House;
Percentage of Democrats in the state Senate;
Federal nonhighway grants per person; and
Ratio of federal nonhighway grants per person to state nonhighway spending.
We use the mean value of 21 observations from 1980 to 2000 per state to represent each variable in explaining our estimated fixed effects. We report the results for the second stage expenditure equation without a correction for autocorrelation in table 10. Again, regression results for variables that are statistically significant at the 5-percent level appear in bold in the table. The model explains 78 percent of the variation in state own-source highway spending, and fixed effects alone account for 69 percent of the variation. The estimated substitution rate associated with federal grants is 84 percent and is statistically significant. That is, other things being equal, a dollar increase in federal highway grants is associated with an 84-cent reduction in highway spending from state own-source revenues. Alternatively, the coefficient also implies that states replace 84 cents of each dollar decline in federal funding. These results are similar to the findings reported by Knight, who reported a substitution rate of 91 percent, higher than the substitution rates reported by Gamkhar. However, the model also indicates the presence of autocorrelation (ρ=0.53, shown in the last row of table 10). As a consequence, the standard error for the grants coefficient is biased downward, which raises the prospect that the grants coefficient may not be statistically significant. We therefore re-estimated the model adjusting for autocorrelation using two methods: Cochrane-Orcutt and Newey-West. The results are reported in table 11. The Cochrane-Orcutt procedure produces a feasible generalized least squares estimate of the grants coefficient and its standard error. With this procedure, the point estimate of the substitution rate drops from 84 to 39 percent and is statistically insignificant (shown in the second column of tables 10 and 11). The Newey-West correction for autocorrelation does not involve re-estimating the grants coefficient, so the estimated substitution rate remains at 84 percent. The coefficient continues to be statistically significant after correcting for the bias in its standard error. The full model includes over 30 variables when all the lags are included, and many of these variables are statistically insignificant. To simplify the model, we performed F-tests for the statistical significance of variables and removed those that were not statistically significant at the 10-percent level. We tested variables that were included with 1- and 2-year lags as a group and removed them as a group if found insignificant. We summarize the results of these tests in table 12. 
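A sketch of the two autocorrelation corrections named above, using statsmodels; X and y are assumed to hold the regressors (with state dummies) and own-source spending from the hypothetical panel. GLSAR is used here only as a stand-in for the report's Cochrane-Orcutt procedure: the report's implementation drops the first observation per state, which this single-series simplification does not.

```python
import statsmodels.api as sm

# Newey-West: the OLS point estimates are unchanged, but the standard
# errors become HAC (heteroscedasticity- and autocorrelation-consistent).
nw = sm.OLS(y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 2})

# Cochrane-Orcutt-style correction: iterate between estimating an AR(1)
# parameter rho from the residuals and re-estimating the coefficients,
# so the point estimates themselves change (as in the drop from 0.84 to 0.39).
co = sm.GLSAR(y, X, rho=1).iterative_fit(maxiter=10)
```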
The primary result is that neither the highway usage variables nor the variables intended to capture state differences in preferences are statistically significant. The only variables that are systematically associated with differences in state highway spending are the variables reflecting financial resources that could be used to fund highways. The result of removing statistically insignificant variables is shown in table 13. With the Cochrane-Orcutt method for autocorrelation correction, the grant substitution coefficient is 0.50, and with the Newey-West correction the coefficient is 0.58; both estimates are statistically significant at the 1 percent significance level. Thus, the difference in estimated substitution rates under the two methods narrowed with the simplified model. To be conservative in our findings regarding grant substitution, we are using the lower estimate of 0.50, based on the Cochrane-Orcutt method, as our preferred estimate. The 95 percent confidence interval ranges from 12 to 88 percent, which includes Gamkhar's estimate of 63 percent but not Knight's higher estimate of 91 percent. Because the Cochrane-Orcutt method does not include the first observation for each state, these estimates are based on observations from 1983 through 2000. The full model includes per capita income squared to test for nonlinear effects of income on state spending. However, the squared term is statistically insignificant. We conclude that state spending is proportional to income, which implies that both high- and low-income states respond to changes in income in roughly the same proportion, once other factors affecting state spending choices are taken into account. The lag structure on per capita income indicates that the largest increase occurs in the first year, but prior year changes in income also affect state expenditures (see table 13). The effect of nonhighway grants enters into the model in two ways: the absolute size of other grant funding, measured in per capita terms, representing the income effect of other-grant funding; and the ratio of the nonhighway grants to state nonhighway spending, representing the tax price effect of other-grant funding. The net income effect of other grants is small but positive. The coefficients on the nonhighway grant variables sum to a small positive effect, with a statistically significant positive effect in the current year and a statistically significant negative effect in year 2. This result is contrary to expectations in that some of the funding from other federal grants would be expected to be used as a substitute for states' own highway spending. In contrast, the tax price effect of other grants is strongly negative, indicating that matching requirements associated with other federal programs, such as Medicaid, result in states spending less of their own resources on highways. For every dollar spent by a state, the federal government reimburses the state for a percentage of the cost, reducing the tax price of these services to the state. The lower price for other public services raises the demand for those services and reduces the demand for highways, suggesting that highways and other public services are substitute goods. We enter the time trend variable into the model in linear, quadratic, and inverse form to provide a flexible functional form. The inverse term was statistically insignificant, and we dropped it from the model. 
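The group F-tests used to simplify the model (described at the end of the preceding passage) might look like the following sketch, reusing fe_model and the illustrative variable names from the earlier code.

```python
# Test a variable's current and lagged terms jointly; drop the group if it
# is not significant at the 10-percent level, as described above.
joint = fe_model.f_test("vmt = 0, vmt_l1 = 0, vmt_l2 = 0")
if joint.pvalue > 0.10:
    print("vmt and its lags are jointly insignificant; remove the group")
```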
The coefficients on the linear and quadratic term indicate a negative trend for most of the years in state highway spending when other factors affecting state spending are taken into account. As we noted in our summary of previous studies, Meyers reports no evidence of substitution into nonhighway spending during the 1976 to 1982 time period. Gamkhar, based on data from 1976 through 1990, reports higher rates of substitution, and Knight's study, based on data from 1983 through 1997, reports even higher rates of substitution. We therefore tested for evidence of increasing substitution rates using the Cochrane-Orcutt method, which, as discussed earlier, uses the estimation period 1983 to 2000. The results are shown in table 14. To test whether the substitution rate has increased over the period of our sample data, we divided our sample into four estimation periods corresponding with the authorization periods for the federal-aid highway program, the last two being 1991 to 1997 and 1998 to 2000. Allowing the substitution rate to vary over time improves the explanatory power of the model, increasing the R². When the substitution rate is allowed to vary over time, the time trend coefficients become statistically insignificant. This lack of significance suggests that there is no negative time trend in state spending once the increasing substitution rate associated with different time periods is taken into account. We use an IV estimator because we assume federal grants and state spending are jointly determined. To test the reliability and validity of the IV estimator, we ran three additional statistical tests: (1) a weak instruments test, (2) a test for exogeneity of excluded exogenous instruments, and (3) a test for endogeneity of federal grants. The weak instruments test is intended to verify that the excluded exogenous instrumental variables included in the grants equation are correlated with federal grants. If they are not, the IV estimator provides no advantage over a simple (and more efficient) OLS estimator. To test the significance of the excluded exogenous variables, we calculated the partial R² of these variables in the first stage grants equation. The second test, for exogeneity of the excluded instruments, compares the estimated federal grant coefficients for each time period using the full set of excluded exogenous variables with coefficients derived from using a subset of instruments composed of predetermined variables that can safely be assumed to be exogenous. A finding that the set of grant coefficients from the two models are not statistically different from one another lends support for the hypothesis that the full set of excluded exogenous instruments are independent of the error term in the second stage expenditure equation. For this test, we used a subset of excluded exogenous variables. Differences between the grant coefficients for each time period using all instruments, and the coefficients using the subset of exogenous instruments, were not statistically significant and are quantitatively very similar to one another. Thus, we found no evidence that our excluded exogenous instruments were correlated with the error term of the expenditure equation. Finally, we conducted a Hausman test for the endogeneity of the federal grant variable. This test consists of comparing the IV estimate of the grant coefficient for each time period with the corresponding grant coefficient based on the OLS estimate. If the differences were not statistically significant, there would be little justification for using the IV estimator. 
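The IV machinery and the weak-instruments check can be expressed compactly. The following hand-rolled sketch (not the report's estimator) shows two-stage least squares and the partial R² of the excluded instruments, with a comment on the Hausman comparison; all array arguments are hypothetical.

```python
import numpy as np

def _resid(A, v):
    # Residuals from a least squares regression of v on the columns of A.
    return v - A @ np.linalg.lstsq(A, v, rcond=None)[0]

def two_sls(y, X_exog, x_endog, Z):
    """Hand-rolled 2SLS. X_exog: included exogenous regressors (with a
    constant column); x_endog: federal grants; Z: excluded instruments."""
    W = np.column_stack([X_exog, Z])
    x_hat = W @ np.linalg.lstsq(W, x_endog, rcond=None)[0]   # first stage
    X2 = np.column_stack([X_exog, x_hat])                    # second stage
    return np.linalg.lstsq(X2, y, rcond=None)[0]

def partial_r2(x_endog, X_exog, Z):
    # Weak-instruments check: the gain in first-stage fit from adding Z.
    e_restricted = _resid(X_exog, x_endog)
    e_full = _resid(np.column_stack([X_exog, Z]), x_endog)
    return 1.0 - (e_full @ e_full) / (e_restricted @ e_restricted)

# A Hausman-style check then compares the 2SLS grant coefficient with its
# OLS counterpart; a significant difference supports treating grants as
# endogenous, as the report's test found.
```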
This test yielded statistically significant differences between the two sets of estimates, lending support for the assumption that federal grants and state spending are jointly determined. The results of each of the three tests are summarized in table 15. We also estimated alternative models that allow the substitution rate to vary according to state size (measured by population), per capita income, and state per capita spending on mass transit to test for a varying substitution rate related to these factors. The results of these models were negative: overall, we found no evidence that substitution rates systematically differ by either population size or the level of mass transit spending. We did obtain higher estimates of substitution rates in states with higher per capita income (56-66 percent in high income states compared to just over 30 percent in lower income states), but these estimates were not statistically different from the average substitution rate of 50 percent found for the period from 1983 to 2000. In the models reported above, state fixed effects account for most of the variation in state highway spending. Based on our preferred model (the model in table 13 using the Cochrane-Orcutt autocorrelation correction method), differences in state spending associated with these fixed effects can be as much as $400 per capita. However, these fixed effects are difficult to interpret since they represent all factors that are systematically related to cross-sectional differences in state spending not included in the model (e.g., geography, weather, and other variables that have substantial cross-sectional variation). To determine if the differences in state spending measured by the fixed effects of our model are systematically associated with particular state characteristics, we performed a stepwise regression using the fixed effects from our preferred model as the dependent variable. Of the 12 variables we considered, 3 are statistically significant: per capita highway lane miles, per capita income, and heating degree days (see table 16). In the first step, lane miles account for 51 percent of the cross-state variation in our fixed effects; the second step adds per capita income, increasing the explained variation to 68 percent; and the third step adds heating degree days, raising the variation explained to 77 percent. The remaining variables are statistically insignificant and provide little additional explanatory power. Based on our model of state highway spending, we found a number of factors that are systematically related to state highway spending and, in turn, a state's level of effort to fund highways from state resources. Perhaps most importantly, more federal highway aid is associated with less state effort to fund highways from state resources once other factors related to state spending are taken into account. Our conservative estimate of grant substitution suggests that about half the increase in federal highway grants is used to reduce states' level of highway spending effort. Increases in federal grant funding for nonhighway purposes, such as health, education, and welfare, are also associated with reduced effort on the part of states to fund highways. We also found that states with a higher percentage of their nonhighway spending funded by federal grants reduced their effort to fund highways, presumably to provide matching funds for programs like Medicaid, which is an open-ended matching program. 
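The second-stage stepwise regression could proceed as in this forward-selection sketch, where fixed_effects holds the estimated state fixed effects and chars holds candidate state characteristics (illustrative names; the report considered 12 candidates).

```python
import statsmodels.api as sm

def forward_stepwise(fixed_effects, chars, steps=3):
    """At each step, add the candidate that most raises R-squared when the
    fixed effects are regressed on the chosen characteristics."""
    chosen, remaining = [], list(chars.columns)
    for _ in range(steps):
        r2 = {
            c: sm.OLS(fixed_effects,
                      sm.add_constant(chars[chosen + [c]])).fit().rsquared
            for c in remaining
        }
        best = max(r2, key=r2.get)
        chosen.append(best)
        remaining.remove(best)
        print(f"added {best}: cumulative R^2 = {r2[best]:.2f}")
    return chosen

# With the report's data, the three steps would add lane miles per capita,
# per capita income, and heating degree days (R^2 of 0.51, 0.68, 0.77).
```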
In addition to federal grants, we found two cost factors that are systematically related to states' levels of highway spending effort, other things being equal. States with large highway networks, as measured by the number of highway lane miles, systematically spend more per capita. Presumably, a larger road network is more expensive to maintain, and states must therefore devote a larger share of their funding capacity to maintaining their highway network. In addition, we found that colder than average temperatures, as measured by heating degree days, are associated with higher state spending, suggesting that colder weather creates more wear and tear on the highways and hence the need for states to make a greater spending effort to maintain their highway network, other things being equal. Finally, we found that high per capita income states make less effort than states with lower incomes. This result is, perhaps, not surprising since the same effective tax rate (level of effort) generates more revenues in high-income states than in states with lower incomes. Thus, the same level of highway spending can be funded with less effort in high-income states, and low-income states compensate by undertaking a greater effort to fund highways from state resources. One program option that could be designed to reduce substitution would be to modify the matching requirement to leverage additional state highway spending. While the use of matching requirements as an economic tool is designed to leverage additional spending, the federal-aid highway program's current matching requirements, which typically call for 20 percent state funding and 80 percent federal funding of eligible projects, permit substitution because most states' highway funding is already higher than 20 percent of their total highway funds. The matching requirement, therefore, does not provide states with an incentive to increase or even maintain their level of funding in order to receive additional federal funds. Instead, states are free to substitute federal funds for funds they would have spent from their own resources and to use their own funds in other ways. For the matching requirement to leverage additional state spending, the states' matching portion would have to be set high enough so that states would not receive additional federal funds without spending beyond what they would have otherwise spent without additional federal assistance. This objective cannot be perfectly achieved because models of substitution, like any models, produce estimates that are subject to uncertainty, and there is no way to objectively determine with certainty what states would have spent in the absence of increased federal funding. However, the likelihood that increased federal funding will leverage additional state highway spending can be increased in several ways. The most direct approach would be to change the current 80 percent federal/20 percent state match ratio to a matching ratio closer to the 45 percent federal/55 percent state division of funding in fiscal year 2002. This would likely mean that some states (those whose spending is less than 60 percent of combined federal and state spending) would be required to increase their highway spending in order to qualify for any increased federal funding, while other states whose spending is already over 60 percent of combined federal and state spending would not have to increase or maintain their spending in order to receive increased federal funds. 
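The arithmetic behind these match-ratio options is simple: a required state share s obligates s/(1-s) state dollars per federal dollar. A small worked illustration:

```python
# State dollars required per federal dollar under a required state share s.
def state_dollars_per_federal_dollar(state_share):
    return state_share / (1.0 - state_share)

print(state_dollars_per_federal_dollar(0.20))   # 80/20 match: $0.25
print(state_dollars_per_federal_dollar(0.40))   # 60/40 match: about $0.67
print(state_dollars_per_federal_dollar(0.60))   # 40/60 match: $1.50
```

At the current 20 percent share, a state that already spends well above 25 cents per federal dollar faces no marginal requirement, which is the substitution opening the text describes.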
Increasing the required state match from 20 percent to 60 percent might require a few states, whose state highway funding levels are currently a comparatively small proportion of their total highway spending, to more than double their current level of highway spending to avoid losing federal funds. If increases of this magnitude were deemed too extreme, a more moderate increase in the state match could be established. For example, raising the state matching share to 40 percent instead of 60 percent would require smaller funding increases in states whose state and local spending is currently a smaller proportion of total highway spending, but it would also reduce the number of states that would be required to increase their level of funding in response to increased federal funding. Another drawback of simply increasing state matching requirements is that even substantial increases in the requirements (raising the required state match from 20 percent to 60 percent) would not be likely to leverage additional state spending in all states. An alternative that would increase the likelihood of leveraging additional state spending in all states would be to continue with the 80 percent federal/20 percent state matching ratio but stipulate that only state spending in excess of what the state had spent for highways in an appropriate base time period be counted toward its federal matching requirement. This approach has the advantage of maintaining the current 20 percent state matching rate, yet provides a leveraging incentive in all states rather than in only those states with below-average spending. However, it might have the effect of making it easier for those states that were not spending much in the base time period to increase their spending and receive increased federal funds than it would be for those states whose spending was already high. Another approach that would reduce substitution by creating an incentive for states to increase their own highway spending would be to directly link the level of federal highway aid to each state's level of highway funding effort. This link could be achieved by setting aside a fixed percentage of formula grant funding to be distributed in accordance with states' highway funding efforts. As stated in the text, to avoid penalizing low income states, each state's highway funding effort could be defined as the state's highway spending compared to some measure of the state's taxing capacity. There are a variety of indicators that could serve as a measure of states' funding capacity. The most comprehensive that is available annually is Total Taxable Resources (TTR), which is produced annually by the Department of the Treasury and used to distribute substance abuse and mental health block grants. Less comprehensive measures would include Gross State Product (GSP) and Personal Income (PI), both published annually by the Department of Commerce. This approach could be implemented in a variety of ways. One approach would be to compare each state's funding effort to the average effort of all states. If, for example, $100 per capita were set aside and distributed in this way, states whose highway spending efforts were above the average spending effort would receive funding proportionally above the $100 per capita average, and states whose effort was below the average spending effort would receive funding proportionally below the $100 per capita average. 
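The $100-per-capita set-aside example can be expressed as a simple proportional rule; the effort measure (own highway spending relative to TTR or another capacity indicator) and the state values below are hypothetical.

```python
# Distribute a $100-per-capita set-aside in proportion to each state's
# highway funding effort relative to the all-state average effort.
def set_aside_grant(effort, avg_effort, base=100.0):
    return base * effort / avg_effort

efforts = {"A": 0.012, "B": 0.008, "C": 0.010}   # hypothetical spending/TTR
avg = sum(efforts.values()) / len(efforts)
grants = {s: set_aside_grant(e, avg) for s, e in efforts.items()}
# State A (above-average effort) receives more than $100 per capita;
# state B (below-average effort) receives less.
```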
Initially, those states with an above-average highway funding effort would be rewarded with higher per capita funding, and those states with a below-average highway funding effort would be penalized with lower per capita funding. In following years, each state's highway spending effort would continue to be compared to the average state highway spending effort, so that states whose funding effort rose relative to the national average would automatically be rewarded with higher per capita funding, while states whose effort fell relative to the national average would automatically be penalized. Distributing the set-aside in this fashion would, in effect, put all states in competition with one another, automatically rewarding states whose effort rose compared to the national average and penalizing states whose effort fell compared to the national average. The approach just described would reward those states whose funding effort is currently high and penalize those whose effort is currently low. However, this approach could be modified to avoid rewarding or penalizing states based on their current level of effort. Instead, the linking of federal funds to state effort could be based only on future changes in each state's level of highway funding effort. In this approach, each state's highway funding effort would be compared to its own effort during an initial time period, such as the year (or an appropriate average of years) prior to initiation of the set-aside. For example, all states could be awarded the same per capita grant amount in the first year of the set-aside program. Then, in future years, each state's funding effort would be compared to its own funding effort in the first year of the set-aside program and adjusted accordingly. Each state whose funding effort increased compared to the initial base year would receive an increase in federal funding proportionate to the increase in its own spending. Such an approach would, in effect, put each state in competition with the effort it made in the base period. If both approaches to rewarding state highway funding effort were deemed desirable, a combination of the two approaches could be employed. The strength of the incentive would depend on the amount of total formula funding distributed through the set-aside program; the greater the amount of funding distributed in this manner, the larger the financial consequences to states of changing their level of highway funding effort. If, instead of seeking to stimulate additional state spending on highways, the goal of federal policy makers is for federal grants to supplement state spending on highways, then instituting a maintenance-of-effort (MOE) provision may be a more appropriate approach. MOE provisions require states to maintain existing levels of state spending on an aided program as a condition of receiving federal funds. As a tool, MOE requirements are designed not to stimulate additional state spending but to guard against grant substitution so that increased federal spending will supplement rather than replace states' own spending. As with matching requirements, this objective cannot be perfectly achieved because models of substitution, like any models, produce estimates that are subject to uncertainty, and there is no way to objectively determine with certainty what states would have spent in the absence of increased federal funding. 
However, the likelihood that increased federal funding will not be used as a substitute for state spending can be strengthened if MOE requirements are designed appropriately. In previous work, we concluded that, to be effective, MOE provisions should define a minimum level of state spending effort that can be objectively quantified based on reasonably current expenditures on the aided activity. Adjusting the MOE requirement for inflation in program costs would ensure the minimum spending level is maintained when measured in inflation-adjusted dollars. This could be achieved by defining a state's base spending level as the amount spent per year during a recent historical period and then adjusting that base spending level for inflation. One drawback of an MOE provision is that basing it on a historical spending period could result in a base spending level that represents unusually high spending for some states, effectively locking them into continued high spending in future years. This could be ameliorated, however, by establishing waivers for states that are able to demonstrate that spending in the base period chosen is unusually high, to allow a more "typical" spending level for purposes of the MOE provision. Developing an indicator of state highway spending effort to link federal funding to state spending, or establishing a state's base spending level to design an MOE requirement, would require careful consideration. Among other issues, in defining these indicators, consideration would have to be given to whether to measure:
Capital expenditures for highways, or capital plus maintenance expenditures;
Expenditures on all state roads, or for federal-aid roads only;
State government expenditures only, or spending by state and local governments; and
Total expenditures, or expenditures normalized on a per capita, per lane mile, or other basis.
In addition, an indicator of state funding effort or a state's base funding level for an MOE provision should, to the extent possible, be established by measuring spending levels that are typical rather than unusually high or low. Highway capital expenditures in a state can increase or decrease dramatically from year to year and may be unusually high or low for a variety of reasons (e.g., Utah's unusually high spending during preparations for the 2002 Winter Olympics, or a state particularly hard hit by recession that drops spending below its usual effort). To some extent, such factors can be taken into account by defining a state's funding effort or base level of spending for an MOE provision using multi-year averages so that such unique circumstances are averaged out. In addition to those named above, Jay Cherlow, Catherine Colwell, Gregory Dybalski, Edda Emmanuelli-Perez, Scott Farrow, Donald Kittler, Alex Lawrence, Sara Ann Moessbauer, Robert Parker, Paul Posner, Teresa Renner, Stacey Thompson, and Alwynne Wilbur made key contributions to this report. 
In 2004, both houses of Congress approved separate legislation to reauthorize the federal-aid highway program to help meet the Nation's surface transportation needs, enhance mobility, and promote economic growth. Both bills also recognized that the Nation faces significant transportation challenges in the future, and each established a National Commission to assess future revenue sources for the Highway Trust Fund and to consider the roles of the various levels of government and the private sector in meeting future surface transportation financing needs. This report (1) updates information on trends in federal, state, and local capital investment in highways; (2) assesses the influence that federal-aid highway grants have had on state and local highway spending; (3) discusses the implications of these trends for the federal-aid highway program; and (4) discusses options for the federal-aid highway program. The Nation's investment in its highway system has doubled in the last 20 years, as state and local investment outstripped federal investment--both in the amount of spending and in its growth. In 2002, states and localities contributed 54 percent of the Nation's capital investment in highways, while federal funds accounted for 46 percent. However, as state and local governments faced fiscal pressures and an economic downturn, their investment from 1998 through 2002 decreased by 4 percent in real terms, while the federal investment increased by 40 percent in real terms. Evidence suggests that increased federal highway grants influence states and localities to substitute federal funds for funds they otherwise would have spent on highways. Our model, which expanded on other recent models, estimated that states used roughly half of the increases in federal highway grants since 1982 to substitute for state and local highway funding and that the rate of substitution increased during the 1990s. Therefore, while state and local highway spending increased over time, it did not increase as much as it would have had states not withdrawn some of their own highway funds. These results are consistent with our earlier work and with other evidence. For example, the federal-aid highway program creates the opportunity for substitution because states typically spend substantially more than the amount required to meet federal matching requirements--usually 20 percent. Thus, states can reduce their own highway spending and still obtain increased federal funds. These trends imply that substitution may be limiting the effectiveness of strategies Congress has put into place to meet the federal-aid highway program's goals. For example, one strategy has been to significantly increase the federal investment and ensure that funds collected for highways are used for that purpose. However, federal increases have not translated into commensurate increases in the Nation's overall investment in highways, in part because while Congress can dedicate federal funds for highways, it cannot prevent state highway funds from being used for other purposes. GAO identified several options for the future design and structure of the federal-aid highway program that could be considered in light of these issues. For example, increasing the required state match, rewarding states that increase their spending, or requiring states to maintain levels of investment over time could all help reduce substitution. 
On the other hand, states might be better able to meet a variety of needs and fiscal pressures if they were provided with funds through a more flexible federal program--this could also reduce administrative expenses associated with the federal-aid highway program. While some of these options are mutually exclusive, others could be enacted in concert with each other. The commission separately approved by both houses of Congress in 2004 may be an appropriate vehicle to examine these options.
AD is a kind of dementia whose essential feature is the development of multiple cognitive deficits, including memory impairment and at least one other deficit, such as impaired language functioning (aphasia). The definition of dementia also requires that the condition be severe enough to cause a significant impairment in social or occupational functioning that represents a decline from a previous level of functioning. Common clinical signs of dementia include emotional and behavioral disturbances. AD is a dementia of gradual onset and progressive decline. It may be difficult to distinguish clinically between mild—that is, early—AD and normal aging, but severe AD is characterized by a need for much help with personal care as the result of incontinence and almost total lack of comprehension of the environment. AD is said to be differentiated from other dementias on the basis of its cause, but that cause is not, in fact, well understood. AD is accepted as a distinct disease entity because AD patients manifest specific kinds of abnormalities in the brain—observable only in those who are autopsied or undergo a brain biopsy, a rare procedure—that differ from the abnormalities found in other dementias with better understood causes. The prevalence of AD is defined as the number of people in a specific population who suffer from the disease at some specified time. It is often expressed as a rate: the number of cases of the disease existing at a given point in time divided by the total population at that same time. When the number of cases in the target population is too costly to count, prevalence may be estimated by testing, in a prevalence survey, a representative sample of the population. An alternative is to develop a register of people seeking services, but this does not work well for AD because many cases are not treated. The restriction of AD prevalence surveys to the elderly population, 65 years of age or older, makes sense because the majority of cases of AD are elderly. In addition, because AD prevalence tends to increase sharply with age, doubling about every 5 years, at least over the age range of 65 to 85 years, it is common to estimate age-specific prevalence rates of AD. Thus, the population studied is divided into age groups—for example, 65 to 69 years, 70 to 74 years, 75 to 79 years—and a prevalence rate is estimated for each group. This is especially important when comparing AD rates across groups so that differences in prevalence stemming from differences in age distribution can be separated from those stemming from real health differences. Although the dependence of AD rates on gender is not as well established in the scientific literature as dependence on age, there is some tendency for prevalence to be greater for women than for men; therefore, rates that are specific to both age and gender are of interest. The reasons for this tendency are not known. It may be that (1) women are more likely than men to contract the disease, or (2) women live longer once they get AD and are therefore more likely to be counted when a prevalence survey is conducted, or (3) both of these factors operate. When severity is measured, people with AD are categorized into degrees or levels of illness; thus, for every specific prevalence rate, several categories are used to describe the different levels of severity. For the dementias (including AD), descriptive categories like mild, moderate, and severe are commonly used to indicate how severe a person's AD is. 
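As a toy illustration of age- and gender-specific prevalence rates (all numbers hypothetical): each rate is simply the cases found in a survey cell divided by the number sampled in that cell.

```python
# Hypothetical survey counts: (cases, sample size) per gender/age cell.
sample = {
    ("F", "65-69"): (4, 500), ("F", "70-74"): (9, 450),
    ("M", "65-69"): (3, 480), ("M", "70-74"): (6, 430),
}
rates = {cell: cases / n for cell, (cases, n) in sample.items()}
# Rates roughly double across the 5-year age step, mirroring the pattern
# described in the text; women's rates here exceed men's at each age.
```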
Sometimes a borderline category, "questionable," is used for those whose impairment is not great enough to qualify as even mild AD. Today, there are standardized systems for rating severity, often using such descriptive categories. According to one system, the Clinical Dementia Rating (CDR), a person with mild AD needs only prompting in personal care activities, a person with moderate AD needs some assistance, and a person with severe AD needs much assistance and is frequently incontinent. The results of our meta-analysis, based on the pooling of the data of individuals from each of the studies, were used to project the numbers of Americans, in 1995, with (1) AD of any level of severity (mild, moderate, or severe) and (2) moderate or severe AD (see table 1). The AD prevalence estimates are generated by multiplying prevalence rates (discussed in the section below called "Estimated Prevalence Rates") by the corresponding age- and gender-specific 1995 estimates from the Bureau of the Census. When these results are summed over the several age intervals and both genders, the overall estimate for Americans 65 years of age or older with any AD is 1.9 million. Of these 1.9 million cases, an estimated 1.1 million have moderate or severe AD. To see how this overall prevalence estimate compares with the projections that would be derived from individual studies, we calculated comparable estimates when it was possible to do so. Most of the people with AD—58 percent, or 1.1 million—fall between the ages of 75 and 89. Within this age group, it was possible to project from 9 of the 15 studies dealing with any AD and provide age- and gender-specific estimates for the population based on each study. Two of the studies yield estimates below 1 million. Six fall between 1.1 million and 1.4 million; one (the East Boston study) provides the basis for an estimate of 3.2 million. (For further detail, see app. VIII.) These numbers correspond to overall percentages for Americans 65 years of age or older of 5.7 percent with any AD and 3.3 percent with moderate or severe AD. When adjusted for the cases of mixed dementia and missed cases not included in some of the studies, these percentages become 6.3 and 4.1, respectively. As noted, individual studies of the AD prevalence rate have produced varied estimates of the percentage of elderly with any AD, ranging from less than 2 percent to 12 percent. We also developed projections into the next century of the number of AD cases and the number of AD cases requiring assistance. We derived these results from the prevalence rates, which we used in conjunction with age- and gender-specific population projections from the Bureau of the Census. These projections, which take the aging of the U.S. population into account, are presented in 5-year intervals until 2015. As shown in table 2, based on the Bureau's middle series of population projections, the numbers of cases of AD are expected to increase approximately 12 percent every 5 years. When adjusted to include all mixed cases and missed cases, the numbers in this table increase by 10 percent and 24 percent for any AD and moderate or severe AD, respectively. These adjustments yield, for example, in 2015, 3.2 million cases for any AD and 2.1 million cases for moderate or severe AD. 
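The projection step reduces to multiplying each age- and gender-specific rate by the matching Census population estimate and summing; a minimal sketch with hypothetical numbers:

```python
# Hypothetical rates and Census counts for two age/gender cells.
rates = {("F", "75-79"): 0.060, ("M", "75-79"): 0.040}
census = {("F", "75-79"): 4_000_000, ("M", "75-79"): 2_900_000}

total_cases = sum(rates[cell] * census[cell] for cell in rates)
# The text's 10-percent upward adjustment for mixed and missed cases (any AD):
adjusted_cases = total_cases * 1.10
```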
Because prevalence is partially determined by the length of time people with AD survive, improvements in AD care will tend to increase future prevalence, just as any general improvements in human longevity will. Thus, significant unanticipated improvements in the longevity of either the elderly in general or people with AD in particular may lead to even greater numbers of people with AD in the future. We developed age- and gender-specific prevalence rates based on our meta-analysis, stopping at the age of 95, when the data become sparse for both any AD and moderate or severe AD (see tables 3 and 4). Our first step in generating these estimates was to take severity of disease into account by excluding any data presented in the 18 published studies that included persons with questionable AD. Even with this exclusion, the estimates of AD prevalence for a given combination of age and gender that we obtained from the 18 studies varied greatly. (See app. II for a list of the studies.) For example, the estimated prevalence for men in the age interval from 85 to 95 ranged from 12 percent to 54 percent (see app. IV). Before attempting to integrate these data, we took severity of disease further into account by dealing separately with the three studies—numbers 1, 5, and 18 (see table IV.1)—that excluded mild cases. In our first integration of these data, the results of our meta-analysis show prevalence rates for any AD in the 15 studies that do not exclude mild cases. (See table 3.) These results tend to differ only slightly from estimates presented in previous articles reviewing AD prevalence. These rates are each increased by 10 percent when all mixed cases and missed cases are included. These results demonstrate that the AD prevalence rate increases sharply with age, doubling about every 5 years at least until about the age of 85, as expected from previous reports of this relationship. In addition, the rate is greater for women than for men. In our second integration of these data, we included only studies that provided the number of cases with moderate or severe AD. These studies either excluded mild cases (the three studies mentioned earlier) or enabled us to exclude mild cases by presenting the data for these cases separately (numbers 6 and 13). When only cases with moderate or severe AD are counted, the rates are lower. But the increase in prevalence with age and the higher rates for women are observed at the moderate and severe levels too. These rates are each increased by 24 percent when all cases of mixed dementia and missed cases are included. Two of the studies allow for the counting of only the cases with severe AD. When this is done, the resulting rates are still lower, as logic would dictate, but the margins of error are so large that we do not present these results for severe AD. NIA is currently supporting a number of studies of AD prevalence in the United States. Many of these studies include minority groups in addition to whites. For example, one such study looks at prevalence among African-Americans, Hispanics, and whites in a neighborhood of New York City. The results of these studies are expected to be published within the next several years. When they are, our knowledge about the extent of AD in the United States will be enhanced. Not only will our ability to estimate AD prevalence for the whole country improve but so will our ability to make such estimates for specific racial and ethnic populations. 
Projections for future numbers with AD will then be able to take into account the changing demographics of the country. The implications of these findings lie in the specific results and projections presented. The number of people with AD is at least 1.9 million now and can, with relatively conservative assumptions about population growth, be expected to grow to at least 2.9 million by 2015. Depending on severity, these cases will need some kind of long-term care. Such care will also be required by people with other disabling diseases, both dementias and nondementias. However, the kinds of care needed by people with AD and other dementias differ, for both patients and their caregivers, from the kinds needed by the disabled without any dementia. Noting that the results of our meta-analysis of AD prevalence (about 2 million) are lower than those NIA uses (about 4 million), NIA found three methodological limitations in our study that it believes call into question its validity. First, NIA noted that only 3 of the 18 studies are of U.S. populations and questions whether our combining of U.S. and non-U.S. populations is warranted, given that the U.S. data tend to yield higher prevalence rates than do the non-U.S. data. Second, NIA was concerned about variation in how some of the studies we reviewed applied diagnostic criteria. The agency was concerned that most of the questionable cases in the studies reviewed, which we did not include in our estimates, were actually mild cases of dementia. Further, NIA believes that many of the studies we relied on had insufficiently sensitive initial screens that led to their missing many mild cases of dementia. NIA was also concerned that in many of the studies reviewed, only persons with pure AD were coded as cases of AD but not those persons with a mixed dementia, including AD as a component. Third, NIA commented that the meta-analytic method we used cannot compensate for the large differences in rates observed across studies. Although we have made some adjustments to account for NIA’s criticisms, we believe our methodology remains useful for estimating AD prevalence. First, with regard to the use of non-U.S. studies, we note that when severity is taken into account, the results of U.S. and non-U.S. studies are comparable when one of the U.S. studies is excluded. This study, the East Boston study, yields prevalence estimates that are far higher than any of the other studies, suggesting a disparity in methodology rather than in population characteristics. As for NIA’s criticisms related to diagnostic criteria, we recognize the utility of including cases of mixed dementia and of adjusting for insensitive screens; we have, therefore, included in this report estimates to reflect these adjustments. We disagree, however, with the idea that questionable cases should be included. Although such people may become demented, they do not at the time of the prevalence survey satisfy the accepted diagnostic criteria for dementia or AD. Finally, we disagree with NIA’s conclusion that meta-analysis is an inappropriate method because of the heterogeneity of the prevalence rates in the studies we reviewed. With severity accounted for, 17 of the 18 studies we reviewed reported relatively homogeneous rates, with one outlying study. We also received a letter from the Alzheimer’s Association expressing similar concerns about our methodology. 
The Alzheimer’s Association is especially concerned about how forthcoming data from studies currently underway may change the picture of AD prevalence we present. Again, we acknowledge that NIA is supporting new and hopefully better studies of the extent of AD in the U.S. and that these should improve our understanding of how the disease is distributed in all the major subpopulations. The full text of NIA’s comments, along with our response, is included in appendix IX. We will send copies of this report to the directors of the National Institutes of Health, the National Institute on Aging, and the Administration on Aging. In addition, we will make copies available upon request to others who are interested. If you or your staff have any questions about this report, please call me at (202) 512-7119 or Donald M. Keller, Evaluator-in-Charge, at (202) 512-2932. GAO staff acknowledgments are listed in appendix X. The following 18 studies are the AD prevalence studies we reviewed for this report; 3 other studies are supplementary sources, providing data and other information about one or more of the studies reviewed, as indicated. The studies reviewed are numbered for reference in table IV.1. 1. Bachman, D.L., and others. “Prevalence of Dementia and Probable Senile Dementia of the Alzheimer Type in the Framingham Study.” Neurology, Vol. 42 (1992), pp. 115-19. 2. Brayne, C., and P. Calloway. “An Epidemiological Study of Dementia in a Rural Population of Elderly Women.” British Journal of Psychiatry, Vol. 155 (1989), pp. 214-19. 3. Canadian Study of Health and Aging Working Group. “Canadian Study of Health and Aging: Study Methods and Prevalence of Dementia.” Canadian Medical Association Journal, Vol. 150 (1994), pp. 899-913. 4. Coria, F., and others. “Prevalence of Age-Associated Memory Impairment and Dementia in a Rural Community.” Journal of Neurology, Neurosurgery, and Psychiatry, Vol. 56 (1993), pp. 973-76. 5. Corso, E.A., and others. “Prevalence of Moderate and Severe Alzheimer Dementia and Multi-Infarct Dementia in the Population of Southeastern Sicily.” Italian Journal of Neurological Sciences, Vol. 13 (1992), pp. 215-19. 6. D’Alessandro, R., and others. “Dementia in Subjects Over 65 Years of Age in the Republic of San Marino.” British Journal of Psychiatry, Vol. 153 (1988), pp. 182-86. 7. Evans, D.A., and others. “Prevalence of Alzheimer’s Disease in a Community Population of Older Persons: Higher Than Previously Reported.” Journal of the American Medical Association, Vol. 262 (1989), pp. 2551-56. 8. Fratiglioni, L., and others. “Prevalence of Alzheimer’s Disease and Other Dementias in an Elderly Urban Population: Relationship with Age, Sex, and Education.” Neurology, Vol. 41 (1991), pp. 1886-92. 9. Lobo, A., and others. “The Epidemiological Study of Dementia in Zaragoza, Spain.” In Psychiatry: A World Perspective. Proceedings of the VIII World Congress of Psychiatry, edited by C.N. Stefaniss, C.R. Soldators, and A.D. Rabavilas. Amsterdam: Elsevier, 1990, pp. 133-37. 10. Manubens, J.M., and others. “Prevalence of Alzheimer’s Disease and Other Dementing Disorders in Pamplona, Spain.” Neuroepidemiology, Vol. 14 (1995), pp. 155-64. 11. O’Connor, D.W., and others. “The Prevalence of Dementia as Measured by the Cambridge Mental Disorders of the Elderly Examination.” Acta Psychiatrica Scandinavica, Vol. 79 (1989), pp. 190-98. 12. Ott, A., and others. “Prevalence of Alzheimer’s Disease and Vascular Dementia: Association with Education. The Rotterdam Study.” British Medical Journal, Vol. 
310 (1995), pp. 970-73. 13. Pfeffer, R.I., A.A. Afifi, and J.M. Chance. “Prevalence of Alzheimer’s Disease in a Retirement Community.” American Journal of Epidemiology, Vol. 125 (1987), pp. 420-36. 14. Rocca, W.A., and others. “Prevalence of Clinically Diagnosed Alzheimer’s Disease and Other Dementing Disorders: A Door-to-Door Survey in Appignano, Macerata Province, Italy.” Neurology, Vol. 40 (1990), pp. 626-31. 15. Roelands, M., and others. “The Prevalence of Dementia in Belgium: A Population-Based Door-to-Door Survey in a Rural Community.” Neuroepidemiology, Vol. 13 (1994), pp. 155-61. 16. Rorsman, B., O. Hagnell, and J. Lanke. “Prevalence and Incidence of Senile and Multi-Infarct Dementia in the Lundby Study: A Comparison Between the Time Periods 1947-1957 and 1957-1972.” Neuropsychobiology, Vol. 15 (1986), pp. 122-29. 17. Skoog, I., and others. “A Population-Based Study of Dementia in 85-Year-Olds.” New England Journal of Medicine, Vol. 328 (1993), pp. 153-58. 18. Sulkava, R., and others. “Prevalence of Severe Dementia in Finland.” Neurology, Vol. 35 (1985), pp. 1025-29. Beckett, L.A., P.A. Scherr, and D.A. Evans. “Population Prevalence Estimates From Complex Samples.” Journal of Clinical Epidemiology, Vol. 45 (1992), pp. 393-402. (Relevant to study 7.) Ebly, E.M., and others. “Prevalence and Types of Dementia in the Very Old: Results from the Canadian Study of Health and Aging.” Neurology, Vol. 44 (1994), pp. 1593-1600. (Relevant to study 3.) Rocca, W.A., and others. “Frequency and Distribution of Alzheimer’s Disease in Europe: A Collaborative Study of 1980-1990 Prevalence Findings.” Annals of Neurology, Vol. 30 (1991), pp. 381-90. (Relevant to studies 2, 9, 11, 14, 16, and 18.) The source of the diagnostic criteria is G. McKhann and others, “Clinical Diagnosis of Alzheimer’s Disease: Report of the NINCDS [National Institute of Neurological and Communicative Disorders and Stroke]-ADRDA Work Group Under the Auspices of the Department of Health and Human Services Task Force on Alzheimer’s Disease.” The criteria for the clinical diagnosis of probable Alzheimer’s disease (AD) include dementia established by clinical examination and documented by the Mini-Mental Test, Blessed Dementia Scale, or some similar examination and confirmed by neuropsychological tests; deficits in two or more areas of cognition; progressive worsening of memory and other cognitive functions; no disturbance of consciousness; onset between the ages of 40 and 90, most often after the age of 65; and absence of systemic disorders or other brain diseases that in and of themselves could account for the progressive deficits in memory and cognition. The diagnosis of probable AD is supported by progressive deterioration of specific cognitive functions such as language (aphasia), motor skills (apraxia), and perception (agnosia); impaired activities of daily living and altered patterns of behavior; family history of similar disorders, particularly if confirmed neuropathologically; and laboratory results of normal lumbar puncture as evaluated by standard techniques; normal pattern or nonspecific changes in the electroencephalogram, such as increased slow-wave activity; and evidence of cerebral atrophy on computerized tomography (CT), with progression documented by serial observation. 
Other clinical features consistent with the diagnosis of probable Alzheimer’s disease, after exclusion of causes of dementia other than Alzheimer’s disease, include plateaus in the course of progression of the illness; associated symptoms of depression, insomnia, incontinence, delusions, illusions, hallucinations, catastrophic verbal, emotional, or physical outbursts, sexual disorders, and weight loss; other neurological abnormalities in some patients, especially with more advanced disease, including motor signs such as increased muscle tone, myoclonus, or gait disorder; seizures in advanced disease; and CT normal for age. Criteria that make the diagnosis of probable Alzheimer’s disease uncertain or unlikely include sudden, apoplectic onset; focal neurological findings such as hemiparesis, sensory loss, visual field deficits, and lack of coordination early in the course of the illness; and seizures or walking disturbances at the onset or early in the course of the illness. Clinical diagnosis of possible Alzheimer’s disease may be made on the basis of the dementia syndrome, in the absence of other neurologic, psychiatric, or systemic disorders sufficient to cause dementia, and in the presence of variations in the onset, in the presentation, or in the clinical course; may be made in the presence of a second systemic or brain disorder sufficient to produce dementia, which is not considered to be the cause of the dementia; and should be used in research studies when a single, gradually progressive severe cognitive deficit is identified in the absence of other identifiable cause. We defined relevant studies as published studies of original research satisfying each of three inclusion criteria. The studies had to (1) include age- and gender-specific prevalence rates of AD, (2) diagnosed by NINCDS-ADRDA (or equivalent) criteria (see app. II), along with the corresponding sample sizes, and (3) from white (that is, European-American or European) populations. Because the AD prevalence rate is known to vary by age and may vary by gender, overall rates for elderly people are likely to be sensitive to differences among populations in age and gender. AD prevalence rates from different populations can be validly compared if the rates are specific to a particular combination of age and gender (for example, the rate for women between the ages of 70 and 74). Thus, we include only studies that present age- and gender-specific AD prevalence rates, along with the sample sizes needed to weight them in a quantitative integration. The published studies presenting these rates include populations from North America, Europe, and Asia. These populations are typically small, often a neighborhood within a city or a small town, and none of them individually or in any combination can be assumed to be representative of the U.S. population. The white (that is, European-American and European) populations studied contain few participants not of European background. The best that can be done until a sufficiently large population representative of the United States is studied is to integrate the results from available studies, excluding those with AD prevalence rates that are likely to differ systematically from those of the majority white population of the United States. Prevalence rates for AD from Asian countries tend to be lower than those observed in Europe and North America, although Asian-American rates are closer to those of the white population. 
The reason for this difference is not known, but we decided that, to be cautious in extracting prevalence rates, we would exclude the numerous studies of Asian and Asian-American populations. This leaves us with only studies of populations not known to differ systematically from European-Americans with respect to AD prevalence. If different diagnostic criteria are used to ascertain cases in various studies, then observed differences in AD prevalence may reflect the different criteria rather than true population differences in AD prevalence. Integrating only prevalence estimates with the same diagnostic criteria can reduce the effects of criteria as a source of differences among estimates. In order to minimize the possible role of differences in diagnostic criteria, we include only studies using the NINCDS-ADRDA criteria for probable AD (or for probable and possible AD—see app. II) or equivalent diagnostic criteria. We used a systematic computer-assisted search of the medical and social science literature, supplemented by expert advice and references found in the literature, in order to locate published studies on AD prevalence that meet the inclusion criteria listed above. We found 18 studies meeting these criteria (see app. I). Using published results (in tables or graphs) from each of the studies, we recorded the age- and gender-specific AD prevalence rates for all reported age intervals with lower limits of 60 years or older. In most of the studies, the AD rates excluded all other kinds of dementia, but in four of them a number of cases of mixed dementia (cases diagnosed with AD and another dementia) were included. For each age interval reported, we recorded the midpoint. If the open-ended age interval “85 and older” was used, we considered it as extending to 95 and recorded the midpoint (90.5). We considered “90 and older” and “95 and older” as extending to 99. When prevalence rates were not given explicitly, we computed them from available data or read them from graphs. When differing estimates of the same rate were presented in different articles about the same study, we consulted an expert to determine the correct values. The rates for analysis are listed in table IV.1, with each of the 18 studies numbered, as identified in appendix I. We refer to cases of mild, moderate, or severe AD as cases of “any AD.” Some rating systems include other categories of severity. For example, “questionable dementia”—a category intermediate between “normal” and mild dementia—is used in the Clinical Dementia Rating (CDR) for people who are only slightly impaired and do not satisfy the NINCDS-ADRDA criteria for dementia. People in this intermediate category may or may not be counted as cases of dementia in different studies. We do not consider them to be cases of dementia, however. [Table IV.1, AD Prevalence Rates According to Age and Gender in 18 Studies We Reviewed, appears here. Its panels cover rates for mild, moderate, and severe AD and, in one study, questionable through severe AD; its notes indicate that some rates are undefined and that, in one presentation, different age categories were used for men and women.] By our definition of AD, one of the 18 sets of prevalence rates—the set from the Southern California study (study 13)—does not qualify because it includes cases of questionable dementia. Therefore, we omitted these rates from further analysis. However, using the published reports of this study, it is possible to isolate the cases of AD at each of the severity levels—mild, moderate, or severe—for further analysis. 
When prevalence rates were not given explicitly, we computed them from available data or read them from graphs. When differing estimates of the same rate were presented in different articles about the same study, we consulted an expert to determine the correct values. The rates for analysis are listed in table IV.1, with each of the 18 studies numbered, as identified in appendix I. We refer to cases of mild, moderate, or severe AD as cases of “any AD.” Some rating systems include other categories of severity. For example, “questionable dementia”—a category intermediate between “normal” and mild dementia—is used in the Clinical Dementia Rating (CDR) for people who are only slightly impaired and do not satisfy the NINCDS-ADRDA criteria for dementia. People in this intermediate category may or may not be counted as cases of dementia in different studies. We do not consider them to be cases of dementia, however. (Table IV.1, AD Prevalence Rates According to Age and Gender in the 18 Studies We Reviewed, is not reproduced here. Its notes indicate that some rates were undefined and that, in one study’s presentation of these data, different age categories were used for men and women; the severity categories covered range from mild, moderate, and severe to questionable, mild, moderate, and severe.) By our definition of AD, one of the 18 sets of prevalence rates—the set from the Southern California study (study 13)—does not qualify because it includes cases of questionable dementia. Therefore, we omitted these rates from further analysis. However, using the published reports of this study, it is possible to isolate the cases of AD at each of the severity levels—mild, moderate, or severe—for further analysis. We did this, and the resulting data are included in the analyses that follow. We extracted not just the overall AD prevalence rates as indicated above but also, wherever possible, the rates according to each of three cumulative severity levels: (1) cases of mild or greater severity (any AD), (2) cases of moderate or severe AD, and (3) cases of severe AD. The mild or greater severity level would include all cases. The moderate or severe level would include only cases of moderate or severe dementia, and the severe level would include only the cases of severe dementia. For many of the studies, the data were presented in such a way that a breakdown of this kind was not possible. For 13 of the 18 studies (2-4, 7-12, and 14-17), the only information available about severity for the age- and gender-specific AD prevalence rates is that all cases of mild or greater severity are included. Their prevalence rates correspond exactly to their overall rates, as listed in table IV.1. For three more of the studies (1, 5, and 18), the cases include only moderate or severe AD. Their prevalence rates also correspond exactly to their overall rates in table IV.1, but the prevalence rates represent the level of moderate or severe AD. In the remaining two studies (6 and 13), age- and gender-specific AD prevalence rates are presented for different levels of severity. It is therefore possible to compute from these two studies, by the process of summation, the rates of each cumulative severity level: any AD, moderate or severe AD, and severe AD. To obtain relatively precise estimates, based on all the data for each level of severity, we quantitatively integrated the estimates. To integrate the data for a given level to arrive at estimates of age- and gender-specific prevalence, we used a method previously employed by Maria Corrada and her associates at Johns Hopkins University. This method, which pools the data of individuals from each of the studies, involves fitting a logistic regression model to the data—age interval midpoints, gender, numbers of participants, numbers of cases, and levels of severity—from a series of relevant prevalence studies so as to estimate the age- and gender-specific prevalence rates for each level of severity. Such a model implies that (1) AD prevalence at a given level is determined by age and gender and (2) the quantitative nature of the relationship is of the kind known to statisticians as logistic. Logistic regression models are similar to the more commonly encountered linear regression models, but they are especially designed to analyze variables that take on only two values (variables that may be called binary or dichotomous). An example of such a variable is the presence or absence of disease in a person. The status of all people in a population with respect to this binary variable determines the prevalence rate for that population. Thus, logistic regression is an appropriate method for analyzing prevalence-rate data. We applied the approach of Corrada and her colleagues. Our work differs from theirs both in that we were able to include some more recent studies than they did and in the way in which we took severity into account.
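To make the pooling method concrete, the following sketch fits a binomial logistic regression of the kind described above to hypothetical cell-level counts. The data values are illustrative only (they are not taken from the 18 studies), and the model form is simplified, with age entering linearly; the published method may use a different specification.

```python
# A minimal sketch of the pooled logistic regression described above, using
# hypothetical counts.  Each row is one age-gender cell from one study:
# age-interval midpoint, an indicator for female, participants, and AD cases.
import numpy as np
import statsmodels.api as sm

cells = np.array([
    # midpoint, female, participants, cases  (illustrative values only)
    [62.5, 0, 380,  2], [62.5, 1, 410,  3],
    [72.5, 0, 290,  7], [72.5, 1, 310, 11],
    [82.5, 0, 160, 16], [82.5, 1, 210, 28],
    [90.5, 0,  60, 14], [90.5, 1, 110, 33],
])
age, female, n, cases = cells.T

# Fit log(p / (1 - p)) = b0 + b1*age + b2*female.  Supplying each cell's
# (cases, non-cases) counts lets larger samples carry more weight, which is
# the weighting role the sample sizes play in the integration.
X = sm.add_constant(np.column_stack([age, female]))
fit = sm.GLM(np.column_stack([cases, n - cases]), X,
             family=sm.families.Binomial()).fit()

# Age- and gender-specific prevalence estimates from the fitted model.
for a in (65, 75, 85, 95):
    for f, who in ((1, "women"), (0, "men")):
        print(f"age {a}, {who}: {fit.predict([[1.0, a, f]])[0]:.1%}")
```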
The approach was applied to three sets of data. One set was composed of the age- and gender-specific AD prevalence rates from the 15 studies that include such rates for cases with any AD. The second set was composed of the rates from the five studies that include these rates for cases with moderate or severe AD. The third set was composed of the rates from the two studies that include these rates for cases with severe AD. The results of this application are presented in tables 3 and 4, which correspond to the first two sets of data. The results of the application to the third set are not presented because these results included relatively imprecise prevalence estimates. We did not extend our estimates beyond the age of 95 because few people in the studies were older than 95. These results can be compared with other reviews of the AD literature. Most published reviews of the literature on AD prevalence rates are qualitative. These reviews were not designed to obtain prevalence estimates through a systematic quantitative integration of the study data, such as that provided by meta-analysis. One representative qualitative review notes that rates are typically estimated at about 0.5 percent, 3 percent, and 10 percent for the ages of 65, 75, and 85, respectively, both genders combined. The percentages from one of the few quantitative reviews—based on combining data from individual studies—were similar to those from the qualitative reviews. The strengths of the studies reviewed are that they include representative samples of well-defined populations that we believed to be similar to the population of the United States with respect to AD prevalence and that they diagnosed cases by accepted diagnostic criteria for AD. There are some limitations concerning how well the study populations can represent the population of interest, the residents of the United States. Although each sample represents a well-defined population and each population provides estimates of AD prevalence, the samples, taken individually or combined, are not representative of the U.S. population with respect to all likely determinants of the AD prevalence rate. Two of the possibly significant ways the study populations differ from the U.S. population are discussed here. In addition, the geographic difference must be acknowledged: most of the studies included in our analysis are based on European populations. We know of no argument, however, that the prevalence of AD for white Americans differs from that of Europeans. None of the studies include significant amounts of data from major U.S. subpopulations, such as blacks (that is, African-Americans). It is not known whether blacks or other minorities (for example, Native Americans) have different prevalence rates than do whites and Europeans. As indicated in appendix III, there is some evidence that Asian-American rates differ little from the rates of white populations, in spite of the racial similarity between Asian-Americans and Asians; Asian rates tend to be systematically lower than those for whites. If minorities do have different rates, it is desirable to know their rates for at least two reasons: (1) these rates affect the overall U.S. estimates and (2) the kinds of care these minorities require may differ, for cultural reasons, from the kinds required by other Americans. NIH supports research designed to compare the AD prevalence rates of different racial and ethnic groups. Most of the studies include institutionalized people in the populations they survey, but two of the three U.S. studies do not. Logically, one might expect that since AD rates for the institutionalized are most likely higher than those for the noninstitutionalized (that is, community dwellers), omitting the institutionalized would lower prevalence estimates. 
There is little evidence from a previous analysis, however, that such omission has any effect. Further, the two U.S. studies that omit the institutionalized present prevalence estimates that are high relative to those from most of the other studies. It may be that too small a proportion of the elderly population is institutionalized for the assumed higher AD prevalence rate to have mattered much in these studies. Nevertheless, prevalence studies of AD should ideally include all elderly people, whether institutionalized or noninstitutionalized. We have no reason to conclude, however, that variation across studies in the handling of institutional status compromises the validity of the estimates to any significant extent. All prevalence studies are based on conventions that may be questioned. When a convention is judged to be inappropriate for a given purpose, it may lead to biased prevalence estimates. Our use of quantitative integration to generate estimates and projections is, in part, an attempt to get around some of the inappropriate conventions in individual studies by diluting them, if not by canceling them out. Two common conventions, in particular, seem inappropriate for our purposes, although they were reasonable to those who adopted them in the 1980s and early 1990s, when most of the reviewed work was done. One is that people with mixed dementia (AD and another kind of dementia) should not be counted as AD cases. Most of the studies we reviewed have counted as cases of AD only people with “pure” AD, AD in the absence of other dementia. Although this convention was useful for isolating those with no known cause of dementia—those with AD only—it is illogical if one wants to know how many people have AD. A person with both AD and another dementia is logically a person with AD. If we drop the usual convention and instead treat all cases of AD the same, regardless of other dementias, we can then infer that the estimates presented are too low. This is because only four of the studies include mixed cases in the age- and gender-specific AD rates; therefore, our estimates, based on an integration of the published rates, underestimate the true rates of all AD cases. Given the available data, it is not possible to derive, on the basis of our revised convention, age- and gender-specific estimates of the true rates of all AD. We can provide rough overall ones, however. Ten of the studies enable us to estimate the overall percentage increase in the number of AD cases that would be obtained if mixed cases are added. These estimates vary, with a median (middlemost) value of 20 percent. If none of the studies reviewed included mixed cases, then it would be reasonable to assume that this 20 percent is the adjustment factor needed to increase our estimates by taking into account the mixed cases. However, the 11 studies of mild or more severe dementia that do not include mixed cases in their AD counts also include only 29 percent of the participants in the studies of any AD; in addition, these 11 studies happen to use, on the average, relatively small samples. Thus, the value of 20 percent greater prevalence with mixed cases can only be applied to this 29 percent of the participants in the studies of any AD, yielding an overall percentage bias of 5.8 percent (20 percent times 29 percent). 
The 20-percent adjustment for mixed cases can also be applied to 90 percent of the participants in studies of moderate or severe AD since this is the percentage of participants in studies of similar populations that exclude mixed cases from their AD counts. The resulting adjustment factor is 18 percent (20 percent times 90 percent). When this factor is applied to the estimates given above, the findings for 1995 are 2 million people with AD of any kind and 1.3 million cases of moderate or severe AD, rather than the 1.9 million and 1.1 million, respectively, that were originally reported. The other common convention that now seems inappropriate is that prevalence counts are not corrected for the likely number of people with AD missed by initial screening tests (and therefore not given workups for dementia). In some of the studies we reviewed, we found a potential problem: some AD cases were missed as a result of the use of insufficiently sensitive screens for which no corrections in the age- and gender-specific rates were made. When such a problem is likely, it is illogical to present prevalence rates without correcting the prevalence estimates for the expected number of people with AD missed by the screen. Because studies that do not avoid the problem of missed cases generate rates that are too low, our estimates, to the extent they are based on such studies, are underestimates of the true rates. As with the adjustment for mixed cases, it is not possible to derive age- and gender-specific rates adjusted for missed cases, but a rough overall adjustment factor can be derived. Of the 15 studies we reviewed that investigated mild dementia, 5 can be identified as having a known or possible problem with missed cases. Of these five studies, two present their overall percentage increases as a result of missed cases (although they do not correct the age- and gender-specific rates for these); the higher of these two increases is 7.2 percent. The correction rate undoubtedly varies with the specific screen used, but we adopted this 7.2 percent as a representative value. If all the studies we reviewed had a possible problem with missed cases, then it would be reasonable to assume that this 7.2 percent is the adjustment factor needed to increase our estimates so as to take into account the expected missed cases. The five studies known to have a possible problem with missed cases, however, use relatively large samples, including 50 percent of all participants in the studies of any AD. Thus, the value of 7.2 percent greater prevalence with missed cases can be applied only to the 50 percent of the participants in the studies of any AD, yielding an overall percentage bias of 3.6 percent (7.2 percent times 50 percent). The 7.2-percent adjustment for missed cases can also be applied to 71 percent of the participants in studies of moderate or severe AD, because 71 percent is the percentage of the participants in studies of moderate or severe AD whose data, in whole or in part, were not corrected for missed cases. The resulting adjustment factor is 5.1 percent (7.2 percent times 71 percent). These adjustments for missed cases, when made in addition to the adjustment for mixed dementia that was already made, yield an increase of 10 percent in 1995, for an adjusted estimate of 2.1 million cases for any AD. The corresponding adjustment for moderate or severe AD is 24 percent, for a 1995 total of 1.4 million cases at these levels of severity. 
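The adjustment arithmetic can be verified with a short calculation. In the sketch below, each overall bias is the product of the relevant percentage increase and the share of participants to which it applies; the report does not state how the two biases were combined, but compounding them multiplicatively reproduces the reported 10- and 24-percent totals.

```python
# A worked check of the adjustment arithmetic above, using the percentages
# reported in the text.
def bias(increase, share):
    """Overall bias when an increase applies to only a share of participants."""
    return increase * share

mixed_any     = bias(0.20, 0.29)    # mixed cases, any AD           -> 5.8%
mixed_modsev  = bias(0.20, 0.90)    # mixed cases, moderate/severe  -> 18%
missed_any    = bias(0.072, 0.50)   # missed cases, any AD          -> 3.6%
missed_modsev = bias(0.072, 0.71)   # missed cases, moderate/severe -> 5.1%

# Compounding the two biases multiplicatively (our reading) gives roughly
# the reported overall adjustments of 10 and 24 percent.
total_any    = (1 + mixed_any) * (1 + missed_any) - 1        # ~0.096
total_modsev = (1 + mixed_modsev) * (1 + missed_modsev) - 1  # ~0.240

print(f"any AD, 1995: {1.9 * (1 + total_any):.2f} million")                    # ~2.1
print(f"moderate or severe AD, 1995: {1.1 * (1 + total_modsev):.2f} million")  # ~1.4
```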
Table VIII.1 facilitates the comparison of the individual studies reviewed in the meta-analysis of the prevalence of any AD. The basis of comparison is the age- and gender-specific numbers of cases of any AD projected for the U.S. population of 1995 aged 75 to 89 years. As explained in the report, only 9 of the studies we reviewed provide the data needed for this comparison; the meta-analysis itself is based on these 9 plus an additional 6, for a total of 15. The following are GAO’s comments on NIA’s June 4, 1997, letter. 1. We disagree with NIA’s comment about the countries in which the prevalence studies were conducted. Studies of European and other countries with predominantly white populations are legitimately used to arrive at prevalence rates characteristic of the white population in the United States. None of the populations studied, including those from the United States, is representative of the white U.S. population with respect to ethnic, linguistic, and socioeconomic variables, but these have not been shown to determine AD prevalence rates. The relatively high rates of the combined U.S. studies are driven by the contributions of a single study that focused on the population of East Boston. This study, the one that NIA bases its estimates on, yields rates that are higher than those of all other studies, including the other U.S. studies. The difference between the U.S. and non-U.S. studies reduces to a difference between East Boston and all other studies. 2. The report has been changed to reflect the likely limiting role of screen cut-off scores. 3. We disagree with NIA’s point about “questionable” dementia. In accordance with the conventions of the field, we defined people with AD as those rated as having mild or more severe dementia. While it is true that a certain proportion of those with questionable dementia will develop AD, prevalence estimates are traditionally given for persons who have a disease and do not include those who may get the disease at a later time. 4. The report has been changed to reflect the likely role of excluding mixed dementia. 5. To examine the role of heterogeneity among studies in our meta-analysis, we took the following steps: We analyzed the data for any AD, focusing on each study as a possible source of variation. Then we dropped the one study (East Boston) found to be a statistically significant source of variation relative to the set of studies as a whole, and we reanalyzed the remaining ones. No other study was a statistically significant source of variation, and thus the heterogeneity among studies was eliminated with the elimination of that single outlier. If the remaining studies are used to generate prevalence estimates, these are somewhat lower than the original ones, for example, by 5 percentage points for men at the age of 95. One can debate about which estimates are better. However, although the heterogeneity of the data can be eliminated by dropping the one outlier, we are reluctant to exclude the data from a major American study of prevalence. This report was prepared under the direction of George Silberman. Sushil K. Sharma, Assistant Director, and Lê X. Hy were responsible for much of the research and analysis. 
Pursuant to a congressional request, GAO provided estimates of the prevalence of Alzheimer's Disease (AD) in the United States, including projections for prevalence in the near future. GAO noted that: (1) on the basis of existing studies, it is estimated that at least 1.9 million Americans 65 years of age or older suffered from any level of AD--mild, moderate, or severe--in 1995; (2) this number would be closer to 2.1 million if adjusted for the omissions, in many of these studies, of cases with mixed dementia and cases missed by the screening instruments used; (3) when only people likely to need at least some active assistance with personal care are considered, the results of GAO's meta-analysis show slightly more than 1 million people with AD over the age of 64; (4) the result would be higher--closer to 1.4 million people with AD--if all mixed cases and missed cases were included in all studies; (5) consistent with all earlier research, the results for both any AD and moderate or severe AD demonstrate that the prevalence rates increase sharply with age, doubling about every 5 years, at least until the age of 85, when the increase begins to slow; (6) also consistent with some earlier research, the estimated rates for women are higher than for men; (7) projecting the number of people with AD into the future gives some indication of the long-term care and research challenges that will face the nation as people grow older; (8) GAO's meta-analysis, when combined with Bureau of the Census projections, shows that more than 2.9 million people would have at least a mild case of AD in 2015; of these, over 1.7 million would need active assistance in personal care; (9) these figures jump to 3.2 million and 2.1 million, respectively, when mixed cases and missed cases are included; (10) given the uncertainty surrounding existing estimates of AD, a number of studies are now under way, supported by the National Institute on Aging, that should yield better prevalence estimates of African-Americans, Hispanics, and other nonwhite subpopulations; and (11) the results of these studies, expected to be published over the next several years, should improve GAO's picture of AD prevalence for the United States as a whole, as well as for the specific population segments studied.
DOD is one of the largest and most complex organizations in the world. In support of its military operations, the department performs an assortment of interrelated and interdependent business functions, including logistics management, procurement, health care management, and financial management. As we have previously reported, the DOD systems environment that supports these business functions is complex and error prone, and is characterized by (1) little standardization across the department, (2) multiple systems performing the same tasks, (3) the same data stored in multiple systems, and (4) the need for data to be manually entered. For fiscal year 2015, the department requested about $10 billion for its business system investments. According to the department, as of April 2015, its environment includes approximately 2,179 business systems. Of these systems, DOD reports that, for fiscal year 2015, the department approved certification requests for 1,182 business systems covered by the fiscal year 2005 NDAA’s certification and approval requirements. Figure 1 shows how many of these 1,182 covered systems are associated with each functional area. DOD currently bears responsibility, in whole or in part, for about half (17 of 32) of the areas across the federal government that we have designated as high risk. Seven of these areas are specific to the department, and 10 other high-risk areas are shared with other federal agencies. Collectively, these high-risk areas in major business operations are linked to the department’s ability to perform its overall mission and affect the readiness and capabilities of U.S. military forces. As such, DOD’s business systems modernization is one of the department’s specific high-risk areas and is essential for addressing many of the department’s other high-risk areas. For example, modernized business systems are integral to the department’s efforts to address its financial, supply chain, and information security management high-risk areas. Congress included provisions in the fiscal year 2005 NDAA, as amended, that are aimed at ensuring DOD’s development of a well-defined business enterprise architecture and associated enterprise transition plan, as well as the establishment and implementation of effective investment management structures and processes. The act requires DOD to, among other things:
- establish an investment approval and accountability structure along with an investment review process;
- not obligate funds for a defense business system program with a total cost in excess of $1 million over the period of the current future-years defense program unless the approval authority certifies that the business system program meets specified conditions, including complying with the business enterprise architecture and having appropriate business process reengineering conducted;
- develop a business enterprise architecture that covers all defense business systems;
- develop an enterprise transition plan for implementing the architecture; and
- identify systems information in DOD’s annual budget submissions.
The fiscal year 2005 NDAA also requires that the Secretary of Defense submit an annual report to the congressional defense committees on the department’s compliance with these provisions. DOD submitted its most recent annual report to Congress on April 6, 2015, describing steps taken, under way, and planned to address the act’s requirements. 
DOD’s approach to business systems modernization includes reviewing systems annually to ensure that they comply with the fiscal year 2005 NDAA’s business enterprise architecture and business process reengineering requirements. This effort includes both a certification of compliance by lower-level department authorities and an approval of this certification by higher-level department authorities. According to the act, this certification and approval is to occur before systems are granted permission to obligate funds for a given fiscal year. These efforts are to be guided by DOD’s Chief Management Officer (CMO) and Deputy Chief Management Officer (DCMO). Specifically, the CMO’s responsibilities include developing and maintaining a departmentwide strategic plan for business reform; establishing performance goals and measures for improving and evaluating overall economy, efficiency, and effectiveness; and monitoring and measuring the progress of the department. The DCMO’s responsibilities include recommending to the CMO methodologies and measurement criteria to better synchronize, integrate, and coordinate the business operations to ensure alignment in support of the department’s warfighting mission and developing and maintaining the department’s enterprise architecture for its business mission area. Table 1 describes selected roles and responsibilities and the composition of key governance entities and positions related to business systems modernization as they were documented for the fiscal year 2015 business system certification and approval cycle. Within the military departments, the entities described in table 1 are supported by portfolio managers who oversee groups of business system investments within specific functional areas. For example, the Department of the Navy’s financial management portfolio manager is responsible for overseeing the Navy’s portfolio of financial management systems. In order to manage and oversee the department’s business operations and approximately 1,180 covered defense business systems, the Office of the DCMO developed the Integrated Business Framework. According to officials from the office, this framework is used to align the department’s strategic objectives—laid out in the National Security Strategy, Quadrennial Defense Review, and Strategic Management Plan—with its defense business system investments. Using the overarching goals of the Strategic Management Plan, principal staff assistants developed six functional strategies that cover nine functional areas. These functional strategies are to define business outcomes, priorities, measures, and standards for a given functional area within DOD. The business objectives and compliance requirements laid out in each functional strategy are to be integrated into the business enterprise architecture. The pre-certification authorities in the Air Force, Navy, Army, and other departmental organizations use the functional strategies to guide the development of organizational execution plans, which are to summarize each component’s business strategy for each functional area. Each plan includes a description of how the component’s goals and objectives align with those in the functional strategies and the Strategic Management Plan. In addition, each organizational execution plan includes a portfolio of defense business system investments organized by functional area. The components submit each of these portfolios to the Defense Business Council for certification on an annual basis. 
According to the department’s 2015 Congressional Report on Defense Business Operations, for the fiscal year 2015 certification and review cycle, the department empowered the military department chief management officers to manage their business systems portfolios and conduct portfolio reviews. Results were presented to the Defense Business Council and were to address topics such as major improvements and cost reductions, return on investment, risks and challenges, deviations from prior plans, and future goals. According to DOD’s investment management guidance, for the fiscal year 2015 certification and approval cycle, the Defense Business Council was to review the organizational execution plans and associated portfolios based on four investment criteria—compliance, strategic alignment, utility, and cost—to determine whether or not to recommend the portfolio for certification of funding. The Vice Chairman of the Deputy’s Management Action Group/Defense Business Systems Management Committee was to approve certification decisions and then document the decision in an investment decision memorandum. These memoranda were to indicate whether an individual organizational execution plan has been certified; conditionally certified (i.e., obligation of funds has been certified and approved but may be subject to conditions that restrict the use of funds, a time line for obligation of funds, or mandatory changes to the portfolio of business systems); or not certified (i.e., certification is not approved due to misalignment with strategic direction, mission needs, or other deficiencies).
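The decision logic just described can be summarized schematically. The sketch below is purely illustrative: the class names and the rule that all four criteria must be satisfied are our assumptions for illustration, not DOD’s documented procedure, but the three outcomes match those recorded in an investment decision memorandum.

```python
# An illustrative sketch (not DOD tooling; the all-criteria-must-pass rule is
# an assumption) of the three certification outcomes described above.
from dataclasses import dataclass, field
from enum import Enum

class Certification(Enum):
    CERTIFIED = "certified"
    CONDITIONALLY_CERTIFIED = "conditionally certified"
    NOT_CERTIFIED = "not certified"

@dataclass
class OrganizationalExecutionPlanReview:
    compliance: bool           # complies with the business enterprise architecture
    strategic_alignment: bool  # aligned with strategic direction and mission needs
    utility: bool
    cost: bool                 # e.g., acceptable return on investment
    conditions: list = field(default_factory=list)  # restrictions, time lines, changes

    def decision(self) -> Certification:
        if not all((self.compliance, self.strategic_alignment, self.utility, self.cost)):
            return Certification.NOT_CERTIFIED
        return (Certification.CONDITIONALLY_CERTIFIED if self.conditions
                else Certification.CERTIFIED)

# Example: a plan that meets all criteria but carries a funding restriction.
review = OrganizationalExecutionPlanReview(True, True, True, True,
                                           conditions=["restrict use of funds"])
print(review.decision().value)  # -> conditionally certified
```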
DOD’s business enterprise architecture is intended to serve as a blueprint for the department’s business transformation efforts. In particular, the architecture is to guide and constrain implementation of interoperable defense business systems by, among other things, documenting the department’s business functions and activities and the business rules, laws, regulations, and policies associated with them. According to DOD, its architecture is being developed using an incremental approach, where each new version of the architecture addresses business mission area gaps or weaknesses based on priorities identified by the department. The department’s business enterprise architecture focuses on documenting information associated with its end-to-end business process areas (e.g., hire-to-retire and procure-to-pay). These end-to-end business process areas may occur across the department’s nine functional areas. For example, hire-to-retire occurs within the human resources management functional area, while the cost management business process area occurs across the acquisition, financial management, human resources management, installations and environment, and logistics and materiel readiness functional areas. According to DOD officials, the current approach to developing the business enterprise architecture is both a “top-down” and a “bottom-up” approach. Specifically, the architecture focuses on developing content to support investment management and strategic decision making and oversight (top-down) while also responding to department needs associated with supporting system implementation, system integration, and software development (bottom-up). Consistent with DOD’s tiered approach to business systems management, the department’s approach to developing its business enterprise architecture involves the development of a federated enterprise architecture, where member architectures (e.g., Air Force, Army, and Navy) conform to an overarching corporate or parent architecture and use a common vocabulary. This approach is to provide governance across all business systems, functions, and activities within the department and improve visibility across the respective efforts. DOD defines business process reengineering as a logical methodology for assessing process weaknesses, identifying gaps, and implementing opportunities to streamline and improve the processes to create a solid foundation for success in changes to the full spectrum of operations. DOD’s reengineering efforts are intended to help the department rationalize its covered business system portfolio, improve its use of performance management, control scope changes, and reduce the cost of fielding business capability. According to DOD officials, the department has taken a holistic approach to business process reengineering, which includes a portfolio and end-to-end perspective. It has also issued business process reengineering guidance that calls for alignment of defense business systems within the Organizational Execution Plan to its functional strategy’s strategic goals. An important component of the department’s business process reengineering efforts is the problem statement development and review process. A problem statement is developed when a defense business system is seeking certification for a development or modernization effort. The statement is to include, among other things, a description of the problem that the system intends to address and a discussion of the costs, benefits, and risks of various alternatives that were considered. As part of the annual certification and approval process, problem statements are to be reviewed to verify that appropriate business process reengineering has been conducted on investments seeking certification. The department has implemented 5 of the 16 recommendations that GAO has made since June 2011 to address each of the overarching provisions for improving business systems management in the fiscal year 2005 NDAA. The fiscal year 2005 NDAA, as amended, includes provisions associated with developing a business enterprise architecture and enterprise transition plan, improving the department’s investment management structures and processes, improving its efforts to certify defense business systems, and complying with mandated budgetary reporting. Since 2011, we have issued four reports in response to the act’s requirement that we assess the actions taken by the department to comply with the act’s provisions. In those reports, we have made recommendations to address each of the act’s overarching provisions for improving business systems management. Table 2 identifies the recommendations we have made since 2011 associated with the fiscal year 2005 NDAA. Table 3 presents a summary of the current status of these recommendations. Appendix II provides additional information about the status of each recommendation. As of April 2015, the department had implemented 5 of the 16 recommendations that we have made since June 2011. For example, the department has implemented the recommendation to improve its reporting of business system data in its annual budget request. 
In particular, the department has established common elements in its three primary repositories used for tracking information about business systems, which allows information about individual business systems to be matched across the repositories. In addition, the Office of the CIO demonstrated that it conducts periodic data quality assessments. As a result, the department is better positioned to report more reliable information in its annual budget request and to maintain more accurate information about business systems to support its efforts to manage them. In addition, the department has improved the alignment of its Planning, Programming, Budgeting, and Execution process with its business systems certification and approval process. For example, according to the department’s February 2015 certification and approval guidance, Organizational Execution Plans are to include information about certification requests for the upcoming fiscal year as well as over the course of the Future Years Defense Program. As a result, the department’s business system certification and approval process can support better informed decisions about system certifications and inform recommendations on the resources provided to defense business systems as part of the Planning, Programming, Budgeting, and Execution process. The department has partially implemented the remaining 11 recommendations. For example, the department’s February 2015 investment management guidance, which describes DOD’s business system certification and approval process, identifies four criteria and specifies the associated assessments that are to be conducted when reviewing and evaluating component-level organizational execution plans in order to make a portfolio-based investment decision. The guidance also states that return on investment should be considered when evaluating program cost. However, it does not call for the use of actual-versus-expected performance data and predetermined thresholds. Further, the Office of the DCMO has developed a draft resource allocation plan for each of its directorates and their respective divisions. This draft plan includes staffing profiles that describe each division’s needed staff competencies and qualifications. However, the Office of the DCMO did not demonstrate that it has addressed other important aspects of strategic human capital planning. For example, the office did not demonstrate that it has developed a skills inventory, needs assessment, gap analysis, and plan to address identified gaps, as called for by our recommendation. Appendix II provides additional information about the recommendations that DOD has fully and partially implemented. Implementing the remaining 11 recommendations will improve DOD’s modernization management controls and help fulfill the department’s execution of the requirements of the act. DOD’s business enterprise architecture and process reengineering efforts are not fully achieving the intended outcomes described in statute. More specifically, with respect to the architecture, portfolio managers (managers) we surveyed reported that it was generally not effective in achieving its intended outcomes and that its usefulness in achieving benefits, such as reducing the number of applications, was limited. With respect to process reengineering, managers reported that these efforts were moderately effective at streamlining business processes, but less so in limiting the need to tailor commercial off-the-shelf systems. 
Portfolio managers cited a number of challenges impeding the usefulness and effectiveness of these two initiatives, such as the availability of training, lack of skilled staff, parochialism, and cultural resistance to change. DOD has various improvement efforts under way to address some of these challenges; however, additional work is needed and the managers provided some suggestions for closing the gap. More fully addressing the cited challenges would help increase the utility and effectiveness of these initiatives in driving greater operational efficiencies and savings. Appendix I provides additional details about our survey methodology. The fiscal year 2005 NDAA, as amended, requires DOD to develop a business enterprise architecture that covers all defense business systems and will be used as a guide for these systems. According to the act, the architecture is intended to help achieve the following outcomes:
- Enable DOD to comply with all applicable laws, including federal accounting, financial management, and reporting requirements.
- Guide, permit, and constrain the implementation of interoperable defense business systems.
- Enable DOD to routinely produce timely, accurate, and reliable business and financial information for management purposes.
- Facilitate the integration of budget, accounting, and program information and systems.
- Provide for the systematic measurement of performance, including the ability to produce timely, relevant, and reliable cost information.
The act also specifies that the department is not to obligate funds for defense business system programs that have a total cost in excess of $1 million unless the system’s approval authority certifies that the program complies with the business enterprise architecture and the certification is subsequently approved by the department’s Investment Review Board. Achieving the act’s intended outcomes would contribute to the department’s ability to use the architecture to realize important benefits that we and others have previously identified, such as cost savings or avoidance. For example, if the architecture effectively guides, permits, and constrains the implementation of interoperable systems, that would contribute to increased information sharing and improved system interoperability. As another example, using the architecture to produce timely and reliable business and financial information would contribute to improving management decisions associated with enhanced productivity and improved business and IT alignment, among other things. The majority of DOD portfolio managers we surveyed reported that the business enterprise architecture has not been effective in meeting its intended outcomes. More specifically, half of the managers surveyed reported that the business enterprise architecture was effective in enabling compliance with all applicable laws. However, fewer than 40 percent reported that the architecture was effective in helping to achieve the other outcomes called for by the fiscal year 2005 NDAA. Table 4 provides additional information on survey responses regarding the act’s specific requirements. Portfolio managers provided additional details to further explain their survey responses. Their comments included the following:
- The architecture is a standalone effort that does not drive comprehensive portfolio and business management through the various DOD components.
- The architecture is overwhelming to review and is not integrated with other activities that occur throughout the remainder of the year.
- The compliance requirements are not sufficiently defined to enable system interoperability.
Portfolio managers also reported that the usefulness of DOD’s business enterprise architecture in achieving various potential benefits is limited. For example, 75 percent reported limited achievement of improved change management and 74 percent reported limited achievement of streamlined end-to-end business processes. In addition, 71 percent reported limited achievement of benefits such as a reduced number of applications, improved business and IT alignment, enhanced productivity, and financial benefits such as cost savings or cost avoidance. Table 5 summarizes the portfolio managers’ survey responses. Although managers reported limited achievement of benefits, two provided specific examples of individual benefits associated with the business enterprise architecture. More specifically, one cited saving $10 million annually due to the establishment of a DOD-wide military housing system that has replaced a number of individual systems. A second reported $11.5 million in architecture-related savings through the retirement of 48 real property and financial management systems. In addition, officials from the Office of the DCMO provided specific examples of benefits that they stated can be attributed, at least in part, to the department’s business architecture. For example, according to these officials, two proposed new defense business system investments were not approved by DOD due, in part, to architecture reviews that revealed the requested capabilities were already available in existing systems. The surveyed DOD portfolio managers reported that their functional areas face many challenges in achieving the outcomes described in the NDAA for fiscal year 2005. The most frequently cited challenges reported were the usability of the compliance tool (79 percent), frequent changes to the architecture (75 percent), the availability of training (71 percent), the availability of skilled staff (71 percent), parochialism (67 percent), and cultural resistance to change (63 percent). Table 6 summarizes the survey responses regarding challenges to achieving the architecture’s intended outcomes. Officials from the Office of the DCMO, including the Lead Architect for the business enterprise architecture and the Chief of Portfolio Management, described various efforts under way to address selected challenges identified in our survey results. With regard to the top-ranked challenge (usability of DOD’s architecture compliance tool), the office has been working on a more robust replacement tool. As of April 2015, the office had moved architecture content and associated compliance information from its previous tool into its Integrated Business Framework-Data Alignment Portal. Further, the department plans to require all fiscal year 2016 compliance assessments to be completed in this portal environment. According to officials from the Office of the DCMO, this change will help ensure that architecture-related information is available in the same place, which will help support more sophisticated analysis of information about business systems. For example, by combining information about the architecture, compliance information, functional strategies, and organizational execution plans, the department could more easily conduct analyses that will help support portfolio management. 
According to these officials, examples of such analyses include the ability to identify the funds certified and approved for various business activities and the ability to identify systems that conduct similar system functions. With regard to the challenge associated with limited alignment between corporate and component architectures, the officials from the Office of the DCMO stated that they intend to develop an overarching (or federated) architecture that will capture content from, and allow governance across, the department (e.g., Army, Navy, and Air Force). In 2013, we recommended that DOD establish a plan for how it will address business enterprise architecture federation. The department’s improvement efforts address only selected reported challenges. However, portfolio managers offered a number of suggestions that relate to other identified challenges and that may help close gaps in these efforts. Key suggestions included:
- Improve tools: Four of 24 managers offered suggestions that relate to compliance tool usability. For example, one portfolio manager stated that functionality should be added to the architecture compliance tool to automatically create and build the architecture artifacts mentioned in compliance guidance using the information already included in the tool for each system. Another portfolio manager stated that there are no tools available that portfolio managers can use to analyze their portfolios relative to the architecture.
- Provide additional training: Two managers offered suggestions associated with additional training. For example, one manager reported that the compliance tool is not user friendly and little to no training was offered when programs were required to use it to assert compliance. As a result, this manager added that more training should be made available for using the compliance tool.
- Start the process earlier in a system’s life cycle: One manager suggested the architecture be addressed earlier in the acquisition life cycle, such as in the analysis of alternatives phase, in order to help assess whether existing solutions are already employed in other areas of the enterprise. If the architecture compliance process uncovers potential duplication or overlap, it might be easier to stop development of a duplicative system earlier in its life cycle rather than waiting until a business process is more reliant on a planned system that is closer to becoming operational.
- Establish priorities: One portfolio manager suggested that the department develop departmental business improvement and integration priorities and develop clearly understandable and verifiable compliance standards that will guide and constrain systems development to help achieve those priorities.
- Improve guidance: Two managers suggested that the department improve its guidance to clarify the documentation that systems developed prior to the existence of the business enterprise architecture are required to prepare to address the business enterprise architecture compliance requirement.
- Improve content: Seven managers offered suggestions associated with improving content. For example, one manager stated that the business enterprise architecture is large and cumbersome and incomplete in many areas.
Addressing the challenges cited by the portfolio managers could help increase the utility and effectiveness of the department’s business enterprise architecture in driving greater operational efficiencies and cost savings. 
The fiscal year 2005 NDAA, as amended, establishes expected outcomes for the department’s business process reengineering efforts. The act states that funds for covered business system programs cannot be certified and approved unless each program’s pre-certification authority has determined that, among other things, appropriate business process reengineering efforts have been undertaken to ensure that the business process supported by the program is, or will be, as streamlined and efficient as practicable and the need to tailor commercial off-the-shelf systems to (a) meet unique requirements, (b) incorporate unique requirements, or (c) incorporate unique interfaces has been eliminated or reduced to the maximum extent practicable. As we have previously reported, modifications to commercial off-the-shelf systems should be avoided to the extent practicable as they can be costly to implement. Achieving the intended outcomes of the fiscal year 2005 NDAA would increase the department’s ability to realize key benefits of business systems modernization. For example, reengineering business processes to be as streamlined as possible can result in increased efficiencies, a reduced number of interfaces, and decreased program costs. The department’s business process reengineering efforts have had mixed success in achieving their intended outcomes. Specifically, 63 percent of the portfolio managers we surveyed reported that the efforts were effective in helping to ensure that the business processes supported by the defense business systems they manage are (or will be) as streamlined and efficient as practicable. As an example, one manager reported this effort highlighted the strengths and weaknesses of the systems within their specific portfolio. Another reported that their portfolio has been reduced from 147 systems to 13 due, in part, to the business process reengineering efforts. However, the consensus among surveyed portfolio managers was that the department’s efforts were less effective in helping to limit tailoring of commercial off-the-shelf systems. Only 29 percent reported that DOD’s business process reengineering efforts were effective in eliminating or reducing the need to tailor commercial off-the-shelf systems. Tailoring might be required, for example, because existing policy and guidance might limit a system’s ability to conform to a specific approach for executing a business process that is already built into an individual commercial off-the-shelf system. Another reason given was that managers have limited knowledge about the commercial off-the-shelf products that are available via established enterprise licenses and this limited knowledge makes it difficult to conduct effective business process reengineering. Table 7 provides additional information on portfolio managers’ responses regarding the effectiveness of DOD’s business process reengineering efforts. Portfolio managers reported that business process reengineering has been useful in helping to achieve selected benefits. In particular, 70 percent reported that efforts have resulted in streamlined business processes. Sixty-seven percent reported that efforts have resulted in improved documentation of business needs, which is consistent with DOD’s focus on developing problem statements for new capabilities. Such problem statements reflect analysis of a perceived business problem, capability gap, or opportunity. 
According to officials from the Office of the DCMO, these problem statements help ensure that programs are aligned with DOD’s strategic needs and also assist the department’s efforts in identifying redundancies and duplication. However, only 29 percent of the portfolio managers surveyed reported that efforts to reduce program costs have been effective. Table 8 summarizes the portfolio managers’ survey responses. The surveyed DOD portfolio managers identified a range of challenges to fully achieving the business process reengineering outcomes described in the fiscal year 2005 NDAA. In particular, cultural resistance to change was the most frequently cited challenge (71 percent), followed by parochialism (i.e., focusing on one’s own sub-organization rather than having an enterprise-wide view), availability of skilled staff, and availability of training (all at 67 percent). The quality of business process reengineering compliance guidance, the compliance review process, and the timing of the reengineering relative to system development work were also reported as important challenges (all at 63 percent). Table 9 summarizes survey responses to questions about the challenges to business process reengineering. DOD has taken steps to improve its reengineering efforts that may, in part, address some of the challenges identified in our survey results. With regard to parochialism, the department is developing online tools that provide additional information to program managers, portfolio managers, pre-certification authorities, and the Defense Business Council. For example, the department’s problem statement portal is to be a repository for problem statement submissions and is to be available departmentwide. In addition, the department has developed its Integrated Business Framework-Data Alignment Portal, which is to provide, among other things, additional information about individual business systems, such as information about which systems execute specific business activities and system functions. Further, with respect to addressing the challenge associated with the business process reengineering compliance review process, the department has taken steps to help ensure improved accountability for a portion of certification and approval requests. In particular, according to officials from the Office of the DCMO, the DCMO allowed the military departments more autonomy and responsibility for reviewing their system portfolios during fiscal year 2015 certification and approval reviews. Nevertheless, as we have previously reported, and as discussed in appendix II, this process is not guided by specific criteria for elevating to the Defense Business Council those systems that might require additional oversight. Notwithstanding these improvement efforts, as reported in feedback by the military department portfolio managers, additional work is needed. These managers provided a number of suggestions to help address the identified challenges. Suggestions included:
- Improve business process reengineering training: Two portfolio managers offered suggestions that relate to improved training. For example, one manager stated that the department should establish minimum training standards.
- Improve business process reengineering guidance: Two managers offered suggestions associated with improved guidance. For example, one portfolio manager stated that sufficient guidance does not exist to describe meaningful business process models or how such models should be analyzed.
- Align business process reengineering with system development activities: One portfolio manager stated that the reengineering process should be more closely tied to acquisition milestones instead of being assessed on an annual basis.
According to GAO’s standards for internal control, agencies should ensure that there are adequate means of obtaining information from stakeholders that may have a significant impact on the agency achieving its goals. While we did not evaluate the effectiveness of these suggestions, they may be valuable for the Office of the DCMO to consider in its ongoing and future business process reengineering improvement efforts. More fully addressing the challenges cited by the portfolio managers would help the department achieve better outcomes, including limiting the tailoring of commercial off-the-shelf systems. DOD has made progress in improving its compliance with section 332 of the NDAA for fiscal year 2005, as amended. Specifically, the department has implemented 5 of the 16 recommendations that we have made since 2011 that are consistent with the requirements of the act and has partially implemented the remaining 11 recommendations. The recommendations not fully implemented relate to improving the department’s investment management processes and efforts to certify defense business systems, among other things. Fully implementing them will help improve DOD’s modernization management controls and fulfill the department’s execution of the act’s requirements. Collectively, DOD’s business enterprise architecture and business process reengineering efforts show mixed results in their effectiveness and usefulness in achieving the intended outcomes and benefits. Among other things, portfolio managers reported that the architecture does not enable DOD to produce reliable and timely information for decision-making purposes. Additionally, DOD’s reengineering efforts are effective in streamlining business processes, but not in reducing the tailoring of commercial software products. Portfolio managers reported that various challenges exist in achieving intended outcomes and benefits, including cultural resistance, parochialism, and a lack of skilled staff. DOD has various improvement efforts under way; however, gaps exist and portfolio managers provided suggestions on how to close some of them. Until these gaps are addressed, the department’s ability to achieve important outcomes and benefits will continue to be limited. To help ensure that the department can better achieve business process reengineering and enterprise architecture outcomes and benefits, we recommend that the Secretary of Defense utilize the results of our portfolio manager survey to determine additional actions that can improve the department’s management of its business process reengineering and enterprise architecture activities. We received written comments on a draft of this report from DOD’s Deputy Chief Management Officer (DCMO). The comments are reprinted in appendix III. In the comments, the DCMO concurred with our recommendation and stated that the department will use the results of our portfolio manager survey to help make improvements. The DCMO also described associated improvement efforts. 
For example, the DCMO stated that the department plans to restructure the Business Enterprise Architecture to focus more explicitly on the business processes being executed within the functional domains, which span all levels of the department. DOD officials also provided technical comments, which we have incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees; the Director, Office of Management and Budget; the Secretary of Defense; and other interested parties. This report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions on matters discussed in this report, please contact me at (202) 512-4456 or chac@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Our objectives were to (1) assess the actions taken by the Department of Defense to comply with section 332 of the National Defense Authorization Act (NDAA) for Fiscal Year 2005, as amended, and (2) determine the usefulness and effectiveness of DOD’s business enterprise architecture and business process reengineering processes. To address the first objective, we identified recommendations related to DOD’s business systems modernization efforts that we made in our annual reports from 2011 to 2014 (16 recommendations total) in response to the fiscal year 2005 NDAA’s requirements. Though we have made recommendations in this area prior to 2011, those recommendations have since been closed. We evaluated the department’s written responses and related documentation on steps completed to implement or partially implement the recommendations. Documentation we analyzed included guidance on business enterprise architecture and business process reengineering compliance, guidance on certifying and approving defense business systems, and documentation about the department’s problem statement development and review process. In addition, we interviewed officials from the Office of the Deputy Chief Management Officer and the Office of the Chief Information Officer, and observed a demonstration of the Office of the Deputy Chief Management Officer’s Integrated Business Framework-Data Alignment Portal tool to better understand the actions taken to address our recommendations. We also reviewed the department’s annual report to Congress, which was submitted on April 6, 2015, to identify gaps or inconsistencies with the implementation of the 16 recommendations. To address our second objective, we determined the intended outcomes of the business enterprise architecture and business process reengineering processes by analyzing the fiscal year 2005 NDAA. We also determined potential benefits associated with the processes by reviewing department guidance on the processes and related documentation. This includes DOD’s business enterprise architecture and business process reengineering guidance, Defense Business System Investment Management Process Guidance, the Business Case Analysis Template, DOD’s Business Enterprise Architecture 10.0 AV-1 Overview and Summary Information, the department’s Strategic Management Plan, and the Information Resource Management Strategic Plan for fiscal years 2014 and 2015. We also reviewed relevant GAO reports on business enterprise architecture and business process reengineering.
We then developed a structured data collection instrument (survey) to gather information on the usefulness of the two specified IT modernization management controls at DOD in achieving their intended outcomes and their effectiveness in achieving associated benefits. As part of this survey, we also developed questions to help us determine (1) challenges related to complying with the processes and (2) suggestions for achieving business enterprise architecture and business process reengineering outcomes, including suggestions for achieving these outcomes in a more cost-effective manner. Selected questions contained a rating scale from which managers chose the response consistent with their views on the topic areas described above. For example, we asked managers to rate the effectiveness of the business enterprise architecture and business process reengineering efforts using a scale that included choices such as neither effective nor ineffective and not applicable/no basis to judge. We also asked managers to identify the extent to which their portfolios had achieved benefits associated with business enterprise architecture and business process reengineering efforts using a scale that included choices such as little or no extent and not applicable/no basis to judge. We pre-tested the questions with various DOD officials, including officials from the Office of the Deputy Chief Management Officer, and with portfolio and program-level officials within the military departments. As a result, we determined that the military department portfolio managers were in the best position to answer our questions because they manage and have a perspective across an entire portfolio of defense business systems. Officials from DCMO’s Management, Policy, and Analysis Directorate provided us with a list of portfolio managers for the three military departments. We did not include portfolio managers for DOD entities outside of the military departments. We obtained responses from all surveyed portfolio managers (24 in total). Accordingly, these results are generalizable to that population. We analyzed and summarized the survey results to help determine the usefulness and effectiveness of DOD’s business process reengineering and enterprise architecture efforts, as well as related challenges and suggestions for improvement. In addition, though we collected examples of cost savings estimates from managers, and cite them in the report, we did not assess the cited cost savings estimates. We also met with managers of selected DOD business system programs and other knowledgeable DOD officials to discuss their perspectives on DOD’s business enterprise architecture and business process reengineering efforts. This included interviewing officials associated with defense business programs from each of the military departments and from across various business functions, including program managers, enterprise architects, and other technical and program operations officials. Further, when available, we reviewed documentation provided by DOD program managers to substantiate answers provided as part of our interviews. We also discussed the survey results with officials from the Office of the DCMO to obtain their perspectives on the results and discussed with these officials ongoing efforts to improve the department’s business process reengineering and enterprise architecture efforts.
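Summarizing closed-ended responses of this kind reduces to counting answer choices and expressing each count as a share of respondents. The following is a minimal sketch of that tabulation; the question identifier, answer values, and sample responses are hypothetical illustrations, not GAO's instrument or data, and the sketch is not the program GAO used.

# Minimal sketch of tabulating closed-ended survey responses into
# percentages. The question key, scale values, and sample answers are
# hypothetical illustrations.
from collections import Counter

responses = [
    # one dict per respondent; keys are question IDs (hypothetical)
    {"bpr_cost_reduction": "effective"},
    {"bpr_cost_reduction": "ineffective"},
    {"bpr_cost_reduction": "no basis to judge"},
]

def tabulate(responses, question):
    # Count each answer choice and express it as a percentage of all
    # respondents who answered the question.
    counts = Counter(r[question] for r in responses if question in r)
    total = sum(counts.values())
    return {choice: round(100 * n / total) for choice, n in counts.items()}

print(tabulate(responses, "bpr_cost_reduction"))
# e.g. {'effective': 33, 'ineffective': 33, 'no basis to judge': 33}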
The practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, difficulties in how a particular question is interpreted, in the sources of information that are available to respondents, or in how the survey data are analyzed can all introduce unwanted variability into survey results. To minimize such nonsampling errors, a social science survey specialist designed the questionnaire in collaboration with GAO staff with subject matter expertise. As stated earlier, the questionnaire was pre-tested to ensure that the questions were relevant, clearly stated, and easy to comprehend. When data from the survey were analyzed, an independent analyst reviewed the computer program used for the analysis of the survey data. Since this was a web-based survey, respondents entered their answers directly into the electronic questionnaire, thereby eliminating the need to have the data keyed into a database and avoiding data entry errors. We conducted this performance audit from October 2014 to July 2015 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Table 10 describes the status of open GAO recommendations associated with the fiscal year 2005 NDAA’s requirement that we annually assess the actions taken by the department to comply with its provisions. Since June 2011, we have made 16 recommendations to DOD regarding defense business systems. As of April 2015, the department had implemented 5 and partially implemented 11 recommendations. The table also identifies the category that we assigned to each recommendation to demonstrate its relationship to the requirements outlined in the act. In addition to the contact above, individuals making contributions to this report include Michael Holland (assistant director), Camille Chaires, Carl Barden, Susan Baker, Nabajyoti Barkakati, Wayne Emilien, Nancy Glover, James Houtz, Monica Perez-Nelson, Stuart Kaufman, Adam Vodraska, and Shawn Ward.
GAO designated DOD's multibillion-dollar business systems modernization program as high risk in 1995, and since then has provided a series of recommendations aimed at strengthening its institutional approach to modernizing its business systems investments. Section 332 of the NDAA for fiscal year 2005, as amended, requires the department to take specific actions consistent with GAO's prior recommendations and included a provision for GAO to review DOD's efforts. In addition, the Senate Armed Services Committee Report for the NDAA for fiscal year 2015 included a provision for GAO to evaluate the usefulness and effectiveness of DOD's business enterprise architecture and business process reengineering processes. This report addresses both of those provisions. In evaluating the department's compliance, GAO analyzed DOD's efforts to address open recommendations made in previous reviews. To evaluate the usefulness and effectiveness of the department's business enterprise architecture and business process reengineering processes, GAO surveyed the military department portfolio managers (24 in total) and interviewed officials. The response rate for the survey was 100 percent, making the results of the survey generalizable. The Department of Defense (DOD) has implemented 5 of the 16 recommendations made by GAO since June 2011 to address each of the overarching provisions for improving business systems management in the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005, as amended (NDAA) (10 U.S.C. § 2222). For example, it has implemented the recommendation to improve the data available for its business systems by making improvements to its repositories used for tracking information about the systems. Based on GAO's analysis, the department has partially implemented the remaining 11 recommendations. Implementing all recommended actions will improve DOD's modernization management controls and help fulfill the department's execution of the act's requirements. DOD's business enterprise architecture and process reengineering efforts are not fully achieving the intended outcomes described in statute. More specifically, portfolio managers reported through GAO's survey that the architecture was not effective in constraining system investments or enabling DOD to produce reliable and timely information for decision-making purposes, among other things. As a result, the architecture has produced limited value. Portfolio managers reported that the department's business process reengineering efforts were moderately effective in streamlining business processes, but much less so in limiting the tailoring of commercial off-the-shelf systems. They also reported that these efforts have been useful in realizing selected benefits, such as improved documentation of business needs. Managers GAO surveyed reported various challenges that impede the department's ability to fully achieve intended outcomes, such as cultural resistance to change and the lack of skilled staff. The department has work under way to address some of these challenges; however, gaps exist and the portfolio managers provided suggestions on how to close some of them. More fully addressing the challenges cited by the portfolio managers would help the department achieve better outcomes, including greater operational efficiencies and cost savings.
GAO recommends that DOD utilize the results of the survey to determine additional actions that can improve management of its business process reengineering and enterprise architecture activities. DOD concurred with the recommendation.
The Child Support Enforcement Program was established in 1975 to help strengthen families and reduce dependence on welfare by helping to ensure that the responsibility for supporting children was placed on parents. The states operate programs to locate noncustodial parents, establish paternity, obtain support orders, and enforce collection of those court-ordered support payments. The federal government—through OCSE—funds 66 percent of state administrative and operating costs, including costs for automated systems, and up to 90 percent of expenses associated with planning, designing, developing, installing, and/or enhancing automated systems. The Family Support Act of 1988 required that statewide systems be developed to track determination of paternity and child support collections. To address that requirement, OCSE developed regulations and guidance for conducting certification reviews. In 1993, OCSE published a certification guide, which addresses the functional requirements for child support enforcement systems. In general, the certification guide requires that the systems be operational, statewide, comprehensive, and effective and efficient. The guide also provides 53 specific requirements, which are grouped into the following categories: case initiation, location of parents, establishment of paternity, case management, enforcement, financial management, reporting, and security and privacy. (See appendix I for the system regulations and appendix II for descriptions of the guide’s specific requirements by category.) The guide was developed to help OCSE’s analysts ensure that certification reviews are conducted consistently—using the same criteria and standards for documentation. The analysts use the certification guide in conducting certification reviews, and states refer to it in preparing for their certification reviews. To ensure that states are meeting the functional requirements specified in the certification guide, OCSE also developed a certification questionnaire. The questionnaire provides a series of questions for analysts’ use in determining if the states’ systems address the functional requirements. The format and content of the questionnaire mirror those of the certification guide. In addition to the certification guide and questionnaire, OCSE has provided supplementary guidance to (1) aid in developing and testing specific areas such as financial requirements and (2) clarify and expand upon the requirements provided in the certification guide and questionnaire. OCSE uses its guidance to ensure that its staff consistently perform three types of certification reviews: functional, level 1, and level 2. A functional review occurs early in the development of a system before it is operational in a pilot site. During functional reviews, analysts evaluate parts of the system against the certification requirements to inform the state of the work that remains before its system can be certified. A level 1 review occurs when an automated system is installed and in operation in one or more pilot locations. (OCSE created this level of review in 1990 due to state requests for agency guidance prior to statewide implementation.) A level 2 review occurs when the system is considered by the state to be operational statewide. This review is required for final certification. Systems are granted full certification when they meet all functional requirements and conditional certification when the system needs only minor corrections that do not affect statewide operation.
According to OCSE analysts, states whose systems receive either type of level 2 certification are exempt from penalties for failing to meet system requirements imposed by the Family Support Act. The Family Support Act of 1988 set a deadline of October 1, 1995, for implementation and federal certification of such systems. However, when only a few states met the deadline, the Congress passed legislation extending it by 2 years, to October 1, 1997. Current law requires HHS to impose substantial financial penalties on states that did not have certified child support enforcement systems by October 1, 1997. The Congress is considering legislation, under House bill HR 3130 with Senate modifications SP 2286, to reduce those penalties. OCSE certified 17 states by the extended deadline and another 8 states since the deadline (as of March 31, 1998). In June 1997, we made several recommendations designed to strengthen OCSE’s oversight of child support enforcement systems. Specifically, we reported that because certification reviews are conducted toward the end of system development projects, they come too late for timely redirection of systems development without significant costs being incurred. Our objectives were to determine (1) whether HHS’ certification guidance addresses the system provisions in the Family Support Act of 1988 and implementing regulations, (2) whether HHS has consistently administered the certification process, and (3) the certification status of the state systems. Our work was done to determine whether OCSE’s certification guidance completely addresses the system requirements in the act and supporting regulations; it does not determine the overall adequacy of OCSE’s certification review process. This issue was addressed in our June 1997 report, in which we identified weaknesses in HHS’ oversight of these systems, including the timeliness of the certification reviews. To document the certification process, we obtained and analyzed OCSE’s guidance for certifying child support enforcement systems. To determine whether this guidance addresses the legal and regulatory requirements for child support enforcement systems, we compared the certification guide and questionnaire to the child support enforcement system regulations. We also analyzed whether the regulations addressed the system provisions of the Family Support Act of 1988. To determine whether OCSE consistently administered the certification process, we obtained and reviewed all certification reports issued as of March 31, 1998, and assessed how OCSE officials at headquarters and in one regional office plan, administer, and report the results of certification reviews. While we discussed this review process with these officials, we did not visit states to observe OCSE conducting certification reviews or conduct independent work to verify the information presented in OCSE’s certification reports. We performed our work at HHS headquarters in Washington, D.C., and at the HHS regional office in Atlanta, Georgia. We conducted our work between December 1997 and April 1998, in accordance with generally accepted government auditing standards. HHS provided written comments on a draft of this report. These comments are highlighted in the “Agency Comments” section of this report and are reprinted in appendix IV.
OCSE’s guidance for certification reviews generally complies with the system provisions in the Family Support Act of 1988 and the implementing regulations established by the Secretary of HHS. This guidance includes (1) the certification guide, which defines system functional requirements, and (2) the certification questionnaire, which, in essence, is the certification guide presented in a questionnaire format. Our analysis showed that the certification guide and questionnaire address key system elements of the law and implementing regulations. OCSE included references to system and program regulations in both the certification guide and questionnaire. We analyzed those references to determine whether the certification guidance addressed the regulations cited. The comparison in appendix III shows that each of the implementing regulations is addressed in OCSE’s certification objectives. For example, section 307.10(b)(1) of the regulation requires that child support enforcement systems maintain identifying information on individuals involved in child support cases. Seven different certification objectives in the certification guidance address this requirement. Two of those certification objectives, A-8 and D-4, demonstrate how the guide addresses this requirement; respectively, they state that “the system must accept and maintain identifying information on all case participants” and that “the system must update and maintain in the automated case record, all information, facts, events, and transactions necessary to describe a case and all actions taken in a case.” OCSE has been consistent in the way it administers certification reviews. Specifically, it used the same types of teams, the same guidance discussed earlier, and the same method for certification reviews. Although the scope and length of functional, level 1, and level 2 certification reviews varied, OCSE has been generally consistent in the way that it conducted each type of review. OCSE’s review process is as follows. It begins preparing for a certification review when the state notifies it that the state system is compliant and ready for certification. When OCSE receives the request, it requires the state to submit consistent documentation, which includes the completed certification questionnaire. After OCSE receives the documentation, it assigns a team to review the information and develop issues for discussion during the certification review. These teams consistently included at least one supervisor and two systems analysts. In some cases, regional analysts also participated in the documentation review. Following that review, certification teams are formed from staff who have similar backgrounds and expertise. For example, the certification team leaders are usually systems analysts from OCSE headquarters. These leaders are assisted by teams from the Administration for Children and Families regional offices responsible for the states being reviewed. The regional teams are usually a combination of staff with systems, policy, or audit expertise. In performing the certification reviews, these teams consistently use the certification questionnaire. OCSE used the same certification questionnaire for all of its level 1 and level 2 certification reviews except one; the first level 1 review was conducted before OCSE developed the certification questionnaire. OCSE analysts have also used a consistent method for conducting certification reviews.
Certification review teams spend approximately 1 week on-site conducting certification reviews. (Because functional reviews and level 1 reviews are more limited in scope, those reviews do not always take a full week.) During the certification review, the review team usually holds an entrance conference at the state office and allows the state staff to provide an overview of the child support enforcement system. The next few days are spent reviewing the state’s responses to the certification questionnaire and observing how adequately system screens and functions address the federal requirements. This review at the state office is often performed using a test version of the system—one that does not include actual cases. To supplement the information obtained at the state office, the certification team usually spends at least one day visiting local offices to observe the system in operation. At the local offices, the team interviews staff about their use of the system and the systems training they have received. In addition, they have the staff process sample cases to ensure that the system will handle them correctly, observe the staff processing actual cases, and review reports and documents generated by the system.
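The sample-case step amounts to comparing the financial results the state system produces against predetermined expected results. The sketch below illustrates that comparison only; the case identifiers and dollar amounts are hypothetical, and the sketch is not OCSE's actual test procedure.

# Sketch of checking sample-case financial results against predetermined
# expected values. All case identifiers and amounts are hypothetical.

expected = {   # case ID -> predetermined distribution amount
    "case-001": 250.00,
    "case-002": 125.50,
    "case-003": 300.00,
}
actual = {     # amounts the state system computed for the same cases
    "case-001": 250.00,
    "case-002": 125.50,
    "case-003": 275.00,   # a variance the state would need to explain
}

failures = {case: (want, actual.get(case))
            for case, want in expected.items()
            if actual.get(case) != want}

print(f"{len(expected) - len(failures)} of {len(expected)} sample cases "
      "matched predetermined results")
for case, (want, got) in failures.items():
    print(f"  {case}: expected {want}, system produced {got}")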
OCSE uses the certification guide and questionnaire in lieu of a manual to instruct its staff on how to conduct certification reviews and relies heavily upon on-the-job training to ensure that the reviews continue to be conducted consistently. In one instance, when a new certification staff member was added, that person was paired with experienced staff for the first two or three reviews to gain experience and learn to consistently cover the issues addressed by the certification teams. OCSE began reporting on the results of its certification reviews in 1994. In general, the format and process for preparing certification reports have been standardized. However, we noted that several reports contained inconsistencies, such as inaccurate descriptions of the criteria against which the systems’ financial components were measured. OCSE’s analysts used a standard template for preparing certification reports. As a result, we found the certification reports to be very similar in format and content. Even though the scope of the different reviews varied, the reports for functional, level 1, and level 2 reviews addressed similar topics. For example, they typically included a background section giving the history of the development of and funding for the system and describing the scope and methodology of the certification review. The reports also presented both certification findings and management findings. Certification findings are those system problems that must be addressed prior to system certification. Management findings are optional systems changes for management to consider. These findings often relate to the efficiency of the states’ systems. OCSE used a consistent process for reviewing the draft certification reports. According to an OCSE supervisor, division management reviewed all certification reports for consistency prior to their issuance. In addition, the office requested comments from states before publishing the final reports. According to OCSE officials, the nature, extent, and timeliness of the states’ comments varied and, when appropriate, states’ comments were incorporated into the final certification reports. While OCSE published many standardized certification reports on the results of its certification reviews, we noted three types of exceptions with the reporting process. First, OCSE certified two state systems in July and December 1997, respectively, by sending a brief letter to each state instead of issuing a complete standardized written report. The division director explained that standardized reports were not prepared for those systems because the certification team found no problems with them during the review. Second, according to officials, a report was not published for one state’s level 1 review because the level 2 review was requested before the earlier report was published. Finally, the reports for one level 1 and five functional reviews contained a qualifying statement not contained in the boilerplate language of the standardized reports. This qualifying statement said that, in order to even be conditionally certified, a system must process the financial component of all sample cases correctly, in accordance with predetermined results. In contrast, the other standardized reports’ paragraph on this subject did not contain this qualification. The division director told us that the boilerplate language in the standardized reports was appropriate and that the qualifying language in the six reports was incorrect. She said OCSE will conditionally certify a system even though it does not process all sample cases correctly, as long as the majority of the financial transactions are processed accurately and the state has reasonable explanations for any variances. She added that none of the systems was denied level 2 certification based on the qualifying statement, and that she was unaware of any other systems that were denied certification for failing to process all test cases correctly. The division director also noted that the problem was not widespread because only one lead analyst was responsible for the incorrect language. However, the review process did not prevent the incorrect language from being incorporated into the six published reports. Finally, she said that, until we brought this issue to her attention, she was not aware that any reports included this language and that she would act to ensure that such qualifying language did not appear in future reports. As of March 31, 1998, OCSE had either certified or conditionally certified 25 of 54 child support enforcement systems, representing approximately 38 percent of the reported average national caseload for fiscal year 1995. OCSE had conducted 67 certification reviews for the 54 state systems as of March 31, 1998. Some states have had several levels of review. Figure 1 shows the highest level of certification for the 54 child support enforcement systems as of March 31, 1998. As figure 2 indicates, 25 state systems were level 2 certified as of March 31, 1998. Figure 2 shows the status of level 2 certification for each state. Since the October 1997 deadline, OCSE’s certification review workload has increased substantially, as shown by figure 3. OCSE conducted 13 level 2 certification reviews in the first quarter of calendar year 1998, equaling the number of level 2 reviews conducted in 1997—the most done in any previous year. The first quarter of calendar year 1998 is the second quarter of OCSE’s fiscal year 1998. At the time of our review, OCSE was also documenting the results of and preparing certification reports for the certification reviews performed in 1998.
The systems director said she expects the rate of certification reviews to decline sharply because, as of March 31, 1998, only one request for a certification review was pending. OCSE’s certification guidance addresses the system requirements of the Family Support Act of 1988 and HHS’ implementing regulations, and OCSE has administered the certification process consistently across states. Further, while OCSE, in general, used a standardized format and process in preparing certification reports on the results of its reviews, these reports were not always consistent. We recommend that the Assistant Secretary of the Administration for Children and Families increase OCSE’s oversight of the reporting process to ensure that the reports consistently address criteria for evaluating the financial components of state systems. The Assistant Secretary for Children and Families agreed with our recommendation to increase OCSE’s oversight of the reporting process. She stated that OCSE would increase its oversight and consistency of reporting by subjecting the functional and level 1 reports to the same degree of management review being provided to the level 2 reports. We will provide copies of this report to the Assistant Secretary, Administration for Children and Families, Department of Health and Human Services; the Director of the Office of Management and Budget; and appropriate congressional committees. We will also make copies available to others upon request. Please contact me at (202) 512-6253 or by e-mail at willemssenj.aimd@gao.gov if you have any questions concerning this report. Major contributors are listed in appendix V. At a minimum, each state’s computerized support enforcement system established under the title IV-D state plan at § 302.85 of this chapter must:

(a) Be planned, designed, developed, installed, or enhanced in accordance with an initial and annually updated APD approved under § 307.15; and

(b) Control, account for, and monitor all the factors in the support collection and paternity determination processes under the state plan.
At a minimum this must include:

(1) Maintaining identifying information such as Social Security numbers, names, dates of birth, home addresses, and mailing addresses (including postal zip codes) on individuals against whom support obligations are sought to be established or enforced and on individuals to whom support obligations are owed, and other data as required by the Office;

(2) Periodically verifying the information on individuals referred to in paragraph (b)(1) of this section with federal, state, and local agencies, both intrastate and interstate;

(3) Maintaining data necessary to meet federal reporting requirements on a timely basis as prescribed by the Office;

(4) Maintaining information pertaining to (i) Delinquency and enforcement activities; (ii) Intrastate, interstate and federal location of absent parents; (iii) The establishment of paternity; and (iv) The establishment of support obligations;

(5) Collecting and distributing both intrastate and interstate support payments;

(6) Computing and distributing incentive payments to political subdivisions which share in the cost of funding the program and to other political subdivisions based on efficiency and effectiveness if the state has chosen to pay such incentives;

(7) Maintaining accounts receivable on all amounts owed, collected, and distributed;

(8) Maintaining costs of all services rendered, either directly or by interfacing with state financial management and expenditure information;

(9) Accepting electronic case referrals and update information from the state’s title IV-A program and using that information to identify and manage support enforcement cases;

(10) Transmitting information electronically to provide data to the state’s AFDC [Aid to Families With Dependent Children; now Temporary Assistance for Needy Families (TANF)] system so that the IV-A agency can determine (and report back to the IV-D system) whether a collection of support causes a change in eligibility for, or the amount of aid under, the AFDC program;

(11) Providing security to prevent unauthorized access to, or use of, the data in the system;

(12) Providing management information on all IV-D cases under the state plan from initial referral or application through collection and enforcement;

(13) Providing electronic data exchange with the state Medicaid system to provide for case referral and the transfer of the medical support information specified in 45 C.F.R.
303.30 and 303.31;

(14) Providing electronic data exchange with the state IV-F program for purposes of assuring that services are furnished in an integrated manner unless the requirement is otherwise met through the exchange conducted under paragraph (b)(9) of this section;

(15) Using automated processes to assist the state in meeting state plan requirements under part 302 of this chapter and Standards for program operations under part 303 of this chapter, including but not limited to: (i) The automated maintenance and monitoring of accurate records of support payments; (ii) Providing automated maintenance of case records for purposes of the management and tracking requirements in § 303.2 of this chapter; (iii) Providing title IV-D case workers with on-line access to automated sources of absent parent employer and wage information maintained by the state when available, by establishing an electronic link or by obtaining an extract of the data base and placing it on-line for access throughout the state; (iv) Providing locate capability by automatically referring cases electronically to locate sources within the state (such as state motor vehicle department, state department of revenue, and other state agencies), and to the Federal Parent Locator Service and utilizing electronic linkages to receive return locate information and place the information on-line to title IV-D case workers throughout the state; (v) Providing capability for electronic funds transfer for purposes of income withholding and interstate collections; (vi) Integrating all processing of interstate cases with the computerized support enforcement system, including the central registry; and

(16) Providing automated processes to enable the Office to monitor state operations and assess program performance through the audit conducted under section 452(a) of the Act.

The system must accept, maintain, and process information for non-AFDC services.
The system must automatically accept and process referrals from the State’s Title IV-A (AFDC) agency.
The system must accept and process referrals from the State’s Title IV-E (Foster Care) agency.
The system must automatically accept appropriate referrals from the State’s Title XIX (Medicaid) agency.
The system must automatically accept and process interstate referrals.
The system must uniquely identify and edit various case types.
The system must establish an automated case record for each application/referral.
The system must accept and maintain identifying information on all case participants.
The system must electronically interface with all appropriate sources to obtain and verify locate, asset and other information on the non-custodial/putative parent or custodial parent.
The system must automatically generate any needed documents.
The system must record, maintain, and track locate activities to ensure compliance with program standards.
The system must automatically resubmit cases to locate sources.
The system must automatically submit cases to the Federal Parent Locator Service (FPLS).
The system must automatically track, monitor, and report on the status of paternity establishment and support Federal regulations and State laws and procedures for establishing paternity.
The system must automatically record, track, and monitor information on obligations, and generate documents to establish support including medical support.
The system must accept, maintain, and process information concerning established support orders.
The system must accept, maintain, and process information concerning medical support services.
If the State chooses to have case prioritization procedures, the system must automatically support them.
The system must automatically direct cases to the appropriate case activity.
The system must automatically accept and process case updates and provide information to other programs on a timely basis.
The system must update and maintain in the automated case record all information, facts, events, and transactions necessary to describe a case and all actions taken in a case.
The system must perform routine case functions, keep the caseworker informed of significant case events, monitor case activity, provide case status information, and ensure timely case action.
The system must automatically support the review and adjustment of support obligations.
The system must allow for case closure.
The system must provide for management of all interstate cases.
The system must manage Responding-State case actions.
The system must manage Initiating-State case actions.
The system must automatically monitor compliance with support orders and initiate enforcement actions.
The system must support income withholding activities.
The system must automatically support Federal tax refund offset.
The system must automatically support State tax refund offset.
The system must automatically identify, initiate, and monitor enforcement actions using liens and bonds.
Where action is appropriate under State guidelines, the system must support Unemployment Compensation Intercept (UCI).
The system must be capable of forwarding arrearage information to credit reporting agencies.
The system must support enforcement through Internal Revenue Service full collection services when previous enforcement attempts have failed.
In cases where previous enforcement attempts have failed, the system must periodically re-initiate enforcement actions.
The system must support the enforcement of spousal support.
The system must automatically monitor compliance with and support the enforcement of medical insurance provisions contained within support orders.
With the exception of those cases with income withholding in force, the system must automatically bill cases with obligations.
The system must automatically process all payments received.
The system must support the acceptance and disbursement of payments using electronic funds transfer (EFT) technology.
The system’s accounting process must be uniform statewide, accept and maintain all financial information, and perform all calculations relevant to the IV-D program.
The system must distribute collections in accordance with 45 C.F.R. 302.32, 302.51, 302.52, 303.72, and 303.102.
The system must generate notices to AFDC and former AFDC recipients, continuing to receive IV-D services, about the amount of support collections; and must notify the IV-A agency about collections for AFDC recipients.
The system must maintain information required to prepare Federal reports.
The system must provide an automated daily on-line report/worklist to each caseworker to assist in case management and processing.
The system must generate reports required to ensure and maintain the accuracy of data and to summarize accounting activities.
The system must provide management reports for monitoring and evaluating employee, office/unit, and program performance.
The system must support the expeditious review and analysis of all data that is maintained, generated, and reported by the system.
The State must have policies and procedures to evaluate the system for risk on a periodic basis.
The system must be protected against unauthorized access to computer resources and data in order to reduce erroneous or fraudulent activities.
The State must have procedures in place for the retrieval, maintenance, and control of the application software.
The State must have procedures in place for the retrieval, maintenance, and control of program data.
The system hardware, software, documentation, and communications must be protected and back-ups must be available.

The certification guide is currently being revised to incorporate changes required by welfare reform. The new version will refer to Temporary Assistance for Needy Families, the program that replaced Aid to Families With Dependent Children.

Child Support Systems Certification Objectives (A-H)

Child Support Enforcement: Privatization: Challenges in Ensuring Accountability for Program Results (GAO/T-HEHS-98-22, Nov. 4, 1997).
Child Support Enforcement: Leadership Essential to Implementing Effective Automated Systems (GAO/T-AIMD-97-162, Sept. 10, 1997).
Child Support Enforcement: Strong Leadership Required to Maximize Benefits of Automated Systems (GAO/AIMD-97-72, June 30, 1997).
Child Support Enforcement: Early Results on Comparability of Privatized and Public Offices (GAO/HEHS-97-4, Dec. 16, 1996).
Child Support Enforcement: Reorienting Management Toward Achieving Better Program Results (GAO/HEHS/GGD-97-14, Oct. 25, 1996).
Child Support Enforcement: States’ Experience with Private Agencies’ Collection of Support Payments (GAO/HEHS-97-11, Oct. 23, 1996).
Child Support Enforcement: States and Localities Move to Privatized Services (GAO/HEHS-96-43FS, Nov. 20, 1995).
Child Support Enforcement: Opportunity to Reduce Federal and State Costs (GAO/T-HEHS-95-181, June 13, 1995).
Pursuant to a congressional request, GAO reviewed the Department of Health and Human Services' (HHS) certification process for state child support enforcement systems, its administration of the process, and the certification status of the state systems. GAO noted that: (1) certification guidance issued by the Office of Child Support Enforcement (OCSE) addresses the system requirements of the Family Support Act of 1988 and HHS' implementing regulations; (2) analysis of the certification process shows that OCSE has administered this process consistently across states since it began certifying child support enforcement systems in 1993; (3) it has used the same guidance for certification reviews and conducted reviews that were similar in scope and length for each level of certification; (4) while OCSE published many certification reports on the results of its certification reviews, its reporting was not always consistent; (5) as of March 31, 1998, OCSE had either certified or conditionally certified 25 of the 54 child support enforcement systems; and (6) OCSE had also conducted 13 additional reviews and was preparing certification reports for those systems.
Congress created FDIC in 1933 to restore and maintain public confidence in the nation’s banking system. The Financial Institutions Reform, Recovery, and Enforcement Act of 1989 sought to reform, recapitalize, and consolidate the federal deposit insurance system. The act created the Bank Insurance Fund and the Savings Association Insurance Fund, both of which are responsible for protecting insured bank and thrift depositors, respectively, from loss due to institutional failures. The act also created the FSLIC Resolution Fund to complete the affairs of the former FSLIC and liquidate the assets and liabilities transferred from the former Resolution Trust Corporation. It also designated FDIC as the administrator of these funds. As part of this function, FDIC has an examination and supervision program to monitor the safety of deposits held in member institutions. FDIC insures deposits in excess of $3.3 trillion for about 9,200 institutions. Together, the three funds have about $49.5 billion in assets. FDIC had a budget of about $1.1 billion for calendar year 2003 to support its activities in managing the three funds. For that year, it processed more than 2.6 million financial transactions. FDIC relies extensively on computerized systems to support its financial operations and store the sensitive information it collects. Its local and wide area networks interconnect these systems. To support its financial management functions, it relies on several financial systems to process and track financial transactions that include premiums paid by its member institutions and disbursements made to support operations. In addition, FDIC uses other systems that maintain personnel information for its employees, examination data for financial institutions, and legal information on closed institutions. At the time of our review, about 6,300 individuals were authorized to use FDIC’s systems. FDIC’s chief information officer (CIO) is the corporation’s key official for computer security. The CIO is responsible for establishing, implementing, and overseeing a corporatewide information security program. Information security is a critical consideration for any organization that depends on information systems and networks to carry out its mission or business. Without proper safeguards, there is enormous risk that individuals and groups with malicious intent may intrude into inadequately protected systems and use this access to obtain sensitive information, commit fraud, disrupt operations, or launch attacks against other computer systems and networks. We have reported information security as a governmentwide high-risk area since February 1997. Our previous reports, and those of agency inspectors general, describe persistent information security weaknesses that place a variety of federal operations, including those at FDIC, at risk of disruption, fraud, and inappropriate disclosure. Congress and the executive branch have taken actions to address the risks associated with persistent information security weaknesses. In December 2002, the Federal Information Security Management Act (FISMA), which is intended to strengthen information security, was enacted as Title III of the E-Government Act of 2002. In addition, the administration undertook important actions to improve security, such as integrating information security into the President’s Management Agenda Scorecard. Moreover, the Office of Management and Budget and the National Institute of Standards and Technology have issued security guidance to agencies.
The objective of our review was to assess the effectiveness of FDIC’s information system general controls, including the progress the corporation had made in correcting or mitigating weaknesses reported in our financial statement audits for calendar years 2001 and 2002. Our evaluation was based on (1) our Federal Information System Controls Audit Manual, which contains guidance for reviewing information system controls that affect the integrity, confidentiality, and availability of computerized data, and (2) our May 1998 report on security management best practices at leading organizations, which identifies key elements of an effective information security program. Specifically, we evaluated information system controls intended to

protect data and software from unauthorized access;

prevent the introduction of unauthorized changes to application and system software;

provide segregation of duties involving application programming, system programming, computer operations, information security, and quality assurance;

ensure recovery of computer process operations in case of disaster or other unexpected interruption; and

ensure an adequate information security program.

To evaluate these controls, we identified and reviewed pertinent FDIC security policies and procedures, and conducted tests and observations of controls in operation. In addition, we reviewed FDIC’s corrective actions taken to address vulnerabilities identified in our audits for calendar years 2001 and 2002. In 2001 and again in 2002, we reported computer security weaknesses at FDIC, including specific weaknesses related to mainframe and network security, physical access, application change control, and service continuity. These weaknesses placed critical corporation operations at risk of misuse and disruption. Although FDIC has made significant progress in correcting these weaknesses and has taken other steps to improve security, our testing in our calendar year 2003 audit identified additional control weaknesses. Specifically, FDIC had not adequately limited the access granted to all authorized users or completely secured access to its network. Further, FDIC had not yet completed a program to fully monitor user activities for unusual or suspicious patterns that could indicate unauthorized access. As a result, critical FDIC financial and sensitive personnel and bank examination information was at risk of unauthorized disclosure, disruption of operations, or loss of assets—possibly without detection. A key reason for FDIC’s weaknesses is that it had not yet fully implemented a comprehensive security management program. FDIC has made significant progress in correcting previously identified information security weaknesses. FDIC took action to address current and prior year weaknesses, including completing action on (1) the 22 weaknesses that remained open from our 2001 audit and (2) 28 of the 29 weaknesses from our 2002 audit. Specifically, FDIC

reduced user access to sensitive program libraries and critical financial data;

strengthened security over certain network platforms;

expanded its application software change control procedures;

developed and implemented disaster recovery plans for all its major systems and incorporated unannounced testing procedures into its service continuity process; and

enhanced system software change control processes.

In addition to responding to previously identified weaknesses, FDIC established several other computer controls to enhance its information security.
For example, it established procedures for securing new remote access and private network services. In addition, it strengthened security procedures over its system that handles large files submitted to FDIC by banking institutions. Further, FDIC initiated reviews of its network infrastructure as a precursor to establishing an ongoing program of tests and evaluations of its computer environment. A basic management control objective for any organization is to protect data supporting its critical operations from unauthorized access, which could lead to improper modification, disclosure, or deletion. Organizations can protect this critical information by granting employees the authority to read or modify only those programs and data that they need to perform their duties and by periodically reviewing access granted to ensure it is appropriate. Effective access controls should be designed to restrict access to computer programs and data and prevent and detect unauthorized access. These controls include assigning user access rights and permissions and ensuring that access remains appropriate on the basis of job responsibilities. Although FDIC restricted access to certain data and programs on its systems, we identified instances in which access to sensitive data and programs had not been sufficiently restricted. For example:

Many users had unnecessary access to production systems that include financial and bank information. These users were inadvertently granted access that could allow them to gain access to critical financial management information. This vulnerability was further heightened because an undetermined number of the users were system developers. These developers have detailed knowledge of the systems’ processing functions, knowledge that could allow them to improperly add, alter, or delete critical financial and sensitive information or programs—possibly without detection.

A large number of users had access that allowed them to read a powerful user identification (ID) and password used to transfer data among FDIC production computer systems. With this ID and password, the users could gain unauthorized access to financial and sensitive corporation information—possibly without detection.

FDIC did not adequately restrict users from viewing sensitive information. For example, all network users had unrestricted read access to sensitive bank information. Failure to adequately control access to this type of information could result in users gaining unauthorized access to privileged information.

Although FDIC has initiated actions to correct these weaknesses, the access vulnerabilities continue because the corporation has not yet fully established a process for reviewing the appropriateness of individual access privileges. Specifically, FDIC’s process did not include a comprehensive method for identifying and reviewing all access rights granted to any one user. Such reviews would have allowed FDIC to identify and correct inappropriate access. In response, FDIC said that it has since taken steps to restrict access to critical financial data and programs and related sensitive information. Further, the corporation stated that it enhanced its process for identifying and reviewing user access granted and was establishing a policy that will require quarterly reviews of users with broad access privileges.
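A comprehensive review of the kind FDIC lacked can be pictured as collecting every right granted to a user across systems and comparing the total against what the user's role justifies. The sketch below illustrates that comparison under hypothetical role names, users, and entitlements; it is not FDIC's process.

# Sketch of a periodic access review: flag rights a user holds beyond
# those approved for the user's role. Roles, users, and rights are
# hypothetical examples, not FDIC data.

approved_for_role = {
    "examiner":  {"read_bank_exams"},
    "developer": {"read_test_data", "write_test_code"},
}

granted = {  # user -> (role, rights actually granted across all systems)
    "jdoe":   ("developer", {"read_test_data", "write_test_code",
                             "write_production_data"}),
    "asmith": ("examiner",  {"read_bank_exams"}),
}

for user, (role, rights) in granted.items():
    excess = rights - approved_for_role[role]
    if excess:
        print(f"{user} ({role}) holds unapproved rights: {sorted(excess)}")

Run quarterly against actual account data, a comparison of this kind is the essence of the review policy FDIC said it was establishing.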
Networks are a series of interconnected devices and software that allow individuals to share data and computer programs. Because sensitive programs and data are stored on and transmitted along networks, effectively securing networks is essential for protecting computing resources and data from unauthorized access, manipulation, and use. Organizations can secure their networks, in part, by limiting the services that are available on the network and by installing and configuring network devices that permit authorized network service requests and deny unauthorized requests. Network devices include (1) firewalls designed to prevent unauthorized access into the network, (2) routers that filter and forward data along the network, (3) switches that filter and forward information among parts of a network, and (4) servers that host applications and data. Insecurely configured network services and devices can make a system vulnerable to internal or external threats, such as hackers, cyberterrorist groups, and denial-of-service attacks. Since networks provide the entry point for access to electronic information assets, failure to secure them increases the risk of the unauthorized use of sensitive data and systems. FDIC continued to take steps to secure its network through enhancements to its firewall and specific network platforms. Further, it established processes to strengthen the security of its local area network and password management. In addition, FDIC initiated a testing cycle to review the effectiveness of information system controls for specific network resources. Nonetheless, we identified weaknesses in the way that FDIC managed network services, controlled network connectivity, and maintained network software, as the following examples demonstrate.

A network service was not configured to restrict access to sensitive network resources. As a result, anyone—including contractors—with access to the FDIC network could obtain copies of or modify configuration files containing control information such as access control lists. With the ability to read, copy, or modify these files, an intruder could disable or disrupt network operations by taking control of sensitive and critical network resources.

Access connectivity to critical network resources was not adequately restricted. With connectivity to these key resources, an unauthorized user could attempt to exploit network vulnerabilities and gain control of key segments of the network.

Certain network connections to off-site locations were not adequately controlled. These connections are essential to securing operations of the network they serve. Ineffectively secured network connections could expose the internal network to unauthorized access and make it easier for this access to go undetected.

Further, FDIC did not consistently secure its network against known software vulnerabilities or minimize the operational impact of potential failure in a critical network device. Failure to address known vulnerabilities increases the risk of system compromise, such as unauthorized access to and manipulation of sensitive system data, disruption of services, and denial of service. In responding to our findings, FDIC’s CIO said that the corporation had taken steps to improve network security. Specifically, he said that FDIC had reconfigured network resources to restrict access, made software modifications to secure against known vulnerabilities, and established a process for assessing contractor connectivity requirements.
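The principle of limiting available network services can be checked with a simple reachability probe: confirm that each device answers only on an approved set of TCP ports. The sketch below illustrates the idea with hypothetical hostnames and port baselines; an actual review would rely on the organization's scanning and configuration management tools rather than this sketch.

# Sketch of checking that network devices expose only an approved set of
# TCP services. Hostnames and port baselines are hypothetical.
import socket

approved_ports = {"router-1.example.net":   {22},        # SSH only
                  "fileserver.example.net": {22, 445}}

candidate_ports = [21, 22, 23, 80, 443, 445]  # common services to probe

for host, allowed in approved_ports.items():
    for port in candidate_ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(1.0)
            try:
                s.connect((host, port))
            except OSError:
                continue  # closed, filtered, or unreachable: nothing to report
        if port not in allowed:
            print(f"{host}: port {port} is open but not in the baseline")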
A program to fully monitor user access would include routinely reviewing user access activity and investigating failed attempts to access sensitive data and resources, as well as unusual and suspicious patterns of successful access to sensitive data and resources. To effectively monitor user access, it is critical that logs of user activity be maintained for all critical processing activities. This includes collecting and monitoring activities on all critical systems, including mainframes, network servers, and routers (a simple sketch of such log review appears at the end of this discussion). A comprehensive monitoring program should include an intrusion-detection system (IDS) that monitors all key network resources and automatically logs unusual activity, provides necessary alerts, and terminates access. Further, to safeguard IDS operations and the access information it collects, the duties and responsibilities of staff assigned to the monitoring program should be adequately segregated.

Although FDIC has made progress in developing systems to identify unauthorized or suspicious access activities for both its mainframe and network systems, its program as implemented does not fully monitor for such activities. Weaknesses remain in FDIC’s monitoring program that could allow significant breaches of its computer security environment to go undetected. For example, the network IDS did not monitor all network traffic originating from certain locations. Further, certain network resources were not configured to monitor network traffic, which lessens the corporation’s ability to identify anomalies. In addition, responsibilities for operating the IDS were not appropriately segregated. For example, the corporation assigned the responsibilities for design, implementation, and maintenance to one individual. Assigning these functions to one person does not adequately ensure a system of checks and balances. Thus, FDIC is at risk that its program designed to monitor access activities for unusual or suspicious activities could be altered to allow unauthorized system actions that could go undetected.

In response to our findings, FDIC’s CIO said that the corporation had developed and begun implementation of a monitoring strategy for information technology security. This includes monitoring, event correlations, and incident identification and response. Further, the corporation plans to hire additional staff to allow it to segregate responsibilities for operating the IDS.

A key reason for FDIC’s continuing weaknesses in information system controls is that it has not yet fully established a comprehensive security management program to ensure that effective controls are established and maintained and that information security receives significant management attention. Our May 1998 study of security management best practices determined that a comprehensive information security management program is essential to ensuring that information system controls work effectively on a continuing basis. The recently enacted FISMA, consistent with our study, describes certain key elements of a comprehensive information security management program.
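As noted parenthetically above, here is a minimal sketch of the routine log review just described: counting failed access attempts per user and flagging accounts that exceed an alert threshold. The comma-separated log format, the threshold, and the sample records are all assumptions made for illustration; they do not represent FDIC's actual logs or alerting rules.

    from collections import Counter

    FAILURE_THRESHOLD = 5  # assumed number of failures that warrants investigation

    def review_access_log(lines):
        """Count failed access attempts per user and flag unusual activity."""
        failures = Counter()
        for line in lines:
            timestamp, user, outcome = line.strip().split(",")
            if outcome == "FAIL":
                failures[user] += 1
        return [user for user, count in failures.items() if count >= FAILURE_THRESHOLD]

    # Hypothetical log records in an assumed timestamp,user,outcome format.
    sample_log = [
        "2004-01-12T08:01,jdoe,FAIL",
        "2004-01-12T08:02,jdoe,FAIL",
        "2004-01-12T08:03,jdoe,FAIL",
        "2004-01-12T08:04,jdoe,FAIL",
        "2004-01-12T08:05,jdoe,FAIL",
        "2004-01-12T09:00,asmith,OK",
    ]

    for user in review_access_log(sample_log):
        print("Investigate repeated failed access attempts by:", user)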
The elements FISMA describes include

• a central security management structure to provide overall security policy and guidance along with oversight to ensure compliance with established policies and reviews of the effectiveness of the security environment;

• policies and procedures that (1) are based on risk assessments, (2) cost-effectively reduce risks, (3) ensure that information security is addressed throughout the life cycle of each system, and (4) ensure compliance with applicable requirements;

• security awareness training to inform personnel, including contractors and other users of information systems, about information security risks and the responsibilities of these individuals in complying with agency policies and procedures;

• periodic assessments of the risk and magnitude of the harm that could result from unauthorized access, use, disclosure, disruption, modification, or destruction of information and information systems; and

• periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices—to be performed with a frequency depending on risk, but no less than annually—that include testing of management, operational, and technical controls of major information systems.

During the past year, FDIC made substantial progress in establishing a comprehensive computer security management program. As discussed below, FDIC has (1) strengthened its central security management structure, (2) updated its security policies and procedures, (3) enhanced its security awareness training, and (4) developed a risk assessment program.

Central security management structure. FDIC strengthened accountability and authority for its previously established central security management function by appointing a permanent CIO who reports directly to the chairman. Further, FDIC realigned its security management function so that it reports directly to the CIO. Also, FDIC provided additional staff management resources to oversee its certification and accreditation process, system test and evaluation program, computer security incident response team activities, and firewall administration. Additionally, other staff resources were added to maintain and enhance security policies and procedures and provide oversight to the corporation’s newly established risk assessment and test and evaluation programs.

Security policies and procedures. FDIC enhanced its overall security policies covering network security, computer center access, mainframe controls, and security management. For example, it developed new policies covering controls for the use of wireless networks and requirements for patch management. In addition, it developed network security procedures to ensure compliance with policy on the use of default vendor accounts, restrictions on network services, and adherence to network password standards. Further, FDIC strengthened its policies on requesting and granting access to its computer center and provided updated requirements to address weaknesses in its configuration management procedures for financial system changes. Also, the corporation issued new policies on performing risk assessments of its security program and information systems.

Security awareness training. FDIC enhanced its current security awareness program for employees and contractors. Specifically, it updated the program to reflect FISMA security requirements, new policies and procedures developed to mitigate newly identified security risks, and discussions of internal threats.
The corporation also developed specialized security awareness training to address the needs of selected technical staff and enhanced its reporting process to ensure that all security awareness training is reported.

Risk assessments. Recently, FDIC developed a framework for assessing and managing risk on a continuing basis. This framework specifies (1) how the assessments should be initiated and conducted, (2) who should participate in the assessment, (3) how disagreements should be resolved, (4) what approvals are needed, and (5) how these assessments should be documented and maintained. At the completion of our audit, the corporation had performed risk assessments on all of its major systems.

Although FDIC has made substantial progress in each of the elements discussed above, it only recently established a program to test and evaluate its computer control environment, and this program was incomplete. Test and evaluation is a key element of an information security program that includes ongoing reviews, tests, and evaluations of information security to ensure that systems are in compliance with policies and procedures and to identify and correct weaknesses that may occur. FDIC began implementing this program during 2003. In October 2003, the corporation used a contractor to (1) develop a self-assessment process that includes annual general and application control reviews and (2) begin to perform ongoing quarterly tests of its systems. Still, FDIC’s test and evaluation program does not address all key areas. Specifically, the program does not include the following provisions.

All key computer resources supporting FDIC’s financial systems are routinely reviewed and tested, as appropriate. FISMA requires agencies to develop, document, and implement an agencywide information security program that includes routine security reviews of key computer resources supporting critical information systems, such as those supporting the corporation’s financial systems. These reviews should include those managed by other agencies or contractors. Although it initiated a program of tests and evaluations, this program did not yet address all key computer resources. For example, FDIC relies extensively on contractors to support its financial systems, and accordingly, provides them with connections and access to its internal network. Yet, during the past 2 years, the corporation has performed only limited security reviews of these contractor connections—a key computer resource. Further, FDIC did not schedule a review of these contractor connections in conjunction with its newly established self-assessment process. Without routine tests and evaluations of all key computer resources, including contractor connections, the corporation’s financial or sensitive bank information is at risk of unauthorized disclosure, disruption of operations, or loss of assets.

Information security weaknesses detected are analyzed for systemic solutions. To ensure that actions taken to correct identified security weaknesses are effective, security management best practices prescribe that procedures should include an assessment of systemic causes of related security weaknesses. Although FDIC has been very proactive in addressing the individual information security weaknesses identified, it currently lacks an ongoing process to collectively analyze related weaknesses for systemic problems that could adversely affect critical financial and bank information systems (a simple illustration of such collective analysis follows).
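A collective analysis need not be elaborate. As a hedged sketch under assumed data, the fragment below tallies audit findings by category so that recurring categories, such as user access privileges, stand out as candidates for systemic rather than one-off remediation. The finding list is hypothetical; real input would come from the corporation's own tracking system.

    from collections import Counter

    # Hypothetical finding categories drawn from successive reviews.
    findings = [
        "user access privileges",
        "user access privileges",
        "network software patching",
        "user access privileges",
        "contractor connectivity",
        "network software patching",
    ]

    print("Recurring weakness categories (candidates for systemic remediation):")
    for category, count in Counter(findings).most_common():
        if count > 1:
            print(" ", category, "-", count, "related findings")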
A comprehensive assessment of related weaknesses, such as those related to user access privileges (a recurring security weakness we have reported to FDIC), could assist in identifying systemic causes of security weaknesses and result in remediation efforts that could be more effective in addressing security vulnerabilities. Further, such an assessment provides an organization with a process of identifying emerging problems, assessing the effectiveness of current policies and awareness efforts, and determining the need for stepped-up education or new controls to address problem areas.

Corrective actions are independently tested. FISMA requires that agencies establish a process to document and track remedial actions taken to address security deficiencies in agency operations. This process includes requirements for independent testing to ensure that prescribed remediation actions are effective. Although FDIC has established a system for documenting and tracking corrective actions, it has not developed a specific process for independently testing or reviewing the appropriateness of the corrective actions taken.

Newly identified weaknesses or emerging security threats are incorporated into the test and evaluation process. To ensure an effective test and evaluation program, security management best practices prescribe that the scope of information system control tests include an evaluation of recently identified weaknesses and an assessment of emerging security threats to the computer control environment. FDIC’s self-assessment process includes provisions for updating its annual review of information system controls to evaluate control weaknesses that were identified in prior audits. However, the process does not specifically include provisions for weaknesses reported in other audits or those identified internally in connection with operational issues. Further, there are no procedures to ensure that emerging security threats are considered for inclusion in the self-assessment reviews. For example, in our current review at FDIC, we identified network security weaknesses that are linked to specific new security threats that had not been addressed by FDIC. To perform a comprehensive review of information system controls, it is critical that all previously identified weaknesses and emerging security threats be considered as part of the test and evaluation process to ensure that these weaknesses have been corrected. Incorporating these key areas into its test and evaluation program should allow FDIC to better identify and correct security problems, such as those identified in our 2003 audit.

FDIC has made significant progress in correcting the computer security weaknesses we previously identified and has taken other steps to improve security. However, we identified additional computer control weaknesses that place critical FDIC financial and sensitive personnel and bank examination information at risk of unauthorized disclosure, disruption of operations, or loss of assets. Specifically, FDIC had not adequately limited the access granted to all authorized users or completely secured access to its network. The risks created by these access weaknesses are heightened because FDIC has not yet completed a program to fully monitor access activity to identify and investigate unusual or suspicious access patterns that could indicate unauthorized access. Implementation of FDIC’s plan to correct these weaknesses is essential to establishing an effective information system control environment.
A key reason for FDIC’s continuing weaknesses in information system controls is that it has not yet fully established a comprehensive security management program to ensure that effective controls are established and maintained and that information security receives significant management attention. Although FDIC has made substantial progress during the past year toward establishing key elements of this program—including strengthening its security management structure, updating security policies and procedures, enhancing security awareness, and implementing a risk-assessment program—it only recently established a program to test and evaluate its computer control environment, and this program does not yet address all key areas. Specifically, the test and evaluation program does not include adequate provisions to ensure that (1) all key computer resources supporting FDIC’s financial environment are routinely reviewed and tested, (2) weaknesses detected are analyzed for systemic solutions, (3) corrective actions are independently tested, and (4) newly identified weaknesses or emerging security threats are incorporated into the test and evaluation process. Until FDIC takes steps to correct or mitigate its information system control weaknesses and fully implements a computer security management program, FDIC will have limited assurance that its financial and sensitive information is adequately protected.

To fully establish a comprehensive computer security management program, we recommend that the FDIC chairman instruct the CIO, as the corporation’s key official for computer security, to strengthen the testing and evaluation element of this program by ensuring that

• all key computer resources supporting FDIC’s financial environment are routinely reviewed and tested,

• weaknesses detected are analyzed for systemic solutions,

• corrective actions are independently tested, and

• newly identified weaknesses or emerging security threats are incorporated into the test and evaluation process.

We are also making recommendations in a separate report designed for “Limited Official Use Only.” These recommendations address actions needed to correct the specific information security weaknesses related to user access, network security, and monitoring access activities.

In providing written comments on a draft of this report, FDIC’s Chief Financial Officer (CFO) agreed with our recommendations. His comments are reprinted in appendix I of this report. Specifically, FDIC plans to correct the information system control weaknesses identified and strengthen the testing and evaluation element of its computer management program by December 31, 2004. According to the CFO, significant progress has already been made in addressing the identified weaknesses.

We are sending copies of this report to the Chairman and Ranking Minority Member of the Senate Committee on Banking, Housing, and Urban Affairs; the Chairman and Ranking Minority Member of the House Committee on Financial Services; members of the FDIC Audit Committee; officials in FDIC’s divisions of information resources management, administration, and finance; and the FDIC inspector general. We will also make copies available to other parties upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions regarding this report, please contact me at (202) 512-3317 or David W. Irvin, Assistant Director, at (214) 777-5716.
We can also be reached by e-mail at daceyr@gao.gov and irvind@gao.gov, respectively. Key contributors to this report are listed in appendix II. In addition to the person named above, Edward Alexander, Jr., Gerald Barnes, Nicole Carpenter, Lon Chin, Debra Conner, David Hayes, Jeffrey Knott, Leena Mathew, Duc Ngo, Rosanna Villa, Charles Vrabel, and Chris Warweg made key contributions to this report.
Effective controls over information systems are essential to ensuring the protection of financial and personnel information and the security and reliability of bank examination data maintained by the Federal Deposit Insurance Corporation (FDIC). As part of our calendar year 2003 financial statement audits of three FDIC Funds, GAO assessed the effectiveness of the corporation's general controls on its information systems. Our assessment included follow up on the progress that FDIC has made in correcting or mitigating computer security weaknesses identified in our audits for calendar years 2001 and 2002. FDIC has made significant progress in correcting prior year information security weaknesses. The corporation addressed almost all the computer security weaknesses we previously identified in our audits for calendar years 2001 and 2002. Nonetheless, testing in our calendar year 2003 audit identified additional computer control weaknesses in FDIC's information systems. These weaknesses place critical FDIC financial and sensitive examination information at risk of unauthorized disclosure, disruption of operations, or loss of assets. A key reason for FDIC's continuing weaknesses in information system controls is that it has not yet fully established a comprehensive security management program to ensure that effective controls are established and maintained and that information security receives significant management attention. The corporation only recently established a program to test and evaluate its computer control environment, and this program does not yet include adequate provisions to ensure that (1) all key computer resources supporting FDIC's financial environment are routinely reviewed and tested, (2) weaknesses detected are analyzed for systemic solutions, (3) corrective actions are independently tested, and (4) newly identified weaknesses or emerging security threats are incorporated into the test and evaluation process.
Most Americans use the U.S. mail service. Their opinions of that service may depend on such factors as the timeliness of mail delivery compared to their expectations, the time spent waiting in line for window service, the availability of vending machines that work, and the helpfulness of window clerks who are there to serve postal customers. Concerned about untimely mail service at post offices, the Chairman of the former Subcommittee on Information, Justice, Transportation, and Agriculture, House Committee on Government Operations, asked us to review selected aspects of the Postal Service’s efforts to measure and improve customer service. Subsequently, we agreed to also report the results of our review to the Chairman of the Subcommittee on the Postal Service, House Committee on Government Reform and Oversight. Under the Postal Reorganization Act of 1970, as amended, the Postal Service is an independent establishment in the executive branch operated as a basic and fundamental service provided to the people by the government and accountable to Congress. It is to provide “prompt, reliable, and efficient services” to patrons in all areas and render these services to all communities. Over the years, the Service has increasingly functioned as a businesslike entity competing with electronic communication and private businesses to provide communication and merchandise delivery services to residential and business customers. Today, the Postal Service’s customer base is diverse, and the quality of mail service has many dimensions, such as whether the time to deliver mail meets standards, access to service is convenient, and service is timely and courteous at post offices. Until recent years, the Postal Service’s measurement of service quality was internally focused. For example, it measured the time to process mail from points within the postal system; it did not measure the time from deposit of mail into the system to delivery of mail to customers. The Service’s customer orientation continues to change. Increasingly, it is focusing on customer needs, expectations, and perceptions. Its two principal measures of service quality are the Customer Satisfaction Index (CSI) and Business Customer Satisfaction Index (BCSI), which measure how residential and business customers, respectively, perceive the Postal Service’s performance; and the External First-Class Measurement System (EXFC), a quantitative measure of total delivery performance. Both measurements are done independently of the Postal Service: residential CSI by Opinion Research Corporation; business CSI by Gallup Organization, Inc.; and EXFC by Price Waterhouse and Co. The Vice President/Consumer Advocate, who reports directly to the Postmaster General, oversees the CSI survey process and the EXFC end-to-end measurement system and, until December 1994, was responsible for analyzing and disseminating CSI and EXFC results. The quality focus of the Postal Service leadership team is consistent with current national objectives of making government more responsive to the American public. These objectives, outlined recently by the National Performance Review (NPR) task force, emphasize the need to change the way government works by putting the customer first, giving the customer a voice, and setting customer service standards. The Postal Service is following the NPR guidance by recognizing the need to continuously improve customer service to remain competitive. 
Establishing and maintaining consistently high levels of delivery and retail service are critical to the Service’s success in an increasingly competitive communications marketplace. We previously reported that the Postal Service is losing profitable business to the private sector, especially in the parcel post and overnight mail markets. Private carriers dominate the profitable business-to-business segment because they offer cheaper and faster service and have left the Postal Service with the more dispersed and less profitable household market segment. Soon after taking office in July 1992, the Postmaster General outlined broad strategic goals that included improving service quality and empowering employees to act responsively when customer satisfaction is at stake. In 1992, the Postmaster General downsized the Postal Service but also reorganized it to focus greater attention on serving customers. The positions of vice president for customer services and vice president for processing, both reporting to the chief operating officer, were established at postal headquarters as part of the reorganization. The Postmaster General created 10 area customer service offices, which oversee 85 customer service districts, and 10 area processing offices, which oversee hundreds of mail processing plants in the field. In 1994, to better coordinate customer service and mail processing functions, the Postmaster General eliminated the two above-mentioned vice president positions at headquarters. He also combined the two manager positions in each area into a single vice president position responsible for both customer service and mail processing. Below the area level, district and plant managers continue to report separately to the area vice president. Postmasters report to customer service district managers, who report to an area vice president and oversee retail service operations of about 40,000 post offices, stations, and branches nationwide. Plant managers report directly to an area vice president and oversee about 500 air, bulk, and general mail processing plants. The Postal Service serves 125 million households 6 days a week. Its residential customer surveys are done every postal quarter to measure these customers’ perceptions of virtually all postal services. Its surveys cover the Postal Service’s 10 geographical areas; 85 service areas, which include customer service districts and processing plants (called “performance clusters”); and 170 metropolitan areas of the United States. Under the $10.9 million contract with Opinion Research Corporation, through December 1994, the Service had received residential CSI results for 13 postal quarters dating back to April 1991. The results show the perceptions of residential customers regarding the Service’s overall performance (question 1a of the CSI survey questionnaire) and other aspects of U.S. mail services (37 other questions) for the 3 months preceding the survey. (A copy of the CSI survey questionnaire is included as app. I.) Customers receiving the questionnaire are also asked to provide written comments on (1) especially good experiences with the Postal Service and (2) anything that the Service could do to increase customer satisfaction. We previously reported that the CSI surveys were designed to provide statistically valid measures of customer satisfaction with the quality of postal services.
The Service makes some residential CSI results publicly available each quarter showing overall customer satisfaction nationally and for the 170 metropolitan areas. The results are to be used internally to help track trends in customer satisfaction over time and by organizational component. The results also are to serve as a diagnostic tool for improving the quality of both delivery and retail services. In April 1993, the Postal Service awarded a 4-year $8.3 million contract to The Gallup Organization, Inc., to develop and operate for the Postal Service a Business Customer Satisfaction Index (BCSI) measurement system. Subsequent contract amendments increased the estimated total cost to about $11.9 million, and the Service had spent about $6.0 million under the contract through September 1995. The information from the system was to be used to measure the satisfaction of these customers and determine the allocation of resources needed to maximize customer satisfaction. The system was to produce valid and projectable data for each of 170 metropolitan areas and provide for aggregating the data for performance clusters and higher postal organizational levels. The Chairman of the former Subcommittee on Information, Justice, Transportation, and Agriculture, House Committee on Government Operations, requested that we review selected aspects of the Postal Service’s efforts to measure, report, and improve customer satisfaction. Subsequently, the Chairman of the Subcommittee on Postal Service, House Committee on Government Reform and Oversight, requested that we report the results of our review to that Subcommittee. Our objectives were to determine (1) to what extent the Postal Service disseminates residential and business CSI data internally and to Congress and (2) whether opportunities exist for the Postal Service to improve the dissemination of CSI data and their potential use by Congress and the Postal Service. We were also to determine (3) the steps that the Postal Service is taking to improve customer satisfaction using CSI and other data and (4) any additional steps the Service could take to improve customer service and thereby improve customer satisfaction. Because the Postal Service had not made public any BCSI data at the time of our review, our work on the dissemination and use of customer satisfaction data was limited to residential CSI data. We reviewed the Gallup contract, analyzed Postal Service data on the relative importance of residential and business mail to the Service’s overall mail volumes, and obtained explanations from Postal Service officials of the status of the Gallup contract and plans for dissemination of BCSI data. As part of our work on residential CSI data collection, dissemination, and use, we interviewed Postal Service headquarters officials, including the Chief Operating Officer and the Vice Presidents for Customer Services, Consumer Affairs, and Quality. We interviewed various other headquarters officials responsible for customer retail service to find out how residential CSI data were used and what improvement initiatives were under way. We reviewed various materials and documents, such as reports, video tapes, and briefing documents, used by postal headquarters and selected field offices to disseminate CSI data. 
We analyzed annual reports sent by the Postal Service to Congress during fiscal years 1991 through 1994 as part of our efforts to determine any opportunities for the Service to improve the sharing of information on customer satisfaction and its performance with Congress. Along with interviews with numerous headquarters and field postal officials, we reviewed CSI-related reports prepared by Opinion Research Corporation and the Postal Inspection Service to identify opportunities to improve the dissemination and potential use of CSI data. We used the results of all of these tasks to assess the planning and monitoring of initiatives undertaken to improve customer satisfaction by improving customer service. To determine the extent of improvement in customer satisfaction with postal services, we obtained CSI metropolitan area data on question 1a responses relating to customers’ perceptions of satisfaction with the “overall performance” of the Postal Service and 22 other questions on window, telephone, and related retail services. We also analyzed EXFC data to determine changes in on-time delivery rates since the measurements began in 1990 and compared EXFC data with CSI data for the nation and selected metropolitan areas. We estimated sampling errors for the CSI results in each of 170 metropolitan areas using CSI data provided by the Service for the first quarter of fiscal year 1992 through the third quarter of fiscal year 1994. Estimates of sampling errors for each area are based on simple random sampling assumptions. Sampling errors are not reported for specific metropolitan areas because the Postal Service did not provide us with the names of specific metropolitan areas associated with the data. We also analyzed national and metropolitan area CSI results from the first quarter of fiscal year 1991 through the fourth quarter of fiscal year 1994. We could not calculate sampling errors for national CSI results using the data provided. The Postal Service uses 95-percent confidence intervals as indicators of sampling errors for percentages. This means that the chances are about 95 out of 100 that the actual percentage falls within the confidence interval. For example, if 83 percent are reported to be satisfied with Postal Service performance and the sampling error is reported to be ±3 percentage points, the chances are about 95 out of 100 that the actual percentage satisfied is between 80 and 86 percent. To identify steps that the Service is taking to improve customer satisfaction using CSI and other data, we held numerous interviews at postal headquarters and selected postal field offices and, as appropriate, obtained supporting documentation. We visited six customer service districts—three having among the highest CSI ratings for retail services in the nation (Billings, MT; Central Plains in Omaha, NE; and Springfield, MA) and three having some of the lowest CSI scores for retail services (Chicago, IL; New York, NY; and San Francisco, CA). Our purpose in selecting a mix of high-scoring and low-scoring districts was to identify innovative service improvement initiatives in some districts with different levels of customer satisfaction. We interviewed the six area managers for customer services with responsibility for the six districts we visited. We held discussions with the district manager and his/her key assistants, the consumer affairs manager, representatives of employee groups, retail specialists, and selected postmasters and/or station managers.
We toured several post offices or stations in each district. Appendix II presents background data on the six districts. Our review followed generally accepted government auditing standards. Our visits to postal field offices were made between November 1993 and May 1994. For a significant period of time during our review, a portion of our work was delayed because we did not have access to CSI data needed to analyze customer satisfaction with retail services. We requested the data needed for this work in January 1993. In February 1994, the Postal Service provided the data we requested, and we were then able to complete our CSI data analysis. Our analysis of CSI data was done between April 1994 and October 1995. We received written comments on a draft of this report from the U.S. Postal Service. Summaries of these comments and our evaluation are included at the end of chapters 2 and 3. The comments are reprinted in appendix VI.

The Postal Service makes extensive internal dissemination of residential CSI data to track customer satisfaction and identify opportunities to improve customer service. However, it has not disseminated much of that data to Congress and recently further limited the data provided in required reports to Congress. Moreover, the Service has gathered business CSI information but has not disseminated it internally or to Congress. Along with improving CSI data dissemination, the Postal Service can potentially improve use of the residential data by giving field offices more guidance on analyzing certain CSI results. Postal Service officials believe that the use of residential CSI results can help improve organizational and employee commitment to customer satisfaction. Accordingly, the Consumer Advocate and other officials have taken numerous steps to make Postal Service leadership and employees aware of and help them use CSI results. Soon after residential CSI results became available each quarter, the Consumer Advocate provided the results to the Postmaster General and other top postal leaders. The results were disseminated widely within the organization in several ways:

• The Postmaster General highlighted CSI results in his quarterly report to the Board of Governors.

• The Consumer Advocate provided more detailed briefings for the Board of Governors on the survey results each quarter, highlighting customers’ ratings of the Service’s overall performance and identifying the highest and lowest ranked customer service districts. The Consumer Advocate also visited postal facilities in metropolitan areas having the highest rating for the quarter to commend local management and employees.

• The Service’s contractor, Opinion Research Corporation, provided quarterly written reports detailing CSI results for use by postal headquarters and each subordinate management level in Washington, DC, and field locations.

• The Postal Service made CSI results available electronically to executives, managers, and employees through automated information systems.

To further promote the use of CSI results, in November 1992, the Consumer Advocate established an Independent Service Analysis Group to assist offices and individuals throughout the Service in using CSI and other customer service data, such as EXFC and customer complaint data. The group made various analyses and issued reports of CSI results periodically and on demand for postal leadership, including the Board of Governors, and postal managers at all levels.
Each quarter the group identified the top 10 and bottom 10 metropolitan areas of the total 170 metropolitan areas. The group also made comparisons each quarter to show whether and to what extent each performance cluster’s CSI ratings differed from (1) the current median rating for all clusters and (2) the cluster’s rating for the same quarter of the previous year. The Service made available to all management levels, through an automated information system, the results of these comparisons for those performance clusters and CSI questions having significant differences from the prior year. Along with data analysis, the group provided instructions to field offices on how to use CSI results for changing internal processes that caused customer dissatisfaction. The manager and other members of the group had visited all 10 area offices and numerous district offices to assist with CSI data analysis. The group also developed and furnished video tapes on how to analyze and use CSI data. At the time of our review, Postal Service headquarters had not prescribed specific procedures and methods for area, district, and processing plants to use in disseminating and using CSI results. The three area offices and six districts that we visited used a variety of means to provide CSI results to managers, supervisors, and employees. For example, the Central Plains District in Omaha, NE, published Newsbreak, a monthly information newsletter for its employees that periodically included CSI information. The Pacific Area Office in San Francisco, CA, had prepared a 12-minute videotape to be shown to employees in which area managers provided an overview of the CSI process and employees then spoke about their roles in improving customer service. District managers said that they discussed CSI results in regular meetings with postmasters and employees. Local managers said that they found narrative comments included in CSI reports to be especially useful because the managers could identify customer concerns about service at specific post offices. For example, customer service managers in the New York District said that every quarter they analyze hundreds of narrative comments made by customers to better understand customers’ perceptions of service quality in specific locations. Although the Postal Service has generated valuable information from its residential customer surveys since 1991, it has provided relatively little of the information to Congress. The Postal Service also shares very little information on residential customer satisfaction with the public. In recent years, the Service has reduced the amount of residential customer satisfaction data and other performance data provided in required annual comprehensive statements to Congress. The Service publicly discloses the responses to only 1 CSI question on the Service’s overall performance for the nation and 170 metropolitan areas each quarter. The Service included data on only this one CSI question in comprehensive statements to Congress that are required annually by the 1970 act (39 U.S.C. 2401). Provisions of the act calling for comprehensive statements specify several categories of data to be included in each statement. The statements are to cover the Service’s plans, policies, and procedures for carrying out its universal mail service mission, which is stated in section 101 of the act.
The statements are to also describe postal operations generally and include data on the speed and reliability of service provided for the various classes of mail and types of mail service, mail volume, productivity, trends in postal operations, and analyses of the impact of internal and external factors on the Postal Service. The act also says that the Senate and House postal oversight Committees of Congress are to hold hearings on the Postal Service in March each year. As a stakeholder in the delivery of U.S. mail, Congress has not only described in the 1970 act certain information it needs from the Service but also has often expressed interest in particular aspects of the Service’s performance and customer satisfaction. In 1994, this interest was manifested in congressional hearings and public statements of several Members of Congress regarding the quality of service in the Washington, DC, and Chicago, IL, areas. Typically, Members of Congress have responded after news accounts and complaints from the public regarding the quality of mail service in particular areas of the country. The Postal Service reacted to concerns about the quality of delivery service in testimony before Congress several times in 1994. More recently, the House oversight committee has held numerous general oversight hearings with a view toward determining the need for any changes in the 1970 act. The Postal Service has submitted the required comprehensive statements, and the oversight Committees have held hearings on the Postal Service’s operations and services. However, the usefulness of the comprehensive statements has been limited by the scant CSI and other performance data included in them, particularly in the statements for fiscal years 1993 and 1994. Our review of the last four statements (fiscal years 1991 through 1994) showed that the Postal Service has reduced the amount of information on customer satisfaction and delivery performance provided to Congress. The 1992 statement tabulated on-time delivery rates from EXFC for overnight, 2-day, and 3-day delivery service for each quarter and the year. The 1992 statement also included CSI results for each quarter, with the results broken out into several categories of customer responses, i.e., excellent only; excellent and very good combined; good only; excellent, very good, and good combined; fair only; poor only; and fair and poor combined. Following the Service’s efforts to downsize and reduce overhead costs, the 1993 statement to Congress provided less CSI information than the 1992 statement. The 1993 statement showed only one rating for the year, which included the sum of all excellent, very good, and good responses for only the fourth quarter of 1993. For comparison purposes, the sum of these same responses was included for the fourth quarter of 1992. The 1994 statement had even less CSI information than the 1993 report. The 1994 statement showed one rating (excellent, very good, and good combined), and it was for the fourth quarter of fiscal year 1994 only. No comparison was presented of the 1994 rating with the same quarter of 1993. The Postal Service provided more CSI information to the general public than was provided in its required comprehensive statements to Congress. In quarterly publications available to the public, the Service included CSI ratings for 170 metropolitan areas. In addition, the publications also provided the EXFC ratings for 96 of these same areas. 
Further, the Service recently added new measurements of its on-time delivery performance for the mailings of those publishers and mailers participating in the external second-class and third-class mail measurement systems. The Service is also participating with some foreign postal administrations in the development of processes for measuring international air letters. No data were provided in any of the four comprehensive statements that we reviewed relating to these new measurements. The Service could use data that it already routinely releases to the public and that it previously provided to Congress for more informative analysis and presentations in required comprehensive statements to Congress. Use of these CSI results and other performance indicators could provide a more complete picture of customer satisfaction and the Service’s performance. No additional data-gathering would be necessary. CSI and EXFC results are available to the public in quarterly publications prepared by the Consumer Advocate, but the results are not compared, analyzed, and summarized for potential use by Congress. The following illustrates some ways in which the Postal Service might present additional CSI and EXFC results to Congress. Although the Service compiles data similar to EXFC for other mail classes, we did not include data on these classes because the Service does not release any of that data to the public. Congress could use comparisons of CSI results for several years to review the progress the Postal Service has made in improving perceptions of its overall performance. The Postal Service’s annual report for 1994 did not present comparative CSI results for the current year and preceding year. In future annual reports and other communications with Congress, the Service could use available CSI data that are currently or were previously made public to compare customer satisfaction, by postal quarter, for the current and previous years. (See fig. 2.1.) The Postal Service could also use tables or graphics to show how the results differ among the 10 postal area offices and 85 performance clusters. CSI ratings differ significantly among geographic areas of the country, and with better disclosure of the ratings, Congress could compare and contrast customer satisfaction levels and changes not only nationally but also for various regions and cities. For example, the Postal Service could show how the results compare among the 10 postal areas for selected periods. (See fig. 2.2.) CSI results could also be presented for the Service’s performance clusters. For its internal use, the Postal Service arrays CSI results by cluster and compares current and preceding year results. Thus, the Service would need little additional effort to include such information in required annual reports to Congress. Although the Service does not currently present CSI data to the public in this manner, it does publish CSI ratings every quarter on smaller geographic areas—the 170 metropolitan areas. Presenting the data for larger geographic areas would not appear to pose any greater threat to the Service’s competitive interests than disclosing the data by metropolitan areas, as is done now. CSI and EXFC are the Service’s two most widely publicized externally developed performance measures. Although the two systems are very different and so are the results, the Service presented quarterly ratings from the two systems, both for metropolitan areas and nationally, side by side in its publicly disseminated documents. 
The Service could present CSI and EXFC data in a way that helps ensure that the extent of and reasons for differences in customer perception of the Postal Service’s performance and measurement of delivery performance are understood. The Service has tended to focus much of its attention on publicizing customers’ perceptions of the Service’s overall performance and improving these perceptions. While the publicized ratings disclose perceptions of overall performance, various data compiled by the Service show that customers are most concerned about the length of time that the Service takes to deliver mail and the consistency of mail delivery service. However, the Service recognizes that a number of factors, not necessarily related to mail delivery, influence customer perception of the Service’s overall performance, as measured by CSI surveys. To illustrate how performance perceptions can differ from delivery measurements, in postal quarter 4, 1994, EXFC scores for 28 of 93 metropolitan areas varied by more than 5 percentage points from CSI scores for the same areas. For 16 of the 28 areas, the CSI ratings were higher than the EXFC ratings. For 8 of these same 28 metropolitan areas, the difference between the EXFC and CSI ratings was 10 or more percentage points. For five of these eight, the CSI ratings were higher than the EXFC ratings. (See fig. 2.3) For the remaining three metropolitan areas (Chicago, IL; Queens, NY; and Washington, DC), the CSI ratings were lower than the EXFC ratings in postal quarter 4, 1994. Of the 170 metropolitan areas for which CSI results are reported, these 3 areas were the 3 lowest ranked in quarter 4. In all three of these metropolitan areas, the CSI ratings had dropped below the EXFC rating during 1994. The EXFC results improved in all three areas during the year, but the CSI ratings for all three were still well below their EXFC ratings at year’s end. (See figs. 2.4-2.6.) As indicated above, for some areas, customer perception of overall performance remained relatively low after overnight First-Class delivery performance improved. Because of such differences, it is important that the results of the CSI and EXFC systems be presented in a way that makes clear that they represent two very different measures of the Service’s performance. A Postal Service manager responsible for CSI data analysis said that the Service does not expect a “match” between CSI and EXFC results, either overall or by specific CSI question or service attribute. He said there is a tenuous relationship between internally driven commitments, e.g., overnight delivery service, and customer expectations. He said that responses to CSI question 1a are affected by many factors, such as the Service’s announcements of postage rate increases and adverse publicity in the news media, and that on-time delivery explains about one-half of the question 1a results. After reviewing a draft of this report, the Vice President/Consumer Advocate agreed with the manager’s comments summarized above. She said, however, that the impact of adverse publicity on CSI ratings is short-lived and does not affect the ratings in every metropolitan area across the nation. We believe that the above comments by the manager and Vice President are all good reasons why CSI and other performance data need to be analyzed and presented to Congress in a way that provides as complete and accurate a picture as possible of both the Service’s delivery performance and customer perceptions of its performance. 
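In practice, a combined analysis of the two measures could start with something as simple as flagging areas where they diverge sharply. The following sketch does this with hypothetical paired scores; the 5-point gap mirrors the threshold used in the comparison above, and none of the figures are the report's actual area-level data.

    # Hypothetical paired ratings for illustration only.
    area_scores = {
        "Area A": {"csi": 85, "exfc": 91},
        "Area B": {"csi": 88, "exfc": 86},
        "Area C": {"csi": 74, "exfc": 89},
    }

    THRESHOLD = 5  # percentage points, the gap highlighted in the report

    for area, scores in area_scores.items():
        gap = scores["csi"] - scores["exfc"]
        if abs(gap) > THRESHOLD:
            direction = "above" if gap > 0 else "below"
            print(area + ": CSI is", abs(gap), "points", direction,
                  "EXFC - examine local factors")

Areas flagged this way are candidates for the kind of explanation discussed above: customer perceptions shaped by factors other than measured delivery performance.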
In presenting CSI results to Congress, the Postal Service could break out the results to show more clearly how satisfied, in terms of specific response categories, customers are with the Service’s overall performance. As stated previously, in some earlier reports to Congress such breakouts were provided. Customers can rate the Service’s performance as excellent, very good, good, fair, and poor. The Postal Service disclosed only what it termed “favorable” responses when presenting CSI information in quarterly pamphlets to the public. These responses were the sum of excellent, very good, and good responses for each of 170 metropolitan areas and the nation. If the Service disclosed the percentage of customers giving the higher ratings of excellent and very good combined, as it does for its internal reports, Congress would have a more precise picture of how customers’ perceptions have changed over time. For most of its internal purposes, including calculation of performance incentive payments for executives and employees (discussed in ch. 3), the Service uses excellent and very good ratings only. Most of its management reports use these two ratings alone or in combination with the overall favorable rating, which includes not only excellent and very good responses but also good responses. Disclosing excellent and very good responses is important because, as figure 2.7 shows, good responses alone accounted for almost one-third of all responses. Combining good responses with excellent and very good responses and reporting only the totals can mask shifts in customer satisfaction, and the changes can sometimes be statistically significant. This masking occurs when customers either increase their ratings from good to the higher ratings or drop ratings from excellent and very good to good. For example, for the San Francisco, CA, metropolitan area, the favorable rating increased by 4 percentage points, from 82 to 86, which the Service considers to be statistically significant, between quarter 4, 1993, and quarter 4, 1994. However, excellent and very good responses did not have a significant change, decreasing by 1 percentage point. Conversely, in 14 metropolitan areas, the percentage of excellent and very good responses together increased even though the overall favorable rating went down between these same 2 quarters. Service officials include good responses in the publicized CSI ratings and disclose that excellent, very good, and good responses are combined. However, the sum of excellent and very good ratings alone, or together with the good ratings, nationally and for each of the 10 postal areas and 170 metropolitan areas, would give Congress a more complete picture of the status of and changes in customer satisfaction. In addition, such presentation would be more consistent with the Postal Service’s internal reporting. This further breakout of customer responses would not appear to jeopardize the Service’s commercial interests because the favorable ratings are already available to the public. Along with providing more comprehensive CSI information to Congress, the Postal Service could potentially improve the usefulness internally of residential customer survey results. CSI results for some questions have lower levels of precision. While the Service has taken steps to inform CSI users of the level of precision, written reports distributed by the CSI contractor do not fully disclose the level of precision. 
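The masking effect described above is easy to see in miniature. In the sketch below, the category percentages are hypothetical, constructed only to match the San Francisco pattern described (the combined favorable rating rises 4 points while excellent and very good responses slip 1 point); they are not actual survey results.

    # Hypothetical response distributions (percent of respondents).
    q4_1993 = {"excellent": 25, "very good": 30, "good": 27, "fair": 12, "poor": 6}
    q4_1994 = {"excellent": 24, "very good": 30, "good": 32, "fair": 9, "poor": 5}

    def favorable(d):
        """Excellent + very good + good: the publicized rating."""
        return d["excellent"] + d["very good"] + d["good"]

    def top_two(d):
        """Excellent + very good: the rating used in most internal reports."""
        return d["excellent"] + d["very good"]

    print("Favorable:", favorable(q4_1993), "->", favorable(q4_1994))        # 82 -> 86
    print("Excellent/very good:", top_two(q4_1993), "->", top_two(q4_1994))  # 55 -> 54

The combined rating improves even as the share of customers giving the two highest ratings declines, which is exactly the shift that reporting only the combined figure can conceal.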
Quarterly reports distributed by the CSI contractor to the Postal Service contain extensive CSI data, including various satisfaction percentages for 38 questions detailed to 3-digit ZIP code areas and metropolitan areas. The reports indicate which of these percentages are significantly higher or lower than the national results. However, the usefulness of some of the percentages is limited by the lower levels of precision. While the Service had provided some guidance to users on this data limitation, users of the contractor-generated reports may not be sufficiently aware of how to use those percentages having lower levels of precision. The reports give little guidance on how to interpret CSI data that are not as precise as some other CSI data in the same reports. The Postal Service requires that the contractor survey enough customers in each metropolitan area each quarter to provide a margin of “sampling error” of no more than ±3 percentage points for responses to question 1a on the CSI survey questionnaire. Our review of response rates for the 11 postal quarters from postal quarter 1, 1992, through postal quarter 3, 1994, showed that the Postal Service obtained the number of responses necessary to provide this required precision each quarter. However, CSI results for some questions sometimes have sampling errors that are much greater than ±3 percentage points. This occurs because customers who have not used a particular service are instructed not to answer questions about that service. Because of this, the number of responses for such questions, 22 of 38 in total, can be much lower, and the sampling error much higher, than for question 1a. For example, customers who do not have any of their household’s mail delivered to a post office box are instructed not to answer the two questions on this service. In one metropolitan area, satisfaction with delivery of mail to the correct post office box was 69 percent in one quarter, and it was 79 percent in another metropolitan area for the same quarter. However, both ratings were based on a small number of responses: 29 responses in one metropolitan area and 36 responses in the other. The small number of responses results in a large margin of sampling error. (Details on sampling errors for metropolitan areas are included in app. III.) Postal managers and employees are expected to use all CSI reports for tracking progress in improving customer service and analyzing processes at post offices and processing plants that affect customer satisfaction. However, high rates of sampling error for some questions can result in inappropriate inferences if users of CSI results compare one metropolitan area with another. To illustrate how this can happen, we will use the above example involving post office box services. After sampling errors are considered in this case, the rating is between 53 and 85 percent (69 percent ± 16 percentage points) for one area and between 63 and 95 percent (79 percent ± 16 percentage points) for the other area. Thus, an inference that the 79 rating indicates higher satisfaction than the 69 rating may be inappropriate because the difference could be due to sampling error. Consumer Advocate officials we contacted were aware that CSI results for some questions do not have the same degree of precision as the overall ratings that are published quarterly using question 1a responses.
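The arithmetic behind these margins is straightforward under the simple random sampling assumptions noted in our methodology. The sketch below computes the 95-percent sampling error for a percentage; the first call uses the post office box figures from the text, and the second uses an assumed round sample of 600 responses at 83 percent for contrast. The results roughly match the figures cited above, with any small differences reflecting rounding and the Service's own design assumptions.

    import math

    def margin_of_error(p, n, z=1.96):
        """95-percent sampling error, in percentage points, for a proportion
        p observed in a simple random sample of n responses."""
        return z * math.sqrt(p * (1 - p) / n) * 100

    # Post office box example from the text: 69 percent from 29 responses.
    print(round(margin_of_error(0.69, 29), 1), "points on 69 percent from 29 responses")

    # An assumed sample of 600 responses at 83 percent, for contrast.
    print(round(margin_of_error(0.83, 600), 1), "points on 83 percent from 600 responses")

The roughly 17-point error from 29 responses, against about 3 points from several hundred, shows why comparisons between areas on the low-response questions can be misleading.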
Consumer Advocate officials we contacted were aware that CSI results for some questions do not have the same degree of precision as the overall ratings that are published quarterly using question 1a responses. They said that they inform field personnel of this imprecision during all briefings and that the Corporate Information System shows whether changes in responses to each question are statistically significant. They believe that even the less precise results for some CSI questions can still be useful to managers, particularly when combined with other data and when the results are compared over several postal quarters or several years. We agree that the CSI results can be useful but also believe that users of written CSI reports could be given additional information on how sampling errors limit the precision of some CSI data. Because such errors can vary depending on the question, users might benefit from additional information in the reports on the sampling error for each CSI question. This might help ensure more informed comparisons between metropolitan areas and over time. Although the Service disseminated CSI results for residential customers, these customers represent a small portion of the Service’s mail volume. Under a contract with the Service, The Gallup Organization, Inc., gathers data on the satisfaction of business customers, who account for the vast majority of the mail. However, the Service has neither disseminated the results internally nor shared any of them with Congress. The Service is concerned that the data might be made public if brought into the organization, thereby jeopardizing its competitive interests. It is important that the Service regularly obtain, analyze, and use BCSI results, which are currently being generated for each postal quarter, because business customers account for most of the Service’s mail volume. Further, Postal Service studies show that these customers are more likely than residential customers to switch to another supplier of mail services. Most mail is a consequence of business transactions, including billings, payments, advertising, and other economically motivated activities. Studies by the Postal Service show that 59 percent of the total mail stream originates outside households and is sent to households. An additional 30 percent originates outside households and is sent to nonhouseholds. Overall, the flow of mail from nonhousehold customers accounts for almost 90 percent of the total mail. (See fig. 2.8.) Similarly, almost 90 percent of the Service’s revenue, totaling $58 billion in fiscal year 1994, is generated by business mailers. The level of satisfaction of these mailers and their continued use of the U.S. mail service are critical to the Postal Service’s financial viability. Residential customers have limited alternatives for letter mail service because the Private Express Statutes require that nonurgent letters be delivered by the Postal Service. But business customers often can and do use private carriers, both because urgent deliveries are exempted from the Private Express Statutes and because many of their mailings are not what the Service defines as letters for Private Express Statutes purposes. The Service began planning for quarterly surveys of business customer satisfaction in 1991 and awarded a contract for the surveys to Gallup in April 1993. Under the contract, the contractor is required to provide information for use by Postal Service management from the national level down to the performance cluster level.
Management was to use the information to determine the allocation of resources needed to maximize customer satisfaction, to better understand customer expectations, and to improve service. During the first year of the contract, Gallup was to conduct research and do a pilot test of the measurement system. The results of the first BCSI survey were available to the Postal Service in April 1994. Through June 1995, Gallup said it had completed five quarterly surveys, but the Service had not obtained and disseminated data from any of the surveys for use in improving customer satisfaction. According to the former Consumer Advocate, who left the Service in December 1994, some top Postal Service officials were briefed at least once by the contractor on the BCSI results in 1994. Subsequently, the contractor was directed not to provide the quarterly BCSI results to the Postal Service. She said that the Gallup surveys had produced “rich” data on business customer satisfaction, which she believed postal management could use to improve customer service. Under the contract, Gallup was to provide quarterly BCSI reports to all 170 performance clusters, 10 area offices, and Service headquarters. Postal Service officials confirmed that these required reports were not being submitted by Gallup. Service officials believe that the indiscriminate sharing of customer satisfaction information with Congress and the public can be self-defeating. We agree that the Service’s commercial interests could be harmed by indiscriminate sharing where there is competition for its services. While most of the Service’s mail volume is protected by the Private Express Statutes, private companies compete with the Service to provide certain mail services, particularly expedited and parcel delivery, to residential and business customers. Competitors might use customer satisfaction and other performance data, which the Service had gathered to improve its service and become more competitive, to gain a competitive advantage over the Service. In particular, competitors might benefit from detailed CSI results showing specific aspects of service and particular geographic areas of the country where the Postal Service is not meeting customer expectations. Competitors could target their market development efforts to these areas. The 1970 act, however, allows the Service to withhold from the public data that are of a commercial nature. We previously reported that the Service’s decision to publicly report only overall residential CSI ratings, but not ratings of specific services, is permitted under the Postal Reorganization Act of 1970. Although the Postal Service is covered by the Freedom of Information Act (FOIA) (5 U.S.C. 552), the 1970 act does not require it to disclose information of a commercial nature, including trade secrets, that under good business practice would not be disclosed publicly. In our earlier report, we also discussed the practices followed by some of the Service’s competitors in measuring and reporting customer satisfaction. The four competitors we contacted (Federal Express, United Parcel Service, Associated Mail and Parcel Centers, and Tribune Alternative Delivery) used independent contractors to assess customer satisfaction. Their goal was to achieve 100 percent customer satisfaction for the specialized services they offered. In the highly competitive overnight and parcel business, only a customer rating of “completely satisfied” (very good and excellent) was acceptable to private carriers.
These companies did not release detailed information on their customer satisfaction surveys because they believed the information would be used to the advantage of their competitors. One of the companies, Federal Express, was a 1990 Malcolm Baldrige National Quality Award winner. It released overall information on customer satisfaction, and we reported in 1992 that 94 percent of Federal Express’ customers contacted were completely satisfied with the overall service. Unlike its competitors, however, the Postal Service was established by the 1970 act as an executive branch establishment accountable to Congress. Since that time, it has increasingly functioned as a businesslike entity, competing with new technologies and private companies to deliver certain services in a competitive marketplace. However, the dual objectives of operating as both a public and a private entity require that the Service balance the protection of its competitive interests against the potential value to the Service and Congress of using the data, with appropriate safeguards, to help assess and improve customer service. We did not review the business customer satisfaction data compiled by Gallup, but such data would seem useful to Postal Service management for improving customer service and to Congress for its oversight activities. Both the Service and Congress have found similar data on residential customer satisfaction useful. Because the Service receives no data on business customer satisfaction from the contractor, the Service and its customers are denied the potential benefits of using the data to improve customer service. The data are accumulated by Gallup at considerable cost (projected at $11.9 million over 4 years). Meanwhile, as discussed in chapter 3, the Service is developing plans and has begun numerous national and local service improvement initiatives. This is being done without analyzing and using BCSI results to identify the aspects of service and the geographic areas indicating the greatest business customer dissatisfaction. Disseminating BCSI results to postal management and providing some of the results, with appropriate safeguards, to Congress would appear to require little additional cost. The limited release of some customer satisfaction data to Congress, such as was done earlier for residential customers, would not seem to harm the Service’s commercial interests. Given its experience with the external distribution of residential data, it appears that the Service may be able to similarly share some BCSI results with Congress. This could perhaps be done by presenting indicators of business customer satisfaction nationally, for broad customer groupings, and/or for larger geographic areas. Where it is determined that release of the data might hurt the Service, the data could be made available to appropriate congressional Committees using appropriate safeguards, such as an agreement with the Committee not to release the data to the public because release could jeopardize the Service’s commercial interests. Congressional oversight Committees for the Postal Service could use BCSI and other performance data for a variety of purposes, including ongoing postal oversight activities and consideration of changes to laws and regulations affecting the Service’s performance. In this regard, the Postmaster General has said that changes are needed in aspects of the legislative and regulatory framework that constrain the Service in pricing its services, introducing new products, and managing its employees.
Further, legislative proposals are now pending in Congress to fundamentally change the Service’s governmental status and its responsibilities relating to universal mail service. Concerning its plans to distribute BCSI data, Service officials told us that an officer-level team had been chartered to develop an overall plan and recommendation for the deployment of both internal and external measurements used to determine customer satisfaction and improve customer service. As part of this effort, the Postal Service said that it would determine the most effective disposition of the BCSI. No date was provided for completing the effort, and it was not clear whether any BCSI results would be disseminated within the Service or provided to Congress. Consequently, the Service did not have a plan and timetable for using business customer satisfaction data internally, disseminating the data as appropriate to congressional oversight Committees, and designing safeguards to protect against the improper release of sensitive data to competitors. In commenting on a draft of this report, the Vice President and Consumer Advocate said that plans were under way to identify which managers will receive BCSI results and how frequently they will be distributed. She also said that the competitive nature of this information requires that great care be exercised in making information dissemination decisions. The Postal Service’s residential customer surveys have provided valuable data for potential use within the Service and by Congress. Postal leadership, particularly the Consumer Advocate, has made significant progress in disseminating CSI results within the Service and promoting greater CSI use. However, opportunities exist to improve the dissemination and use of CSI results and mail delivery performance data. Perhaps most important is the need for postal management and Congress to have some indication of how business customers perceive the quality of mail service because these customers represent the vast majority of the Service’s business. Postal leadership is developing plans, allocating resources, and implementing new service initiatives without analyzing and using business satisfaction data. Without using both business and residential customer satisfaction data, the Service risks directing management attention and resources disproportionately at improving processes that are not of the greatest importance to overall customer satisfaction and, ultimately, the Service’s success. It is not reasonable to expect the Service to disclose data on specific aspects of its services or particular geographic areas that could jeopardize its competitive interests. However, the Service’s divergent roles as both a public entity and a business dictate that it strike a better balance between (1) obtaining and using business customer satisfaction data to identify and respond to areas of customer dissatisfaction and providing information to Congress and (2) protecting business interests by safeguarding against the release of sensitive, proprietary information. More general measures of business and residential customer satisfaction, along with other performance data such as EXFC ratings, can provide useful yardsticks for Congress to use in its routine oversight activities and consideration of legislative proposals that relate to the Postal Service. Such data are already compiled and, with appropriate safeguards, could be included in the reports that the Service files annually with oversight and appropriation Committees.
Because of the Postal Service’s investment in national CSI surveys and the importance of the results to its overall service improvement efforts, it is important that field offices know both the strengths and limitations of CSI results and are committed to using the results as intended by postal headquarters. CSI reports generated by the contractor could more fully disclose the level of precision and the usefulness of the data. Users of the reports need to be aware of the different levels of precision to avoid reaching unwarranted conclusions, particularly when comparing one organizational component or geographic area with another or making comparisons over time. To improve the dissemination and potential use of CSI data, we recommend that the Postmaster General take the following steps:
• As part of the Service’s ongoing performance data study, establish a plan, safeguards, and timetable for distributing business customer satisfaction results to all appropriate management levels of the Postal Service for use in improving customer service.
• Consult with appropriate congressional committees to determine what analyses of business and residential CSI data and other available performance data would be useful to them and, using appropriate safeguards, provide those data in periodic reports and other communications to Congress for its use.
• Provide more information in the detailed internal CSI reports provided by the contractor, including the sampling errors for CSI questions and explanations to users on the level of precision and usefulness of customer data on certain questions.
The Service said that our report presents a generally accurate picture of what the Service was doing to measure customer satisfaction and delivery performance at the time of our review and how the Service could better use the resulting data to improve service quality. The Service did not comment specifically on each of our recommendations but rather said that it had recently undertaken an extensive, systematic review of all of its functions and processes. The Service said that based on criteria and guidelines of the Malcolm Baldrige National Quality Award (which we discuss in chapter 3), the review helped the Service to identify and organize actions necessary to make its goal of a customer-driven, customer-oriented, and customer-responsive organization a reality. The assessment led to a program the Service calls CustomerPerfect! The Service said that our recommendations and concerns regarding information sharing will be addressed in that program. For example, the Service said that a team headed by the Consumer Advocate was studying the dissemination of customer satisfaction results for both business and residential customers. The team was to develop a strategy for making survey results available to the public and Congress. The Service expressed some minor disagreement regarding our comparison of CSI and EXFC ratings. The Service inferred that we anticipated more of a connection between the CSI and EXFC ratings than the data actually show and explained that the two ratings are different. We agree that CSI and EXFC are very different measures, and we had no preconceived notion that the results of the CSI surveys would “match” or closely relate to measures of on-time delivery performance under EXFC.
Rather, our purpose was to show how the ratings differ and emphasize how the Service could explain the extent of and reasons for differences between customer perception of the Service’s performance, as measured in CSI, and its delivery performance, as measured in EXFC. To help clarify this point, we made some changes to the section of the report comparing CSI and EXFC ratings. The Service also said that its customer satisfaction and delivery systems are useful as measurement tools but less useful for diagnostic purposes. The Service wants to improve the systems to provide more precise and immediate feedback for making real-time improvements in service quality. The Service has had many innovative and promising service improvement efforts under way but, as discussed in chapter 2, it has not used business customer satisfaction data as part of these efforts. Available residential CSI data show that the level of customer satisfaction remained about the same in 1994 as in 1991. Despite its many initiatives, most of which began in 1990 and 1991, the Service has not implemented at the performance cluster level a corporatewide strategy for improving customer satisfaction and focusing all field offices on the most significant underlying cause of customer dissatisfaction, namely, unreliable mail delivery. The Service’s performance incentive plans for managers and employees did not include available measures of delivery service reliability, such as EXFC data. Further, postal headquarters did not follow a systematic approach for (1) monitoring field offices’ progress in improvement initiatives and (2) sharing information among all field offices on the best customer service practices. Using residential CSI data and other performance indicators, the Postal Service has begun numerous efforts to improve customer service and reduce significant levels of customer dissatisfaction. These efforts have included (1) encouraging, training, and rewarding employees to better serve customers; and (2) setting new policies and standards to focus greater corporatewide attention on customer service. In line with national customer service goals, field offices have pursued a broad array of efforts to improve service. The Service’s employee-related efforts are designed to better recognize the importance of postal employees to substantial and sustained improvements in customer satisfaction. The influence of postal employees on customer satisfaction can be seen in CSI results. Customers indicate in the residential CSI surveys whether they have visited, phoned, or complained to their local post offices during the quarter covered by each survey. Analyses of CSI data done by the Office of Consumer Advocate and the Postal Inspection Service show that the more contact a customer had with the Postal Service, the lower the customer rated its overall performance. For example, the Inspection Service reported in December 1994 that customers who had not gone into a post office in the 3 months preceding the CSI survey gave the Service higher marks than those who had visited a post office during the same period. The Postal Service, acting unilaterally in some cases and in cooperation with the unions in other cases, has taken numerous steps to stimulate greater employee commitment to serving customers. Its initiatives since 1990 include the following.
• Employee opinion surveys (EOS) are done annually to obtain and track over time the views of employees at all organizational levels regarding their jobs, the organization, customers, and other topics.
• New incentive payment plans were implemented to base employee rewards, in part, on the Postal Service’s performance in improving customer satisfaction and meeting financial goals.
• As part of a corporatewide “Quality First!” initiative, training was provided to thousands of headquarters and field office employees to promote a total quality approach uniformly throughout the Postal Service. Subsequently, in lieu of the Quality First! initiative, the Service adopted the Malcolm Baldrige National Quality Award criteria for encouraging, facilitating, and measuring the Postal Service’s commitment to improving customer satisfaction.
• Courtesy and sales training was provided for both craft and management employees involved in retail operations to improve the skills and motivation that lead to greater customer satisfaction and revenue generation.
The Service has also adopted new corporatewide retail policies and standards and, at the time of our review, it was acquiring new retail equipment and facilities to improve responsiveness to customer needs and expectations. These efforts, begun or expanded since 1990, include the following.
• The Service increased customer convenience by expanding an “Easy Stamp” program to allow customers to buy stamps by phone, mail, a computer network, and automatic teller machines.
• Debit and credit cards are accepted for the purchase of stamps and certain other transactions.
• A national standard of “Service in Five Minutes or Less” was adopted to reduce customers’ waiting time in line at some post offices, and post office hours were adjusted to better meet customer needs.
• New retail service equipment was acquired, such as stamp vending machines, terminals for use by window clerks, and postage validation machines.
• A new postal retail store design was approved for post offices to be constructed or renovated, and a new design for lobbies in some existing post offices was also approved. Both efforts were intended to provide interior appearances more appealing to customers than traditional post offices and make services more readily accessible.
• Customer advisory councils were formed to solicit customer feedback from local community residents.
• “Customer care centers” were established to help improve receipt and handling of customer calls, and a 1-800 toll-free service was set up for resolving the complaints of customers who continued to have problems after contacting local post offices.
Our review, and two related reviews done by the Postal Inspection Service, revealed a wide array of imaginative and potentially successful efforts under way at some field offices. Following are brief summaries of some efforts that were under way in one or more of the six districts that we visited. The Postmaster of Springfield, MA, undertook a box call project at his main post office to enable post office box customers to call a central phone number to determine whether they had mail in their boxes. The basic premise for the project was that customers would appreciate saving a trip to the post office if they did not have mail. Post office employees enter into a hand-held device the numbers of boxes that have no mail. These numbers are then downloaded into a personal computer. Customers access the computer by telephone, key in their box numbers, and are told whether they have mail.
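The box call project is, in essence, a simple lookup service. The following sketch is a hypothetical reconstruction, not the actual Springfield system; the data structure and function names are ours. It shows the basic flow: box numbers recorded as empty are downloaded from the hand-held device, and a caller keying in a box number is told whether mail is waiting.

    # Hypothetical sketch of the box call lookup, not the actual
    # Springfield, MA, system. Numbers of boxes scanned as having no
    # mail are loaded into a set for fast lookup.
    empty_boxes = {"1042", "1077", "2310"}  # downloaded from the hand-held device

    def box_message(box_number):
        # Return the message given to a caller who keys in a box number.
        if box_number in empty_boxes:
            return f"No mail is waiting in box {box_number} today."
        return f"You have mail in box {box_number}."

    print(box_message("1077"))  # No mail is waiting in box 1077 today.
    print(box_message("1500"))  # You have mail in box 1500.

Storing only the empty boxes keeps the daily download small on days when most boxes receive mail.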
The New York, NY, District initiated a program to place parcel lockers in apartment buildings. When a tenant who is not at home receives a parcel, the parcel is put into one of the lockers and the key to the locker is put into the tenant’s mail box. After the tenant inserts the key into the parcel box to retrieve the package, the key has to be removed with the carrier’s master key. One program objective was to serve senior citizens and disabled persons who may have difficulty getting to the post office or carrying heavy packages back to their homes. The program also allows other customers who are not home during the day to obtain parcels conveniently without delay, and without having to wait in post office window service lines. Criteria the Service used in deciding whether to install a parcel locker inside an apartment house included the number of undelivered packages on a carrier route and how far the post office was from the apartment building. As of August 1993, the District had more than 2,000 parcel boxes in 172 high-rise buildings in Manhattan. Recognizing the need to improve CSI scores for telephone assistance, the New York, NY, District established a Mystery Caller Program in 1993 under its Customer Services Support group. The aim was to ensure quicker response time, improve the accuracy of answers to customers, and improve clerk courtesy. During each 2-week period, 4 calls were placed to each of the district’s 117 stations. To achieve satisfactory performance, a station must score at least 26 of 32 possible points, about 80 percent. The program was nicknamed the “100 Club” to encourage stations to respond enthusiastically and to seek a perfect score. Any station receiving a perfect score of 100 for six consecutive rating periods receives a bronze plaque recognizing that accomplishment. Silver, gold, and platinum plaques are presented when a station receives a perfect score for two, three, and four consecutive rating periods, respectively. Plaques are displayed in the station lobby. On the other hand, a station receiving a score of 80 or less must submit a plan to improve its rating. Retail units can accept change of address notices from customers who move so that First-Class mail arriving at their former addresses is forwarded to them for a period of 12 months. The Postal Service employs computerized forwarding sites (CFS) for keeping track of forwarding addresses and applying new address labels to mail to be forwarded. However, if a post office sends mail to a CFS for forwarding but the CFS finds no forwarding data in the computer, the CFS returns the mail to the post office as “no-record” mail. Mail is frequently returned when the customer’s mail forwarding date has expired. In February 1993, the Bellevue, NE, (Central Plains District) post office had a no-record rate of 17 percent (5 percent or below is considered good). The Bellevue Postmaster and the post office operations manager to whom he reports agreed to reduce the post office’s rate. A task force composed of management and craft employees was set up to work toward reducing the rate of no-record mail. The task force met to establish project objectives, develop an action plan, and set time frames. Subsequently, the task force visited the CFS to observe the processing of Bellevue’s CFS mail. The CFS processed Bellevue’s no-record mail during the visit and sorted it by carrier route so the task force could speak with the individual carrier about his or her mail.
Later, the postmaster developed a procedure to double-check mail before it is sent to the CFS. The task force evaluated the project through weekly reports from the CFS. It also planned an ongoing dialog with the CFS supervisor to correct future problems. As of June 1993, the no-record mail percentage at Bellevue had been reduced to the Postmaster’s goal of about 10 percent. Appendix V provides information on other improvement initiatives that we identified in the six districts that we visited. Many of the Service’s improvement initiatives were still being implemented at the time of our review. Further, the Service believes that many more years of concentrated effort at all levels of the organization will be required before breakthrough improvements in customer satisfaction can be expected. The Service’s measures of residential customer satisfaction and its delivery performance support this notion. Through 1994, residential CSI data and other performance data show that the Postal Service is having little sustained success in its efforts to reduce customer dissatisfaction by improving customer service. In November 1993, the Postmaster General announced a favorable CSI rating (excellent, very good, and good responses) of 89 percent—the highest ever achieved. He said that actions were under way to improve that rating by 2 percentage points. Since that time, however, the favorable rating dropped to 85 percent in postal quarter 4, 1994 (May 28, 1994, to September 16, 1994). This was the same rating reported for the first quarter in which all 170 metropolitan areas were measured by CSI in 1991. During the 14 postal quarters through September 1994, the favorable ratings ranged from 85 to 89 percent nationally, and the excellent and very good ratings ranged from 51 to 60 percent, with a rating of 52 percent reported for postal quarter 4, 1994. (See fig. 3.1.) The 85 percent CSI rating meant that, on the basis of the Service’s survey of a sample representing about 90 million households, 13.5 million households rated the Postal Service’s performance as fair or poor—an increase of about 3.6 million from a year earlier. The national CSI ratings for postal quarters 1, 2, and 3, 1995, were 85 percent, 85 percent, and 86 percent, respectively. National CSI ratings differ from those for many metropolitan areas. The ratings for quarter 4, 1994, for some metropolitan areas were up to 6 percentage points above the national average. For some other areas, the ratings were as much as 34 percentage points below the national average. As discussed in chapter 2, neither we nor postal management have access to similar data on levels and trends of business customer satisfaction gathered by an independent contractor. As a result, we could not determine whether the satisfaction of these customers is better or worse than that of residential customers and whether business customer satisfaction has improved since it was first measured in early 1994. EXFC data show that the national rating for on-time delivery has yet to exceed 90 percent, even though the Service’s goal is to deliver all First-Class mail on time 95 percent of the time. As indicated in figure 3.2, EXFC ratings ranged from 79 to 84 percent nationally for the 14 quarters ended September 1994, with a rating of 83 percent reported for postal quarter 4, 1994. The national EXFC rating was 87 percent for postal quarter 4, 1995, ending in May 1995. This was the highest national EXFC rating ever reported by the Postal Service.
We also reviewed CSI and EXFC data to determine the number of metropolitan areas that had higher and lower ratings during postal quarter 4, 1994, compared with the same period 3 years earlier in fiscal year 1991. We identified those metropolitan areas with CSI and EXFC changes of more than 3 percentage points during this period because percentage changes of less than this could be due to chance. For most metropolitan areas, customer satisfaction and on-time delivery performance dropped. Specifically, CSI ratings dropped for 20 of 31 metropolitan areas and increased by more than 3 percentage points for the remaining 11 during the 3-year period. EXFC ratings dropped for 27 of 43 areas and increased for 16. Customers were also asked about their willingness to switch to a competing mail service: “Right now the only way to mail a First-Class letter is through the U.S. Postal Service, but if there were another mail service which you could use to mail a letter at the same price, would you switch to another service?” According to the Inspection Service report, many customers who rated the Service’s overall performance as excellent, very good, or good are at risk of shifting to another service. Postal Service management officials also said that many residential customers might switch to another service. They said that over 40 percent of the residential customer market is vulnerable to competition from another service, assuming that the postage charged is the same as that of the Postal Service. Our review, and related reviews done by the Postal Inspection Service, show that the use of CSI data and the development of related improvement initiatives have not followed an overall national strategy for focusing field offices’ attention on the principal causes of customer dissatisfaction. To a large extent, the improvement efforts initiated on the basis of CSI results have focused on post office operations, such as window and lobby services. The efforts did not always encompass employees and operations in mail processing plants or focus on specific aspects of service, such as the consistency and reliability of mail delivery, that CSI results indicate offer the greatest opportunity to improve customer satisfaction. Residential CSI data can be analyzed to identify the aspects of service causing the greatest customer dissatisfaction. Such analyses show that improving the reliability (i.e., on-time delivery rates) of mail service offers the greatest potential for the Postal Service to improve customer satisfaction. Each quarter, detailed CSI reports prepared by the contractor rank responses to 37 questions in terms of their relative importance as “drivers” of customer satisfaction. The rankings represent the level of improvement potential calculated on the basis of the number of customers responding to each question and the number of good, fair, and poor responses to each. Of the 37 questions on specific aspects of service, those on the reliability of delivery time for local and nonlocal mail represented the greatest opportunity for the Postal Service to improve customer satisfaction. The aspects of service that offer the least potential for improvement are under the control of postmasters and include window and lobby services offered at post offices, mail forwarding, and telephone service. While the Service has not obtained and analyzed BCSI data, as discussed in chapter 2, other data show that reliable delivery service is of greatest importance to all of the Service’s customers, both business and residential.
According to Consumer Advocate data, customers complained more about late and missent mail than about any other aspect of the Postal Service’s performance in fiscal year 1994 and other recent years. Moreover, we previously reported that the Postal Service has lost overnight and parcel delivery business, primarily involving business customers, to competitors, in part because those competitors offered faster and more reliable delivery. How mail processing plants operate can significantly affect customer satisfaction, but the plants have been less involved than customer service districts and post offices in using CSI data to improve customer satisfaction. The work done at processing plants can have a major influence on the reliability of mail delivery, the aspect of service most important to customer satisfaction. Postal Inspection Service reports on CSI issued in September 1992 and December 1994 showed that postal management had tended to focus improvement initiatives on processes and employees in customer service districts. In its December 1994 report, the Inspection Service reported that while all aspects of customer service require continuing attention, processing plants continued to be minimally involved in analyzing CSI data and planning and implementing activities to increase customer satisfaction. Customer service districts had taken the lead in using CSI data, and their actions generally included only post office and carrier services and not the operations at processing plants. The Inspection Service also reported that the districts tended to direct efforts at “quick fix” categories of CSI questions, such as complaint handling and telephone service, that have relatively low potential for improving customer service. Managers in mail processing did, however, use EXFC data to emphasize timely processing of mail. They also used other performance indicators, such as volumes of mail left at plants at the end of processing cycles. Some of our earlier reviews showed that the Postal Service’s principal improvement initiative in processing plants has been the automation of mail sorting, which began in 1982. In 1993, the Service began to automate the sorting of letter mail to each home and business address to relieve carriers of this workload. The Service’s automation goal has primarily been to reduce work hours and employees, not to improve delivery service by reducing mail cycle times. However, in December 1994, Postal Service officials did report to the Board of Governors for the first time that certain barcoded mail, which can be sorted automatically, was delivered faster to customers than nonbarcoded letters. The influence that processing plant employees can have on customers is illustrated by one reported exchange: “A mailhandler pulling a container of trayed mail to the dock for dispatch was asked how he affected customer satisfaction. He replied he doesn’t see or deal with customers. It was pointed out to him if a carrier makes a misdelivery, that carrier has affected one, maybe two customers, but if a mail handler places a container of mail on the wrong truck, he may affect 50,000 customers in a detrimental way.” Several of the Service’s initiatives, such as the EOS and Quality First! initiatives, did encompass managers and employees in mail processing plants. In providing Quality First! training, the Service instructed field employees on the use of CSI and EXFC data in improving the reliability of mail service.
Most of the employees at mail processing plants are members of the American Postal Workers Union (APWU), which in the past has not participated in the Service’s initiatives to involve employees in service improvement efforts. In this regard, we recently reported that breakthrough improvements in customer service cannot be achieved unless the Postal Service and the labor unions representing postal employees resolve long-standing workfloor problems. Postal management has had difficulty getting labor unions to agree on the involvement of employees with each other and with management in solving customer service and other problems. For example, APWU, the largest postal union, did not participate in initiatives such as Striving for Excellence Together (SET), Employee Involvement, and Quality of Work Life, which are described in our earlier reports. Neither APWU nor the National Association of Letter Carriers, which together represent about 85 percent of all craft employees, participates in the SET program. As we reported earlier, a lack of labor-management cooperation has been a serious limitation on the Service’s ability to make significant, sustained improvements in customer satisfaction. As of July 1995, the Service and three of its four major unions (the rural letter carrier union being the exception) had not agreed to meet and begin developing new approaches to involve employees with union and management leaders in improving the processing and delivery functions of the Postal Service. Although it had numerous improvement efforts under way, the Postal Service did not have at the time of our review an overall plan to guide and integrate all of its CSI-related improvement efforts at post offices and processing plants. During our review, the Postal Inspection Service issued its December 1994 report and recommended that the Service develop a plan involving all field offices in the use of residential CSI data to improve customer satisfaction. In response to that recommendation, the Vice President for Work Force Planning and Service Management said that a corporate service plan would be developed, with emphasis on the role of processing and distribution as well as customer service in jointly improving service levels, as measured by CSI and other systems. The development of the plan was to begin in January 1995, and implementation was to begin within 120 days after the plan was finalized. We did some follow-up after completing our field work to determine the status of the plan and were told that some effort had been made to develop a plan. This included identifying 108 separate headquarters service improvement efforts, relating to business and/or residential customers, under way in early 1995. However, this effort to integrate all of the Service’s CSI-related initiatives was discontinued. In June 1995, the Vice President responsible for developing the plan advised the Postal Inspection Service that major changes had occurred in the corporate approach to improving customer satisfaction. One such change was the decision, mentioned earlier, to apply the Malcolm Baldrige National Quality Award criteria to the Postal Service. According to Service officials, the Baldrige initiative was started in 1994, with the guidance of a new Vice President for Quality and outside consulting services.
In this initiative, the Service set up 10 teams, including a team of senior leaders headed by the Postmaster General and an information and analysis team headed by the Vice President for Work Force Planning and Service Management. The functions of these teams included those described in the seven categories of Baldrige criteria. As a first step, an outside consulting firm worked with the 10 area offices and 10 of the 85 performance clusters to assess current conditions against the criteria and provide baseline data for future assessments. In March 1995, the 10 teams were created and began developing action plans for applying the criteria to the particular deficiencies identified by the Baldrige assessment. The Service’s plan to apply the Baldrige criteria, in what it refers to as CustomerPerfect!, appears to be another innovative and promising initiative that could make a difference in future levels of customer satisfaction. However, as with some of the Service’s past initiatives, labor unions representing postal employees are not a part of this new initiative. According to the Vice President for Work Force Planning and Service Management, leaders of major unions discontinued their participation in meetings with postal leadership when contract negotiations began in August 1994. He said that the unions are not represented on any of the 10 teams set up to implement the Baldrige criteria. Along with developing an overall plan and pursuing other service improvement efforts, the Postal Service has continued to reward certain employees for their performance partly on the basis of CSI results. These performance incentives, which we consider an innovative approach to linking employee pay more closely to organizational performance, are used to focus greater management and employee attention on customer service. The incentive payments are based on residential customers’ perceptions of the Service’s overall performance and other measures relating to financial performance and employee relations. However, the plans might be more effective if they also incorporated some of the key measures of service reliability, such as EXFC delivery performance data. Moreover, because the Service has not yet obtained and used BCSI results, the incentive plans do not incorporate levels and changes of satisfaction among business customers, who account for about 90 percent of the Service’s business. Some craft employees, all supervisors and managers, and most executives are rewarded, in part, on the basis of CSI results. As part of union contract negotiations in 1990, the Service and 2 unions agreed to use 2 factors, CSI results by performance cluster and Service-wide financial (budget) performance, to make annual performance incentive payments to certain craft employees (92,852 employees, or about 15 percent of the craft work force in 1994) under the SET program. Subsequently, in consultation with the management associations, the Service extended the performance incentive plan to all supervisors and managers, i.e., those covered by the Service’s Executive and Administrative Schedule (EAS). In addition, the Service later began to base incentive payments to most executives (about 950) in the Postal Career Executive Service (PCES) on local and national CSI results, financial performance, and EOS results.
Customers’ perceptions of the Service’s overall performance, as indicated by responses to question 1a in the residential CSI surveys and reported by performance cluster, dictate the differences in incentive payments to those craft employees, supervisors, and managers covered by the incentive plans. Payments based on financial performance are the same for all of these employees. Incentive payments to executives vary depending on question 1a results for groups of performance clusters or nationally and on national financial performance. In fiscal year 1994, the Service incorporated EOS survey results into the incentive program for PCES-I employees. (App. IV provides additional details on the incentive pay plans.) None of the incentive plans include available EXFC data or other available delivery measures (e.g., measures of second- and third-class on-time delivery). We believe that the recognition of such delivery measures in the incentive plans is important because, as discussed previously, CSI data analysis shows that improving service reliability offers the greatest opportunity for improving customer satisfaction. Moreover, CSI and EXFC data show that a wide gap often exists between customers’ perceptions of the Service’s performance and its actual delivery performance. Consequently, in using only the overall CSI rating as one component in determining performance payments, the Service rewards employees on the basis of factors that are less under their control, i.e., perceptions of the Postal Service, rather than factors that are more under their control, i.e., mail collection, transportation, sorting, and delivery. If CSI and EXFC ratings generally were consistent with each other, the use of CSI alone would not be nearly as consequential. However, the Service’s delivery performance often differs significantly from customers’ perceptions of its overall performance for many metropolitan areas. We visited the Springfield, MA, and Chicago, IL, metropolitan areas because the former was among the highest-ranking of all clusters in CSI ratings and the latter was among the lowest. We found that management in both areas was using CSI results to emphasize the need to improve customer service and had a number of initiatives under way to improve service. Moreover, the differences in EXFC ratings for the two areas (shown in figure 3.3) were much smaller than the differences in their CSI ratings (shown in figure 3.4). Our analysis showed that the relationship of EXFC and CSI ratings for some other metropolitan areas was similar to that for the above two areas. Further, many metropolitan areas having the highest CSI ratings in 1994 also had similarly high ratings at the time of the first CSI survey in 1991, before the Service began many of its current improvement initiatives. In contrast, some areas having the lowest ratings experienced significant change in CSI scores over the same period, as figures 3.5 and 3.6 show for selected high-ranking and low-ranking areas. Customer perception of the Service’s overall performance is only one indicator of its performance and service quality. Other indicators that are available to the Postal Service include not only the results of the Service’s independent measurements of First-Class, second-class, and third-class mail delivery service, but also customers’ responses to some of the specific CSI questions. CSI surveys include questions that relate directly to delivery performance; customers are asked about their satisfaction with both local and nonlocal mail delivery.
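One simple way to incorporate delivery measures into the incentive calculation would be to weight a cluster’s CSI question 1a rating together with its EXFC on-time rating. The following sketch is purely illustrative: the equal weighting and the figures are invented, and this is not the Service’s incentive formula.

    # Purely illustrative composite score; not the Postal Service's
    # incentive formula. The weights and figures are invented.
    def composite_score(csi_q1a, exfc_on_time, csi_weight=0.5):
        # Blend a cluster's CSI question 1a rating (customer perception)
        # with its EXFC on-time rating (measured delivery performance).
        return csi_weight * csi_q1a + (1 - csi_weight) * exfc_on_time

    # Two hypothetical clusters with similar delivery performance but very
    # different customer perception, as in the Springfield/Chicago contrast.
    print(composite_score(csi_q1a=89.0, exfc_on_time=84.0))  # 86.5
    print(composite_score(csi_q1a=60.0, exfc_on_time=80.0))  # 70.0

Under such a blend, part of each cluster’s payment would turn on delivery performance, a factor more directly under employees’ control than customer perception.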
Including the Service’s available delivery performance measures, broken out by performance cluster, in the calculation of incentive payments would appear to provide a more direct link between the payments and both (1) the mail delivery processes that are most under performance cluster employees’ control and (2) the factors that are most important to customer satisfaction. Postal officials administering the incentive plan said that the incentive payments for some employees were developed as a result of 1990 contract negotiations with unions, before the first CSI and EXFC results were available. These officials said that the objectives at that time were to get the plan adopted and to keep it simple. The Service’s Vice President for Quality, who came to the Postal Service in July 1994, said that he was concerned about the Service’s heavy reliance on customer perception as a single or principal performance measure. The Vice President said the Service is reexamining its collection and use of all externally generated data. He also said that several new efforts, including the application of Baldrige criteria, are under way to focus greater attention on those processes that employees can improve. In September 1995, after reviewing a draft of this report, the Vice Presidents for Quality and Human Resources said that as part of CustomerPerfect!, the Service will be realigning the compensation system for PCES employees. The Vice President for Quality said that this new compensation alignment will consider EXFC, CSI, and BCSI measurements. He also said that a similar alignment will be proposed in the next round of consultations with management associations and negotiations with the unions. After starting some national improvement initiatives in 1992, postal headquarters did not regularly follow up to determine the extent to which the initiatives were implemented and whether they improved customer satisfaction. Such follow-up would allow headquarters to assess field offices’ progress in implementing national initiatives in a timely manner and to share with other field offices the best practices of post offices and processing plants in serving customers. The Postal Service followed a decentralized approach to implementing new initiatives. Its approach encouraged employees in post offices and processing plants to be innovative in working together and with customers to solve service problems. This approach recognizes that the field structure is large and complex—hundreds of mail processing facilities and more than 40,000 post offices, branches, and stations. For example, the number of postal employees assigned to the 6 customer service districts that we visited ranged from 2,300 in Billings, MT, to 10,800 in New York, NY. (See app. II for additional information on the relative size of the six districts.) Post offices also operate in a variety of environments to meet a broad array of customer needs. For example, postal officials in Billings, MT, had relatively little concern about the security of postal customers, employees, and equipment, allowing them to provide convenient access to window and lobby services. In contrast, physical security was of great concern to some post offices in the New York City area. There, bullet-proof glass protected clerks from the public, and lobbies were locked after certain hours.
Without changing its decentralized approach to implementing improvement initiatives, Postal Service headquarters could use a more systematic and uniform approach for tracking field offices’ implementation of national initiatives and reporting the impact of the initiatives on CSI ratings and revenue. As indicated above, field offices were pursuing numerous retail initiatives. The time projected for completing the initiatives spanned many years, and a number of headquarters offices were overseeing the initiatives. The tracking of the national initiatives that we reviewed varied among headquarters offices, with procedures and data on some initiatives being more extensive than on others. The Office of Consumer Affairs had gathered fairly extensive data for monitoring the status and results of efforts to improve telephone service. For example, for the centralized call centers, the Office had set time standards for resolving customer complaints and keeping customers informed. Each customer complaint was to be logged, and a case history and caller profile were to be developed so that the complaint could be tracked until final resolution. The Office’s data showed that about 60 percent of complaints received at the centralized call centers through December 1994 were resolved by employees at the centers. The remaining calls required assistance from district or post office employees. The Office of Consumer Affairs was also monitoring the use of customer advisory councils. Postmasters were to decide when they wanted to set up a council, and through December 1994, relatively few post offices had formed councils. The first council was established by the Honolulu, HI, district in 1988, and 16 additional councils had been formed by the end of that year. By December 1994, 1,572 councils were operating nationwide. Some other headquarters units were still developing procedures to track the implementation and results of national initiatives under their responsibility. For example, a retail support group under the Vice President for Marketing was responsible for overseeing several initiatives to be implemented by post offices. In June 1993, the group asked area and district offices to provide data on post offices that had announced the “Service in Five Minutes or Less” standard and that had adjusted window hours. However, the data provided were incomplete. Of the 85 customer service districts, only 56 responded to the request. The 56 districts reported that of the approximately 40,000 post offices, branches, and stations nationwide, about 5,000 post offices had posted the standard. The retail support group did not have data to show whether those post offices serving large numbers of customers each day, and thus possibly having the greatest difficulty providing service, had announced the standard and adjusted hours. Postal retail officials said that no further effort had been made to obtain data on the two initiatives. The group was planning to track changes in CSI ratings as field offices implemented the 5-minutes-or-less standard and expanded window hours. In addition, the group was considering different methods for determining whether post offices were meeting the 5-minutes-or-less standard and for measuring the impact of adjusted window hours on postal revenue. Headquarters staff were also developing a plan to track the implementation and results of the new retail store initiative.
Evaluations were to include customer and employee responses to the new design as well as revenue analysis. The evaluations were expected to include a breakdown of revenue sources (e.g., packaging products and vending machines) and cost studies of various implementation approaches, contractor performance, and ease of implementation. Although we obtained information mainly on the initiatives of customer service districts, the planning efforts by the Vice President for Work Force Planning and Service Management, discussed earlier, showed that postal headquarters was not systematically tracking other national customer satisfaction initiatives. That office did a one-time survey of ongoing initiatives and identified 108 projects under way in early 1995. However, no further steps were taken at that time to integrate and assess the projects because, as mentioned previously, that effort was superseded by other, broader headquarters initiatives. Our review and Postal Inspection Service reviews revealed a wide array of efforts under way at post offices and districts. Although many of these efforts were innovative, postal headquarters had no systematic way of sharing the results of successful efforts. Two mechanisms to facilitate information-sharing across the organization had been developed but were not in use at the time of our review.
• An Innovations Network, a computerized database, was set up to allow certain employee groups to share information on successful initiatives. Coordinators at headquarters and in the field were to identify successful initiatives and submit descriptions of them for recording in the database.
• A Customer Advisory Council Newsletter, published by the Consumer Affairs Department, was designed as a networking tool to be used by headquarters, field offices, and the customer advisory councils that some post offices had established. One purpose of the newsletter was to share the results of successful improvement efforts.
According to postal headquarters officials, both of these mechanisms were discontinued after the 1992 downsizing of the Postal Service. The officials said that after the downsizing, not enough employees were available to maintain and promote these information-sharing efforts. In addition, the officials responsible for the Innovations Network said that the procedures for maintaining and accessing the database were cumbersome and that coordinators did not always update the system to show new innovations. We recognize that developing and maintaining any system of sharing information on innovative approaches to improving customer service will require resources. The cost of sharing such information would need to be weighed against the benefits of giving all field offices the opportunity to implement proven techniques for improving customer satisfaction. In our discussions with the former Vice President for Customer Services, he said that a “clearinghouse” for new ideas and projects was needed. However, neither he nor other headquarters officials had assigned responsibility for developing procedures to share information on successful local initiatives. In April 1993, the Postal Inspection Service reported a need for better communication within and among district offices on successful initiatives to improve customer services. The report said that it was not uncommon to find that some post offices had not shared information on their improvement efforts with other post offices, often in the same district.
The inspectors recommended in the April 1993 report that the Service take steps to permit sharing of such information among post offices and districts. They repeated the recommendation in a December 1994 report, suggesting that postal headquarters communicate CSI successes to offices nationwide via an electronic message system. In responding to the latter report, the Vice President for Work Force Planning and Service Management acknowledged that creating a bulletin board for CSI users would potentially be useful. Subsequently, his office provided some information in the Service’s automated information system on best practices.

The system now identifies those metropolitan areas with the highest average CSI ratings for specific attributes, such as convenience of window service hours and waiting time in line. For example, 17 metropolitan areas were listed for postal quarter 1, 1995, as having the highest average rating for convenience of window service hours. Users of the system are advised that these areas are presumed to have put into place the best practices for consistently meeting the needs of customers for this service attribute. The purpose of the information is to give those interested in improving performance in particular attributes an idea of where to go and whom to talk with about benchmark procedures related to improvement efforts.

Although it appears that this procedure for sharing information can help, Service officials acknowledge that it falls short of fully sharing information across the organization on practices found to have worked best. For example, the automated system does not identify the practices followed by any of the metropolitan areas or recognize the specific work teams responsible for new and innovative practices that have proven successful.

Although residential CSI results indicate that significant levels of customer dissatisfaction continue to exist, the Postal Service is taking the important first steps of adopting a policy of measuring customer satisfaction to improve service. Its numerous and promising initiatives currently under way indicate a serious commitment to overcoming policy, operational, and cultural barriers to improving customer satisfaction by improving customer service. Although poor union-management relations constrain the Postal Service, the development of a national strategy to focus all field offices, including mail processing plants, on improving the reliability of mail delivery service is a necessary step to addressing a key cause of customer dissatisfaction.

Similarly, the current performance incentive plans, which are innovative and a move in the right direction, can be refined to give more emphasis to encouraging prompt and timely mail delivery—what customers have said they want most from the Postal Service. The Service could do this by using measures of service reliability from EXFC and other systems. Because such data are already available, the added cost of using these measures might be justified by the potential benefits of a stronger focus by employees and management on improving service reliability. However, we recognize that the changes cannot be made unilaterally for some employees. For craft employees covered by SET, changing the basis for the incentive payments would require agreement with unions; for some other employees, the change would require consultation with management associations.
Many of the Service’s national initiatives were relatively new, and postal headquarters needs to know whether its initiatives are being implemented and whether they are being implemented in a timely manner. Without some system of tracking field offices’ progress in implementing such initiatives, headquarters officials cannot be sure that field offices understand and are committed to the initiatives. Nor can officials systematically identify those offices most in need of assistance and those adopting best practices and demonstrating exceptional performance in implementing national initiatives. The Service would need to weigh the cost of implementing and maintaining a system of sharing such information against the potential benefits of improving customer satisfaction through better customer service.

As part of the development of the Postal Service’s national service improvement strategy, and to achieve the greatest improvement in customer satisfaction, we recommend that the Postmaster General take the following steps:

• Incorporate BCSI results in the Service’s initiatives and ongoing efforts to improve its performance and service quality, using safeguards as appropriate.

• Determine, in cooperation with unions and management associations, the feasibility of incorporating available measures of mail delivery service, along with CSI and other performance data, into employee pay incentive plans to encourage a stronger commitment to prompt and reliable mail delivery and, as appropriate, use these performance data in incentive plans.

• Implement cost-effective procedures for headquarters units to use in monitoring and reporting the implementation and results of national service improvement initiatives to ensure that they are implemented as intended.

• Implement cost-effective procedures for (1) regularly recognizing at the national level the best practices and successes of field offices and employees in improving customer satisfaction and (2) sharing information on such efforts across the organization.

In commenting on a draft of this report, the Service said that it believed that our recommendations and concerns regarding employee performance incentives, systematic implementation and monitoring of improvements, and sharing best practices will be addressed in its recently begun CustomerPerfect! program. To explain how that program will address our recommendations, the Service discussed its approach to identifying and sharing best practices. It said that in the past “best” had been a matter more of intuition than measurement and that a team was looking at how to develop systems that would identify possible best practices and validate their effectiveness by measuring their results. According to the Service, once a practice is determined to be truly a best practice, it will be shared with the field, possibly through electronic bulletin boards and presentations at national or area-wide managers’ meetings.

The Service’s CustomerPerfect! initiative appears to be a reasonable approach to addressing our findings and recommendations. Moreover, it is clear that the initiative has the commitment of the top-level Postal Service leadership. The program was just getting started at the conclusion of our review, and it was too early to determine how it would be implemented at lower management levels and by various employee groups and how the program might affect delivery performance and customer satisfaction.
As noted in this report and our earlier report on labor-management relations, the success of some earlier Service initiatives that were designed to affect pay, duties, and management-employee relationships of craft employees was limited by a lack of support from the unions representing those employees. At the time of our review, the Service had not obtained the involvement and commitment of labor union leaders in the CustomerPerfect! initiative. On the basis of the Service’s experience with similar past initiatives, we believe that this involvement and commitment will be necessary to implement aspects of the new initiative affecting craft employees and to address our recommendation relating to the use of CSI and other performance data, such as EXFC, in employee pay incentive plans.
Pursuant to a congressional request, GAO reviewed the U.S. Postal Service's (USPS) efforts to measure, report, and improve customer satisfaction, focusing on: (1) the extent to which USPS distributes customer satisfaction data for use internally and by Congress; (2) whether USPS can improve the distribution of such data; (3) how USPS is using customer satisfaction and other data to improve customer service; and (4) additional steps USPS could take to improve customer satisfaction. GAO found that: (1) USPS widely distributes residential customer satisfaction data for internal use to improve customer service, but has reduced the amount of performance data it provides to Congress; (2) USPS does not distribute business customer satisfaction data internally or externally because USPS fears compromising its position in an increasingly competitive market; (3) adequate distribution of business customer data could help USPS assess and improve business customer service; (4) although USPS has initiated numerous innovative efforts to improve customer service and satisfaction since 1990, residential customer satisfaction has remained relatively constant; (5) USPS has not developed a well-coordinated overall national strategy for improving customer service or focused on areas causing the most customer dissatisfaction; and (6) USPS has not always made the best use of customer satisfaction and other performance data to evaluate its initiatives, but it is studying ways to improve the use of all of its performance measures to improve customer satisfaction.
Homeland Security Presidential Directive 3 (HSPD-3) established the Homeland Security Advisory System in March 2002. Through the creation of the Homeland Security Advisory System, HSPD-3 sought to produce a common vocabulary, context, and structure for an ongoing discussion about the nature of threats that confront the nation and the appropriate measures that should be taken in response to those threats. Additionally, HSPD-3 established the Homeland Security Advisory System as a mechanism to inform and facilitate decisions related to securing the homeland among various levels of government, the private sector, and American citizens.

The Homeland Security Advisory System comprises five color-coded threat conditions, described below, which represent levels of risk related to potential terrorist attack.

• Code-red or severe alert—severe risk of terrorist attacks.

• Code-orange or high alert—high risk of terrorist attacks.

• Code-yellow or elevated alert—significant risk of terrorist attacks.

• Code-blue or guarded alert—general risk of terrorist attacks.

• Code-green or low alert—low risk of terrorist attacks.

As defined in HSPD-3, risk includes both the probability of an attack occurring and its potential gravity. Since its establishment in March 2002, the Homeland Security Advisory System national threat level has remained at elevated alert—code-yellow—except for five periods during which the administration raised it to high alert—code-orange. The periods of code-orange alert follow: September 10 to 24, 2002; February 7 to 27, 2003; March 17 to April 16, 2003; May 20 to 30, 2003; and December 21, 2003, to January 9, 2004.

The Homeland Security Advisory System is binding on the executive branch. HSPD-3 directs all federal departments, agencies, and offices, other than military facilities, to conform their existing threat advisory systems to the Homeland Security Advisory System. These agencies are responsible for ensuring their systems are consistently implemented in accordance with national threat levels as defined by the Homeland Security Advisory System. Additionally, federal departments and agency heads are responsible for developing protective measures and other antiterrorism or self-protection and continuity plans in response to the various threat levels and operating and maintaining these plans. While HSPD-3 encourages other levels of government and the private sector to conform to the system, their compliance is voluntary.

When HSPD-3 first established the Homeland Security Advisory System, it provided the Attorney General with responsibility for administering the Homeland Security Advisory System, including assigning threat conditions in consultation with members of the Homeland Security Council, except in exigent circumstances. As such, the Attorney General could assign threat levels for the entire nation, for particular geographic areas, or for specific industrial sectors. Upon its issuance, HSPD-3 also assigned responsibility to the Attorney General for establishing a process and a system for conveying relevant threat information expeditiously to federal, state, and local government officials, law enforcement authorities, and the private sector.

In November 2002, Congress passed the Homeland Security Act of 2002, P.L. 107-296, which established the Department of Homeland Security. Under the Homeland Security Act of 2002, the DHS Under Secretary for Information Analysis and Infrastructure Protection (IAIP) is responsible for administering the Homeland Security Advisory System.
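To make the structure of the advisory system concrete, the sketch below encodes the five threat conditions and HSPD-3's two-part notion of risk (probability and gravity) as a simple data model. This is purely illustrative: DHS publishes no computational model, and every name and value here is our own assumption.

    from enum import IntEnum

    # Illustrative encoding of the five HSAS threat conditions (HSPD-3).
    # The ordering reflects increasing risk; the numeric values are arbitrary.
    class ThreatCondition(IntEnum):
        GREEN = 1   # low alert -- low risk of terrorist attacks
        BLUE = 2    # guarded alert -- general risk
        YELLOW = 3  # elevated alert -- significant risk
        ORANGE = 4  # high alert -- high risk
        RED = 5     # severe alert -- severe risk

    # HSPD-3 defines risk as including both the probability of an attack
    # occurring and its potential gravity. A simple (hypothetical)
    # expected-consequence score multiplies the two; DHS's actual
    # judgment is not formula-based.
    def risk_score(probability: float, gravity: float) -> float:
        assert 0.0 <= probability <= 1.0
        return probability * gravity

    print(ThreatCondition.YELLOW < ThreatCondition.ORANGE)  # True

Ordering the conditions this way mirrors how the report discusses raising and lowering the national threat level between code-yellow and code-orange.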
As such, the Under Secretary for IAIP is primarily responsible for issuing public threat advisories and providing specific warning information to state and local governments and to the private sector. The act also charges the Under Secretary for IAIP with providing advice about appropriate protective actions and countermeasures.

In February 2003, in accordance with the Homeland Security Act, the administration issued Homeland Security Presidential Directive 5 (HSPD-5), which amended HSPD-3 by transferring authority for assigning threat conditions and conveying relevant information from the Attorney General to the Secretary of Homeland Security. HSPD-5 directs the Secretary of Homeland Security to consult with the Attorney General and other federal agency heads the Secretary deems appropriate, including other members of the Homeland Security Council, when determining the threat level, except in exigent circumstances.

In implementing the Homeland Security Advisory System, DHS assigns national threat levels and assesses the threat condition of specific geographic locations and industrial sectors. While the national threat level has been raised and lowered for five periods, DHS officials told us that the department has not yet assigned a threat level for an industrial sector or geographic location. However, DHS officials said that the department has encouraged specific sectors and regions to operate at heightened levels of security.

According to DHS officials, decisions to change the national threat level and to encourage specific sectors and regions to operate at heightened levels of security involve both analysis and sharing of threat information, as well as an assessment of the vulnerability of national critical infrastructure assets that are potential targets of terrorist threats. DHS officials told us they use the criteria in HSPD-3 in determining whether to raise the national threat level or whether to suggest that certain regions or sectors operate at heightened security levels. These criteria include

• the credibility of threat information;

• whether threat information is corroborated;

• the degree to which the threat is specific and/or imminent; and

• the gravity of the potential consequences of the threat.

In determining whether these criteria are met and whether to raise the national threat level, DHS considers intelligence information and the vulnerability of potential targets, among other things. DHS officials told us that they use a flexible, “all relevant factors” approach to decide whether to raise or lower the national threat level or whether to suggest that certain regions or sectors operate at heightened security levels. They said that analysis of available threat information and determination of national threat levels and regional and sector threat conditions are specific for each time period and situation. According to these officials, given the nature of the data available for analysis, the process and analyses used to determine whether to raise or lower the national threat level or suggest that specific regions or sectors heighten their protective measures are inherently judgmental and subjective.

DHS officials said that the intelligence community continuously gathers and analyzes information regarding potential terrorist activity. This includes information from such agencies as DHS, the Central Intelligence Agency, the Federal Bureau of Investigation (FBI), and the Terrorist Threat Integration Center, as well as from state and local law enforcement officials.
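As a purely hypothetical illustration of how the four HSPD-3 criteria listed above might be recorded for a single threat report, the sketch below captures each criterion as a field and counts how many are satisfied. DHS officials described the actual process as judgmental, subjective, and driven by all relevant factors, so this should be read as a note-taking aid, not a decision rule; the class and field names are our own.

    from dataclasses import dataclass

    # Hypothetical record of the four HSPD-3 criteria for one threat report.
    # DHS describes its process as an "all relevant factors" judgmental
    # review; nothing here reflects an actual DHS algorithm.
    @dataclass
    class ThreatAssessment:
        credible: bool              # Is the threat information credible?
        corroborated: bool          # Is it corroborated by other sources?
        specific_or_imminent: bool  # Is the threat specific and/or imminent?
        grave_consequences: bool    # Are the potential consequences grave?

        def criteria_met(self) -> int:
            """Count how many of the HSPD-3 criteria this report satisfies."""
            return sum([self.credible, self.corroborated,
                        self.specific_or_imminent, self.grave_consequences])

    # Example: a credible, corroborated, but nonspecific threat report.
    report = ThreatAssessment(True, True, False, True)
    print(report.criteria_met())  # 3 of 4 criteria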
DHS officials also noted that analyses from these and other agencies are shared with DHS’s IAIP, which is engaged in constant communication with intelligence agencies to assess potential homeland security threats.

DHS also considers the vulnerability of potential targets when determining the national threat level. For example, DHS officials explained that they hold discussions with state and local officials to determine whether potential targets specified by threat information require additional security to prevent a terrorist attack or minimize the potential gravity of an attack. According to these officials, if the target is determined to be vulnerable, then DHS will consider raising the threat level.

Last, DHS determines whether there is a nationwide threat of terrorist attack or if the threat is limited to a specific geographic location or a specific industrial sector. DHS officials said that, in general, upon assessment of the above criteria, if there appears to be a threat of terrorist attack nationwide, then IAIP recommends to the Secretary of Homeland Security that the national threat level should be raised. The Secretary of Homeland Security then consults with the other members of the Homeland Security Council on whether the national threat level should be changed. DHS officials told us that if the Homeland Security Council members could not agree on whether to change the national threat level, the President would make the decision. DHS officials also told us that when deciding whether to lower the national threat level, they consider whether the time period in which the potential threat was to occur has passed and whether protective measures in place for the code-orange alerts have been effective in mitigating the threats.

DHS officials told us that if a credible threat against specific industrial sectors or geographic locations exists, DHS may suggest that these sectors or locations operate at a heightened level of security, rather than raising the national threat level. For example, for the third code-orange alert period in our review, from December 21, 2003, to January 9, 2004, threat information raised concerns of potential terrorist activity for specific industrial sectors and geographic locations. In response, DHS officials said that they encouraged those responsible for securing chemical and nuclear power plants, transit systems, and aircraft, as well as certain cities, to maintain a heightened level of security, even after the national threat level was lowered to code-yellow. However, DHS officials noted that they did not assign a threat level to these sectors and regions at that time. DHS officials further indicated that these sectors and locations were not operating at code-orange alert levels. Rather, they operated at heightened levels of security.

According to DHS officials, when encouraging specific sectors or regions to continue to operate at heightened levels of security, DHS may suggest that these sectors or regions (1) implement security measures that are in addition to those implemented during a code-orange alert period, (2) continue the measures implemented during the code-orange alert period, or (3) continue selected measures implemented during a code-orange alert period. DHS officials told us that not all threats to specific regions or sectors are communicated to the public or to officials in all regions or sectors.
Rather, if intelligence information suggests a targeted threat to specific regions or industrial sectors, DHS officials said that they inform officials in the specific regions or sectors that are responsible for implementing protective measures to mitigate the terrorist threat. To inform these officials, DHS issues threat advisories or information bulletins. The threat advisories we reviewed contained actionable information about threats targeting critical national networks, infrastructures, or key assets such as transit systems. These products may suggest a change in readiness posture, protective actions, or response that should be implemented in a timely manner. If the threat is less urgent, DHS may issue information bulletins, which communicate risk and vulnerabilities of potential targets. In a February 2004 testimony, the Deputy Secretary of Homeland Security said that because threat advisories and information bulletins are derived from intelligence, they are generally communicated on a need-to-know basis to a targeted audience. The threat advisories and bulletins we reviewed also included advice on protective measures to be implemented by law enforcement agencies or the owners and operators of national critical infrastructure assets in response to the specific threat.

DHS officials told us that they have not yet officially documented protocols for communicating information about changes in the national threat level to federal agencies and states. To ensure early and comprehensive information sharing and allow for informed decision making, risk communication experts suggest that threat warnings should include the following principles: (1) communication through multiple methods, (2) timely notification, and (3) specific information on the nature, location, and timing of threats as well as guidance on actions to take in response to threats. These principles can be applied to threat information shared with federal agencies and states through the Homeland Security Advisory System.

DHS used multiple methods to notify federal agencies and states of changes in the national threat level. However, many federal agencies and states responding to our questionnaires indicated that they heard about threat level changes from media sources before being notified by DHS. Federal agencies and states also reported that they did not receive specific threat information and guidance for the three code-orange alert periods from March 17 to April 16, 2003; May 20 to 30, 2003; and December 21, 2003, to January 9, 2004. Documentation of communication protocols can assist DHS in better managing the expectations of federal agencies and states regarding the methods, timing, and content of guidance and threat information they receive when the national threat level is raised to code-orange.

DHS officials told us that they have not yet officially documented protocols for notifying federal agencies and states of changes in the national threat level, but are working to do so. They noted that it has been difficult to develop protocols that provide sufficient flexibility for sharing information in a variety of situations. Thus, while attempts have been made to officially document protocols for notifying federal agencies and states of national threat level changes, DHS officials said that they have not made much progress in doing so and could not provide a specific target date for completing this effort.
Without documented communication protocols, recipients of threat level notifications are uncertain as to how, when, and from what entity, such as which DHS agency, they will be notified of threat level changes and the content and extent of guidance and threat information they may receive. Communication protocols would, among other things, help foster clear understanding and transparency regarding DHS’s priorities and operations. Moreover, protocols could help ensure that DHS interacts with federal, state, local, and other entities using clearly defined and consistently applied policies and procedures.

Risk communication is the exchange of information among individuals and groups regarding the nature of risk, reactions to risk messages, and legal and institutional approaches to risk management. Risk communication experts have identified the following as important principles for communicating risks to individuals and groups:

• Threat information should be consistent, accurate, clear, and provided repeatedly through multiple methods.

• Threat information should be provided in a timely fashion.

• To the greatest extent possible, threat information should be specific about the potential threat, including the nature of the threat, when and where it is likely to occur, and guidance on protective measures to take to prevent or respond to the threat.

These risk communication principles have been used in a variety of warning contexts, from alerting the public about severe weather or providing traffic advisories, to less commonplace warnings of infectious disease outbreaks or potential dangers from hazardous materials or toxic contamination. However, warnings about terrorist threats differ from these relatively more familiar warnings. For example, specific terrorist threat warnings to the public may allow terrorists to alter tactics or targets in response to the issuance of warnings. Warnings of terrorist threats may also increase general anxiety for populations clearly not at risk. Moreover, government agencies may not always have specific information on terrorist threats, or may not be able to publicly share specific information in threat warnings. Yet, despite these differences, the purpose of warnings, regardless of the threat, is to provide information to citizens and groups that allows them to make informed decisions about actions to take to prevent and respond to threats. Thus, risk communication principles should be applicable to communicating terrorist threat information to federal agencies, states, and localities through the Homeland Security Advisory System.

According to risk communication principles, threat information should be provided through multiple methods to ensure that dissemination of the information is comprehensive and that people receive the information regardless of their level of access to information. In addition, HSPD-3 states that the Homeland Security Advisory System should provide a comprehensive and effective means to disseminate information regarding the risk of terrorist acts to federal, state, and local authorities. One means of disseminating threat information is through notifications to federal agencies and states of changes in the national threat level. DHS officials told us that with each increase in the national threat level, they apply lessons learned from previous alerts to improve their notification and information sharing processes regarding threat level changes.
Based on federal agencies’ and states’ responses to our questionnaires, it appears that DHS is making progress in expanding the scope of its notification process, which is consistent with HSPD-3. As shown in table 1, more federal agencies reported receiving direct notification from DHS for the third code-orange alert period than for the other two code-orange alert periods in our review. Similarly, more federal agencies reported receiving notification from DHS via multiple methods for the third code-orange alert period than for the other two code-orange alert periods.

DHS used the following methods, among others, to notify entities of changes in the national threat level, according to federal agencies’ and states’ responses to our questionnaires and discussions with DHS and local government officials:

• Conference calls between the Secretary of Homeland Security and state governors and/or state homeland security officials.

• Telephone calls from Federal Protective Service (FPS) officials to federal agencies.

• E-mail, telephone, or electronic communications from Homeland Security Operations Center (HSOC) representatives to the federal, state, or local agencies they represent.

• FBI electronic systems, such as the National Law Enforcement Telecommunications System.

• E-mail and/or telephone communications with federal agencies’ chief of staff and public affairs offices.

• E-mail and/or telephone communications to local government associations such as the National Governors Association and the U.S. Conference of Mayors.

Risk communication experts suggest that threat information should be provided in a timely fashion to prevent unofficial sources, such as the media, from reporting information before official sources, including government agencies, do so. These principles suggest that lack of early and open information sharing from official entities can undermine these entities’ credibility. HSPD-3, as amended by HSPD-5, states that the Secretary of Homeland Security should establish a process and a system for conveying relevant information regarding terrorist threats expeditiously. In addition, for an entity to control its operations, it must have relevant, reliable, and timely communications relating to internal and external events.

Many federal agencies and some states responding to our questionnaires expressed concerns that they learned about national threat level changes from media sources before being notified by DHS. Specifically, 16 of 24 federal agencies indicated that they learned about threat level changes via media sources prior to being notified by DHS for at least one of the three code-orange alert periods. Likewise, 15 of 40 states reported learning about national threat level changes via media sources prior to being notified by DHS for at least one of the three code-orange alert periods. This raises questions about whether DHS is always conveying information regarding threat level changes to government entities expeditiously, as required by HSPD-3. Moreover, some states reported that their ability to provide credible information to state and local agencies and the public was hindered because they did not receive notification from DHS before the media reported on the threat level changes.
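As a minimal sketch of the multiple-methods and timeliness principles applied to the notification channels listed above, the code below fans a single threat-level change out to each channel and timestamps every send, so that notification times could later be audited against media reports. The channel names are drawn from this report; the function name, the logging approach, and all other details are our own illustrative assumptions, not a description of any DHS system.

    import datetime

    # Channels drawn from the report; the dispatch mechanics are hypothetical.
    CHANNELS = [
        "governor conference call",
        "FPS telephone call",
        "HSOC e-mail/telephone",
        "FBI National Law Enforcement Telecommunications System",
        "agency chief of staff / public affairs e-mail",
        "local government association e-mail/telephone",
    ]

    def notify_threat_level_change(new_level: str) -> list:
        """Fan one threat-level change out to every channel, timestamping
        each send so notifications can be audited against media reports."""
        log = []
        for channel in CHANNELS:
            sent_at = datetime.datetime.now(datetime.timezone.utc).isoformat()
            # In a real system this would invoke the channel; here we only log.
            log.append((channel, sent_at))
        return log

    for channel, when in notify_threat_level_change("orange"):
        print(f"{when}  notified via {channel}")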
For example, 6 states noted that when media sources reported national threat level changes before state and local emergency response officials were directly notified by DHS, these officials did not have sufficient time to prepare their response to the threat level change, including how they would respond to requests from the public for additional information on the threat level change. One other state reported that it would prefer to first learn about changes in the national threat level from DHS so that it has sufficient time to notify state agencies and localities of the change and so that these entities can prepare their responses before the public is notified of the change.

Additionally, 8 localities from which we obtained information indicated that they first learned of threat level changes from media sources, and 4 of these localities would prefer to be notified of threat level changes before the public is notified. Officials from some of these localities told us that after media sources reported the change, their agencies received requests for detailed information on the change from the public and other entities. They noted that their agencies appeared ineffective to the public and other entities because, without notification of the national threat level change before it was reported by media sources, they did not have time to prepare informed responses.

DHS officials told us that they attempt to notify federal agencies and states of threat level changes before the media report on the changes. However, they noted that DHS has not established target time periods in which to notify these entities of the threat level changes. Furthermore, DHS officials indicated they were aware that the media sometimes reported threat level changes before DHS notified federal and state officials, and in the case of the second code-orange alert period in our review, before the decision to raise the threat level was even made. DHS officials told us that they send notifications/advisories to the media to inform them of impending press conferences and that the media may speculate about announcements of threat level changes that may be made at the press conferences. DHS officials indicated that the department is trying to determine the best approach for managing expectations created by this situation.

Risk communication experts said that without specific information on the nature, location, and timing of threats and guidance on actions to take, citizens may not be able to determine whether they are at risk and make informed decisions about actions to take in response to threats, and thus may take inappropriate actions. According to HSPD-3, the Homeland Security Advisory System was established to inform and facilitate decisions appropriate to different levels of government regarding terrorist threats and measures to take in response to threats. However, federal agencies and states responding to our questionnaires generally indicated that they did not receive guidance and specific information on threats on the three occasions included in our review when the national threat level was raised to code-orange. These entities reported that insufficient information on the nature, location, and timing of threats and insufficient guidance on recommended measures hindered their ability to determine whether they were at risk as well as their ability to determine and implement protective measures.
As shown in table 2, federal agencies and states responding to our questionnaires indicated that they generally did not receive specific information on threats with notification of increases in the national threat level for the three code-orange alert periods included in our review. Yet, as table 2 suggests, a greater number of federal agencies and states reported receiving more specific threat information for the third code-orange alert period than for the other two code-orange alert periods.

As shown in tables 3 and 4, federal agencies and states responding to our questionnaires indicated that guidance and specific information on threats, if available, would have assisted them in determining their levels of risk and measures to take for the December 21, 2003, to January 9, 2004, code-orange alert period. Results for the other two code-orange alert periods are consistent with those reported in tables 3 and 4 for the third code-orange alert period.

Furthermore, 13 localities reported to us that information on site-, area-, or event-specific threats would have been beneficial to them in responding to the code-orange alert periods. Six of the localities from which we obtained information reported that information on region- or sector-specific threats would have assisted them in determining their level of risk and measures to take in response to the three code-orange alerts in our review.

When federal agencies and states perceive that they have not received sufficient guidance and threat information, these entities may not be able to determine whether they are at risk from possible threats or what measures to take in response to the threats. For example, 1 federal agency reported that DHS never notified the agency as to whether Washington, D.C., would remain at heightened security levels after the national threat level was lowered to code-yellow on January 9, 2004, which resulted in the agency maintaining code-orange alert measures for an additional week and incurring additional costs for doing so. Another federal agency reported that to respond to the code-orange alerts, it implemented measures at all facilities regardless of the specific location or risk involved, which spread resources across all facilities rather than focusing the measures on mitigating specific threats. Officials from 1 state and 1 locality noted that without specific threat information, these entities did not understand the true nature of the threat and what impact the threat may have on them.

Federal agencies and states responding to our questionnaire also indicated that without guidance and specific threat information, they may not be able to effectively and efficiently target or enhance protective measures to respond to the code-orange alerts. Eighteen of the 25 federal agencies and 32 of the 41 states providing responses to the questions on operational challenges in our questionnaires reported that lack of sufficient threat information was a challenge they faced during the three code-orange alert periods. Moreover, in responding to our questionnaires, 16 federal agencies and 12 states noted that insufficient information on threats makes it difficult for these entities to focus resources on specific measures to respond to threats.
At a February 2004 hearing, the Deputy Secretary of Homeland Security said that the department’s communications of national threat level changes are intended to provide specific information regarding the intelligence supporting the change in the threat level, and that protective measures are developed and communicated, along with the threat information, prior to a public announcement of the decision. DHS officials told us that they provide specific threat information, when available, to federal agencies, states, and localities at risk and with the authority to respond to threats. For example, the Deputy Secretary said that threat information that was shared by DHS regarding changes in the national threat level was primarily intended for security professionals at all levels of government and the private sector. Moreover, to provide more specific threat information and respond to sector- and location-specific security needs, DHS officials told us they have adjusted the system based on feedback from federal, state, local, and private sector officials; tests of the system; and experience with previous periods of code-orange alert. For example, for the most recent code-orange alert from December 21, 2003, to January 9, 2004, the Deputy Secretary noted in his February 2004 testimony that DHS provided specific recommendations for protective measures to industrial sectors and for geographic areas in response to specific threat information.

The majority of federal agencies responding to our questionnaire indicated that they maintain high security levels regardless of the national threat level and, as a result, they did not need to implement a substantial number of new or additional protective measures to respond to the three periods of code-orange alert from March 17 to April 16, 2003; May 20 to 30, 2003; and December 21, 2003, to January 9, 2004. For the most part, these federal agencies reported enhancing existing protective measures to respond to the three code-orange alerts. To a lesser extent, federal agencies continued the use of existing measures, without enhancement, during the code-orange alert periods. On the other hand, states differed in the extent to which they enhanced or maintained existing measures or implemented additional protective measures solely in response to the code-orange alerts. Federal agencies and states reported benefits, such as a heightened sense of security among employees, from enhancing or implementing protective measures for the code-orange alert periods. However, federal agencies and states also indicated that taking such measures negatively affected their operations, for example, by redirecting resources from normal operations to code-orange alert duties.

More than half of the federal agencies responding to our questionnaire indicated that they operate at high security levels, regardless of the national threat level. Thus, they did not need to implement a significant number of new or additional protective measures to respond to code-orange alerts. For example, in response to the third code-orange alert period in our review—December 21, 2003, to January 9, 2004—10 of 24 federal agencies indicated that they most commonly enhanced existing protective measures, such as increasing facility security patrols. During the same code-orange alert period, 8 federal agencies reported most often continuing protective measures at their pre-code-orange alert levels, for example, relying on continuing activation of monitoring systems and intrusion detection devices.
For the remaining 6 federal agencies, there were slight differences among the number of protective measures they enhanced during the third code-orange alert period, those they maintained at pre-code-orange alert levels, and those they implemented solely in response to the code-orange alert. For one of these agencies, three of the protective measures in place for the third code-orange alert period were maintained at their pre-code-orange alert levels, three of the protective measures were enhanced beyond their pre-code-orange alert levels, and four protective measures were implemented solely for the code-orange alert period. Results for the other two code-orange alert periods in our review are similar to those reported for the third code-orange alert period. For more information on protective measures federal agencies most commonly reported having in place for the three code-orange alert periods and their testing of such measures, see appendix IV.

Overall, states differed in the extent to which they implemented additional protective measures for the three code-orange alert periods in our review. Based on our analysis of questionnaire responses from the 40 states that provided information on protective measures for the third code-orange alert period in our review,

• 16 states most often enhanced protective measures that were already in place prior to the code-orange alert period;

• 6 states most often implemented new protective measures for the code-orange alert period;

• 5 states most often maintained protective measures that were already in place at their pre-code-orange alert levels; and

• 13 states employed a varied response, enhancing measures, continuing existing measures, and/or implementing new measures in roughly equal proportion.

Results for the other two code-orange alert periods in our review are similar to those reported for the third code-orange alert period.

Various reasons influenced the extent to which states responding to our questionnaire enhanced, maintained, or implemented new protective measures. For example, some states reported that they already operated at heightened security levels and, therefore, did not need to implement additional protective measures in response to the code-orange alerts in our review; rather they enhanced measures already in place. Other states indicated that the extent to which they implemented protective measures for the code-orange alert periods in our review depended on specific threat information. For example, 1 state indicated that it did not enhance existing protective measures or implement new protective measures for the code-orange alert periods in our review because there were no specific threats to the state that required it to do so. Other states indicated that the extent to which they implemented protective measures for code-orange alert periods depended on the required level of security for their critical infrastructure sites. For example, 1 state reported that it implemented new protective measures for its nuclear power plants during code-orange alert periods, but for some other critical infrastructure assets, it enhanced security measures already in place.

Some states also indicated that resource constraints determined the extent to which they enhanced or implemented new protective measures for code-orange alert periods. For example, 2 states indicated that they had to implement a substantial number of new protective measures for the three code-orange alert periods in our review because they could not afford to always operate at a high level of security.
For more detailed information on protective measures that states most commonly reported having in place for code-orange alert periods and testing of these measures, see appendix IV.

Additionally, 4 localities from which we obtained information reported that they did not enhance or implement a substantial number of protective measures to respond to the code-orange alerts because they did not receive specific threat information indicating that the localities were at risk. For example, 1 locality reported that because it did not receive specific threat information on possible targets, the locality did not take any measures to respond to the code-orange alerts. Additionally, another locality noted that its emergency response staff was not able to implement additional measures in response to the code-orange alerts because the staff was too busy with regular duties such as responding to 911 calls.

Federal agencies and states responding to our questionnaires indicated that they benefited in various ways from the protective measures they enhanced or implemented during the code-orange alert periods, but also noted that they faced operational challenges in responding to the three code-orange alert periods in our review. For example, federal agencies and states reported that protective measures increased employees’ sense of security, promoted staff awareness, and provided visible deterrents to possible threats. However, federal agencies and states responding to our questionnaires also reported that their operations were negatively affected during code-orange alerts as a result of protective measures they enhanced or implemented. For example, 10 federal agencies and 13 states reported that they had to redirect resources from normal operations to enhance or implement protective measures for code-orange alerts. One locality also reported that its operations were negatively affected by the redirection of personnel, which resulted in delays of maintenance activities and preventative exercises as well as postponement of training. Additionally, 15 federal agencies noted delays for visitors and employees. Some of these federal agencies and states reported that maintaining a code-orange alert level of security for more than a few days at a time significantly drained their security resources—an effect federal agencies and states have identified as “code-orange alert fatigue.”

Federal agencies and states also indicated that the lack of federal governmentwide coordination hindered their ability to respond to threats. Without coordination of information and intelligence sharing during code-orange alert periods, federal agencies and states responding to our questionnaires noted that they may not receive threat information needed to help them determine and implement their responses to code-orange alerts. For example, 1 federal agency reported that it received different requests from several federal agencies to deploy personnel to different locations. This federal agency noted that improved federal governmentwide coordination might result in more efficient assignment of resources. Similarly, 1 locality noted that because different government agencies notified different local agencies of changes in the national threat level, first responders and local officials could not effectively and efficiently coordinate and implement protective resources.
On the other hand, 1 state official raised concerns about whether federal agencies were fully informed of the information DHS provided to states and had the information needed to implement appropriate local security measures. This official noted that persons from the Transportation Security Administration, the Coast Guard, and the U.S. Army Corps of Engineers called his state’s homeland security office for advisories and bulletins DHS had provided to the state. Because of these information requests, this official noted that the state was concerned that officials from these federal agencies did not receive information needed to implement security measures, especially at airports. In commenting on a draft of this report, DHS officials stated that the department is working to address this problem.

Six states also indicated that insufficient information from DHS on national critical infrastructure assets made it difficult to effectively protect these assets during the code-orange alert periods. Three of these states indicated that DHS asked them to protect specific national critical infrastructure sites, some of which were no longer operational and others of which, such as shopping malls, were closed. Officials in one state indicated that DHS did not coordinate with the state when it initially developed this list of national critical infrastructure assets.

DHS officials told us that the department developed a list of national critical infrastructure assets to assist states in determining protective measures to implement at their national critical infrastructure sites. According to the Deputy Director of the Protective Security Division of IAIP, DHS initially developed a list of 145 national critical infrastructure assets, including nuclear power plants, chemical facilities, and transportation systems, to ensure their security during Operation Liberty Shield. This official told us that DHS identified national critical infrastructure assets for the list based on intelligence information indicating possible assets at risk, the vulnerabilities of these assets, and possible consequences of an attack on assets, including health, safety, and economic impacts. DHS did not coordinate with states and localities in developing the national critical infrastructure assets list for Operation Liberty Shield because planning and timing of military operations for the war in Iraq and for Operation Liberty Shield were given the highest classification levels and discussed only at the federal level. The Deputy Director said that since Operation Liberty Shield, DHS has continually expanded and revised its national critical infrastructure assets list based on ongoing analysis of threat information and input from states. In reviewing a draft of this report, DHS officials told us that DHS’s Protective Security Division has given all states and territories opportunities to suggest assets to be included in the National Asset Database as well as to verify and validate information DHS maintains on such assets.

To enhance standard security levels at national critical infrastructure sites, the Deputy Director said that the department is working with states to develop plans for protecting the immediate areas surrounding national critical infrastructure assets and reducing vulnerabilities in those areas. In particular, the Deputy Director told us that DHS provided guidance and information to states and local law enforcement agencies to develop protection plans for areas around national critical infrastructure assets.
The majority of states responding to our questionnaire indicated that, for all three code-orange alert periods in our review, DHS requested some information on protective measures taken by states in response to the heightened threat levels. However, as shown in table 5, most states reported that DHS did not request information on the effectiveness of these security measures.

An Office of State and Local Government Coordination official said that DHS maintains close contact with states and localities during code-orange alert periods and fosters information sharing about actions taken to increase security. For example, this official noted that DHS co-sponsored a February 2003 workshop with the FBI to encourage state-level implementation of the Homeland Security Advisory System and provide a forum for information exchange among state and local homeland security representatives. More recently, on April 19, 2004, DHS launched a new Web site (www.llis.gov) to provide a nationwide network of lessons learned and best practices for homeland security officials and emergency responders.

For the most recent code-orange alert from December 21, 2003, to January 9, 2004, DHS officials noted that they contacted states to inquire about protective measures that were put in place. According to DHS officials, they made such inquiries to (1) monitor the extent to which states implemented protective measures that DHS recommended, (2) apprise the White House of actions taken in response to code-orange alerts, and (3) enhance DHS officials’ understanding of protective measures for which states may seek reimbursement.

Sixteen of 26 federal agencies responding to our questionnaire reported additional costs for at least one of the code-orange alert periods in our review. We examined the cost information provided by these agencies for obvious errors and inconsistencies and examined agencies’ responses to the questionnaire regarding the development of the cost information. In doing so, we found that these federal agencies’ cost data were generated from various sources, such as financial accounting systems, credit card logs, and security contracts. Additionally, this cost information is not precise, nor do the costs likely represent all additional costs incurred during code-orange alert periods. In some cases, we have concerns about the reliability of the data sources used to develop the costs reported to us. For example, 6 of the 16 federal agencies reported that they extracted some of the code-orange alert cost data from their agencies’ financial accounting systems. However, as reported in the fiscal year 2005 President’s Budget, the financial management performance of 5 of these agencies had serious flaws as of December 31, 2003. Despite these limitations, we believe the cost data to be sufficiently reliable as indicators of general ranges of cost and overall trends. However, the data should not be used to determine the cumulative costs incurred across all federal agencies.

Based on the information provided by federal agencies, total additional costs reported by federal agencies responding to our questionnaire for the March 17 to April 16, 2003, and May 20 to 30, 2003, code-orange alert periods were less than 1 percent of these agencies’ fiscal year 2003 homeland security funding, as reported to OMB. On the basis of this cost information, we determined that additional average daily costs ranged from about $190 to about $3.7 million across all three code-orange alert periods in our review.
Based on information reported by these agencies, the additional average daily costs incurred across code-orange alert periods have declined over time. Some of these federal agencies attribute this decline to continued enhancement of standard levels of security. Some federal agencies reported that they did not have any additional costs during code-orange alert periods, as they either did not implement any additional protective measures or they redirected already existing resources to implement additional code-orange alert measures rather than employ additional resources. Although federal agencies may not have reported additional costs directly as a result of implementing protective measures for code-orange alerts, actions taken such as redirecting resources from normal operations would have resulted in indirect costs.

Sixteen of 26 federal agencies responding to our questionnaire reported additional costs for the first code-orange alert period in our review—March 17 to April 16, 2003. Fifteen of these agencies also reported additional costs for the second code-orange alert period in our review—May 20 to 30, 2003. For 13 of the 15 federal agencies that reported additional costs for both the first and second code-orange alert periods in our review, we calculated that the additional costs reported by these agencies were less than 1 percent of these agencies’ fiscal year 2003 homeland security funding. This calculation is based on OMB’s 2003 Report to Congress on Combating Terrorism, which presented information federal agencies reported to OMB on the amount of homeland security funding authorized to federal agencies in fiscal year 2003.

For the 16 federal agencies responding to our questionnaire that reported additional costs during the first code-orange alert period in our review, we calculated average daily additional costs, which ranged from about $190 to about $848,000. A cabinet level agency with security responsibilities limited to protecting its facilities and employees reported the least additional costs for this code-orange alert period, while another cabinet level agency that, in addition to securing its facilities, is responsible for the protection of national critical infrastructure assets reported the most additional costs. For the 15 federal agencies that reported additional costs for the second and third—December 21, 2003, to January 9, 2004—code-orange alert periods in our review, we calculated additional average daily costs that ranged from about $240 to about $3.7 million and from about $190 to about $1 million, respectively. The agency that reported the least additional costs for the second and third code-orange alert periods in our review was an independent agency, while the agencies that reported the most additional costs for these code-orange alert periods were generally cabinet agencies that are responsible for the protection of national critical infrastructure assets.

Most of the additional costs federal agencies reported were personnel costs, such as overtime wages or costs for additional security personnel. Appendix V provides additional information regarding cost information submitted by federal agencies.

Based on the cost information provided by federal agencies, we determined that there was a decline in the additional average daily costs incurred by these federal agencies over the three code-orange alert periods in our review.
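To make the "additional average daily cost" figures concrete, the short sketch below shows the arithmetic described above: an agency's total additional costs for an alert period divided by the number of days in that period. The period dates come from this report; the agency totals in the example are hypothetical, not figures any agency reported.

    from datetime import date

    # Code-orange alert periods from this report (inclusive of both endpoints).
    PERIODS = {
        "first":  (date(2003, 3, 17), date(2003, 4, 16)),
        "second": (date(2003, 5, 20), date(2003, 5, 30)),
        "third":  (date(2003, 12, 21), date(2004, 1, 9)),
    }

    def average_daily_cost(total_additional_cost: float, period: str) -> float:
        """Total additional cost divided by the number of days in the period."""
        start, end = PERIODS[period]
        days = (end - start).days + 1  # inclusive day count
        return total_additional_cost / days

    # Hypothetical agency totals, for illustration only.
    print(round(average_daily_cost(500_000, "first")))   # ~16,129 per day over 31 days
    print(round(average_daily_cost(500_000, "second")))  # ~45,455 per day over 11 days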
Of the 15 federal agencies that reported additional costs for the first and third code-orange alert periods, 11 federal agencies experienced an overall decline in the additional average daily costs across the code-orange alert periods. Five of these 11 federal agencies indicated that their additional costs declined across the three code-orange alert periods in our review because they were consistently enhancing their baseline levels of security, which, in turn, required fewer additional protective measures during subsequent code-orange alert periods. Three agencies indicated that the decline in additional average daily costs was due to a reduction in the number of protective measures they had in place. One of these agencies explained that, with the receipt of specific threat information for the third code-orange alert in our review, it was able to determine the most appropriate protective measures to put in place rather than implementing general protective measures.

Six federal agencies reported that they did not have additional costs for at least two of the code-orange alert periods in our review. Four of these agencies indicated that they did not have additional costs because they redirected already existing resources to implement additional protective measures for the code-orange alert periods rather than employ additional resources. For example, 2 agencies said that they were able to increase the frequency of their facility patrols without hiring additional guards or requiring guards to work overtime by closing one of the facility’s entrances during code-orange alerts. Therefore, the guards who would normally secure that entrance were assigned to conduct additional roaming facility patrols. Furthermore, 2 of the 4 agencies indicated that they planned their protective measures with the specific intent that the agency would not incur any additional security-related costs during code-orange alert periods. The remaining 2 agencies reported that they did not have any additional costs because they did not implement additional protective measures for the code-orange alert periods. One agency explained that it did not implement any protective measures during code-yellow or code-orange alert periods because it is located in a privately owned building and is not responsible for the security of its facility, nor does this agency have field offices for which it is responsible.

Although these federal agencies reported that they did not directly incur additional costs to implement protective measures for code-orange alert periods, the consequences of implementing such protective measures may have resulted in indirect costs for these agencies. Some federal agencies responding to our questionnaire indicated they could not quantify these indirect costs. However, federal agencies provided examples of redirection of resources that may have caused them to incur such costs. For example, 13 federal agencies noted that in order to implement code-orange alert measures, they had to redirect existing resources from normal operations. Furthermore, one agency indicated that redirecting resources in response to code-orange alerts prevented the agency from performing mission-related activities, such as deterrence of criminal activity other than terrorism.
Additionally, 16 federal agencies said that as a result of implementing measures for code-orange alerts, there were delays for employees and visitors entering facilities, which may have resulted in loss of productivity among employees and a delay in provision of services. Seven federal agencies indicated that as part of their response to code-orange alerts, they postponed or cancelled agency-sponsored activities such as training for staff development.

DHS has collected limited information on costs incurred by states and localities during code-orange alert periods through its State Homeland Security Grant Program – Part II and the Urban Areas Security Initiative – Part II. States must submit information to DHS to be reimbursed for costs incurred as a result of actions taken to increase critical infrastructure protection during code-orange alerts. However, this cost information does not represent all costs incurred by states and their localities during code-orange alert periods. Therefore, it cannot be used to assess the financial impact of code-orange alerts on states and localities. The U.S. Conference of Mayors also collected information and reported estimates of costs localities incurred in response to code-orange alerts. Moreover, a Director with the Center for Strategic and International Studies estimated and reported costs incurred by federal agencies during code-orange alerts. However, because of limitations in the scope and methodologies used in these estimates, the cost information they reported may not be adequate for making generalizations regarding additional costs states and localities incurred in response to code-orange alerts.

DHS issued an information bulletin to states on March 21, 2003, advising them to capture additional costs incurred by the state and its localities during the March 17 to April 16, 2003, code-orange alert period for the protection of critical infrastructure, in the event that funds became available to reimburse states and localities for these additional costs. Through the fiscal year 2003 State Homeland Security Grant Program – Part II (SHSGP II), DHS made a total of $200 million available to states and local communities to mitigate costs of critical infrastructure protection during the period of hostilities with Iraq and future periods of heightened threat. According to SHSGP II guidelines, SHSGP II funds can be used for public safety agency overtime costs, contract security personnel costs, and state-ordered National Guard deployments required to augment security at critical infrastructure. Additionally, at least 50 percent of a state’s award must be allocated to local communities. Through the fiscal year 2003 Urban Areas Security Initiative – Part II (UASI II), DHS made approximately $125 million available to reimburse select urban areas for costs incurred during the February 7 to February 27, 2003; March 17, 2003, to April 16, 2003; and May 20 to 30, 2003, code-orange alert periods. Specifically, UASI guidelines allowed for the reimbursement of costs associated with overtime and critical infrastructure protection. On January 23, 2004, DHS issued a memorandum to state officials indicating that SHSGP II and UASI II funding could also be used to reimburse states and localities for additional costs incurred for protection of critical infrastructure for the December 21, 2003, to January 9, 2004, code-orange alert period.
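Because the SHSGP II rules just described amount to a small set of checks (eligible cost categories; at least 50 percent of a state's award allocated to local communities), a minimal sketch can illustrate how a submission might be screened against them. The Python below is an assumption-laden illustration, not DHS's actual process or data format: the jurisdiction names, category labels, and amounts are hypothetical, and applying the 50-percent test to submitted costs is only a rough proxy for the rule, which governs the allocation of the award itself.

```python
# A minimal screening sketch for a state's SHSGP II submission: keep only
# eligible cost categories, total costs by level, and flag whether at least
# 50 percent of the eligible total flows to local communities. All names and
# amounts are hypothetical; this is not DHS's actual data format or process.
from dataclasses import dataclass

ELIGIBLE_CATEGORIES = {
    "public safety agency overtime",
    "contract security personnel",
    "state-ordered national guard deployment",
}

@dataclass
class WorksheetRow:
    jurisdiction: str  # state agency or local jurisdiction that incurred the cost
    level: str         # "state" or "local", as DHS asked states to distinguish
    category: str
    amount: float

def screen_submission(rows: list[WorksheetRow]) -> dict:
    """Total eligible costs by level and check the 50-percent local-share rule."""
    totals = {"state": 0.0, "local": 0.0}
    for row in rows:
        if row.category in ELIGIBLE_CATEGORIES:  # drop ineligible categories
            totals[row.level] += row.amount
    grand_total = totals["state"] + totals["local"]
    meets_local_share = grand_total > 0 and totals["local"] / grand_total >= 0.5
    return {**totals, "meets_local_share": meets_local_share}

rows = [
    WorksheetRow("State Police", "state", "public safety agency overtime", 120_000),
    WorksheetRow("Capital City PD", "local", "contract security personnel", 150_000),
    WorksheetRow("State EMA", "state", "equipment purchase", 40_000),  # ineligible category
]
print(screen_submission(rows))
# {'state': 120000.0, 'local': 150000.0, 'meets_local_share': True}
```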
For the March 17 to April 16, 2003, and May 20 to 30, 2003, code-orange alert periods, DHS required states to submit budget detail worksheets, including the name of the state agency or local jurisdiction that incurred the additional critical infrastructure protection costs and the amount the agency or locality requested for reimbursement. For the December 21, 2003, to January 9, 2004, code-orange alert period, DHS provided a more detailed template for the budget detail worksheet, which asked states to identify the critical infrastructure site protected and the amount of costs incurred and personnel deployed for categories such as contract security personnel and emergency operations center overtime. Additionally, for all three code-orange alert periods in our review, DHS asked states to distinguish between state-level and local-level costs. Through SHSGP II and UASI II, states were awarded a specified amount from which they could draw down over a period of 2 years to reimburse them for additional costs incurred during code-orange alert periods. According to the grant guidelines, DHS must approve the budget detail worksheet before states and localities can obligate, expend, or draw down these grant funds. DHS, in monitoring these grant programs, takes steps to validate critical infrastructure protection costs. Additionally, amounts that states and localities expend in excess of $300,000 are subject to an external audit that, when completed, provides assurance regarding the reliability of the cost data.

For several reasons, it is unlikely that the cost information submitted by states to DHS represents all additional costs incurred by states and localities during code-orange alert periods. First, states have up to 2 years from the time the grant is awarded to submit requests for reimbursement of additional code-orange alert costs. Second, some states are still in the process of validating proposed costs incurred by the state and its localities. For example, one state estimated that its state agencies and localities incurred an additional $3.7 million for the March 17 to April 16, 2003, code-orange alert period. However, the state could only validate additional code-orange alert costs of about $1.3 million, and thus could only report this amount as eligible for DHS reimbursement. Third, the cost information submitted by states does not include additional costs for training or the purchase of equipment and materials during code-orange alert periods. Finally, DHS officials told us that not all states and localities that incurred additional costs have requested reimbursement; therefore, not all states and localities have submitted information to DHS on additional code-orange alert costs. Since the cost information does not include all costs incurred by states and localities for code-orange alerts, it should not be used to reach conclusions about the financial impact of these alerts on states and localities.

According to the cost information collected by DHS, as of April 14, 2004, 40 states provided cost information to DHS in order to draw down funds to reimburse additional costs incurred during the March 17 to April 16, 2003, and May 20 to 30, 2003, code-orange alert periods. Based on this cost information, the reported state share of additional code-orange alert costs ranged from about $7,900 to about $8 million for both the first and second code-orange alert periods, which lasted a total of 40 days.
The locality share of additional costs incurred during these two code-orange alert periods ranged from about $2,800 to about $28 million. As of April 14, 2004, 33 states provided information on additional costs incurred during the December 21, 2003, to January 9, 2004, code-orange alert period to DHS. Based on this information, additional costs incurred by state agencies for this code-orange alert period, which lasted 19 days, ranged from about $2,000 to about $7 million. Additional costs incurred by localities during this code-orange alert period ranged from about $3,000 to about $4 million. In general, the states that have numerous critical infrastructure sites, as identified by DHS, were the ones that reported the most additional code-orange alert costs collectively for the state and its localities. Additionally, DHS officials noted that overtime costs for law enforcement or security personnel appear to be the primary expense incurred by states and localities.

You also requested that we determine the extent to which DHS analyzes available cost data related to code-orange alerts and the role that OMB plays in providing guidance to DHS on capturing such costs. Though not required to do so, DHS has not analyzed the cost data collected to identify trends or assess the financial impact code-orange alerts have on states and localities. DHS has not tallied individual or overall state and local costs for any of the increased threat alert periods. However, as cost information submitted by states for reimbursement through SHSGP II and UASI II does not include all costs incurred by states and localities during code-orange alert periods, such analysis may not be appropriate using these data. According to an OMB representative, OMB has not provided specific guidance to DHS in capturing and totaling additional costs that states and localities incurred during periods of heightened national threat levels, nor is it required to do so. However, the representative noted that OMB is concerned about the funds that the federal government expends on these programs and activities.

Prior to this report, only the U.S. Conference of Mayors and a Director with the Center for Strategic and International Studies had attempted to report estimates of costs incurred by various governmental entities in response to code-orange alerts. However, the information reported by the U.S. Conference of Mayors and the Director of the Center for Strategic and International Studies Homeland Security Initiatives may not be adequate to draw conclusions regarding the extent to which responding to code-orange alerts imposes a financial burden on governmental entities. On March 27, 2003, the Conference of Mayors published a report that estimated localities within the United States were spending $69.5 million per week in response to the March 17 to April 16, 2003, code-orange alert period. However, the U.S. Conference of Mayors’ estimate may not provide an adequate basis for drawing conclusions regarding the financial impact of code-orange alerts on localities due to several factors, such as a lack of guidance to localities for developing their estimates and the absence of independent verification or confirmation of the amounts reported to the Conference of Mayors.
For example, the Conference of Mayors surveyed its membership asking them to report, “What are you spending extra per week?” However, according to a Conference of Mayors official, members were not provided guidance on how to develop their costs. Thus, localities could have used different methodologies, potentially resulting in the inclusion of certain costs in one locality’s estimate that may not be included in another locality’s estimate. Additionally, only 145 cities out of the U.S. Conference of Mayors’ total membership of 1,185 responded to the survey, representing a 12 percent response rate. The scope for this study was also somewhat limited in that the U.S. Conference of Mayors issued its report and estimates prior to the conclusion of the March 17 to April 16, 2003, code-orange alert period. Thus, the estimate may not represent all costs incurred during that time period. Finally, the U.S. Conference of Mayors’ staff did not take any additional steps to verify the validity of the estimates provided in response to its survey, nor did they request that localities provide information to assist them in corroborating the localities’ responses.

Similarly, a December 21, 2003, news release from the Center for Strategic and International Studies cited remarks by one of its directors who independently estimated that it costs the nation $1 billion a week to implement protective measures in response to a code-orange alert. According to the official who generated this estimate, it was an informal calculation based primarily on the funds appropriated by Congress to federal agencies for Operation Liberty Shield. The director divided the total amount of federal appropriations related to Operation Liberty Shield by the number of weeks that Operation Liberty Shield lasted. However, appropriated funds are not accurate representations of expenditures or costs incurred by federal agencies. Additionally, Operation Liberty Shield was a comprehensive national plan to increase protection for America’s citizens and the nation’s infrastructure during the war with Iraq. Thus, federal agencies may have taken additional protective measures in relation to the war that are not normally associated with code-orange alerts. As a result, this estimate may not be an accurate reflection of costs incurred by federal agencies in relation to other code-orange alerts.

DHS’s implementation of the Homeland Security Advisory System is evolving, and the responses to our questionnaires from federal agencies and states suggest that DHS has made progress in providing more specific information to federal agencies and states and localities regarding the specific threats and risks they may face. However, DHS has not yet officially documented its protocols for communicating changes in the national threat level, as well as guidance and threat information, to federal agencies and states. The responses we received to our questionnaires indicated continuing confusion on the part of federal agencies and states and localities regarding the process and methods that DHS uses to communicate changes in the national threat level or recommendations for heightened security measures in specific regions or sectors.
Without clearly defined and consistently applied communication policies and procedures, DHS may have difficulty managing the communication expectations of federal agencies and states and effectively communicating the methods, timing, and content of guidance and information—including information on protective measures and potential threats—the department provides to federal agencies and states. We believe that risk communication principles are applicable to the Homeland Security Advisory System and should be applied in DHS communications with federal agencies, states, and localities. Risk communication experts suggest that warnings should include the following principles to provide for early, open, and comprehensive information dissemination and for informed decision making: (1) communication through multiple methods, (2) timely notification, and (3) specific information about the nature, location, and timing of the threat and guidance on actions to take. To the extent that DHS does not communicate specific threat information and guidance on actions to take, federal agencies, states, and localities may not be able to effectively determine their levels of risk, the appropriate protective measures to implement in response to threats, and how to effectively and efficiently focus their limited resources on implementing those appropriate protective measures. Finally, it is important to note that although periods of code-orange alert do result in some additional costs for many federal agencies, states, and localities, the available cost data have many limitations, are not precise or complete, and thus any conclusions based on these data must reflect those limitations.

We recommend that the Secretary of Homeland Security direct the Under Secretary for Information Analysis and Infrastructure Protection to take the following two actions: (1) document communication protocols for notifying federal agencies and states of changes in the national threat level and for providing guidance and threat information to these entities, including methods and time periods for sharing information, to better manage these entities’ expectations regarding the methods, timing, and content of information shared; and (2) incorporate risk communication principles into the Homeland Security Advisory System to assist in determining and documenting information to provide to federal agencies and states, including, to the extent possible, information on the nature, location, and time periods of threats and guidance on protective measures to take in response to those threats.

We provided a draft copy of this report to DHS for comment. DHS generally concurred with the findings and recommendations in the report and provided formal written comments, which are presented in appendix IX. In commenting on the draft report, DHS expressed concern that we generalize examples cited in the report across all states and localities, rather than characterizing the examples as isolated experiences. As previously discussed, we surveyed 56 states and territories to obtain information on their experiences related to national threat level changes. We discuss the results of this questionnaire throughout the report, including information on the number of states that provided similar responses. After citing the number of states that provided a similar response related to a code-orange alert issue, we frequently use examples to illustrate those perspectives.
We did not cite all examples we received, but rather those that most effectively illustrate our message. Thus, we believe the report accurately portrays state perspectives on code-orange alerts. In regard to DHS’s comments on the examples we discuss related to localities, we believe that we appropriately cautioned the reader in the report’s introduction and scope and methodology sections that information from the 16 localities we visited or from which we received questionnaire responses was used only as anecdotal examples and cannot be generalized across all localities in the United States. DHS also provided technical comments, which we have incorporated as appropriate.

We plan no further distribution of this report until 14 days after its date. At that time, we will send copies of this report to the Subcommittee on Terrorism, Technology, and Homeland Security, Senate Committee on the Judiciary; the Subcommittee on National Security, Emerging Threats, and International Relations, House Committee on Government Reform; the Secretary of Homeland Security; the Director, Office of Management and Budget; and other interested parties. Copies will be made available to others on request. In addition, this report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-8777 or by e-mail at jenkinswo@gao.gov. Other GAO contacts and key contributors are listed in appendix X. William O. Jenkins, Jr.

Some federal agencies, states, localities, and foreign countries had threat advisory systems in place prior to the implementation of the Homeland Security Advisory System in March 2002, while others have since developed such systems. Some of these advisory systems were generally similar to the Homeland Security Advisory System, identifying different threat levels and requiring or suggesting certain protective actions be taken at each threat level. However, other systems differed in terms of structural and operational characteristics—such as the number of threat levels, the issuance of local or regional alerts, and the dissemination of threat advisories to the public. Seven of the 25 federal agencies responding to our questionnaire reported that they operated their own threat advisory systems prior to the establishment of the Homeland Security Advisory System in March 2002. One agency, for example, indicated that it developed its own five-level alert system 8 years ago to ensure protection of critical national security assets. These seven agencies currently follow the Homeland Security Advisory System as well as their own agency advisory system that conforms to the Homeland Security Advisory System. Three of these agencies also reported they could independently raise agency threat levels in response to threats or events that specifically affect their operations, regardless of whether the national threat level is raised at the same time. However, they generally cannot lower a facility threat level below that specified by the agency head or other designated agency authority. Further, although these agencies can operate at a threat level that is higher than the Homeland Security Advisory System national threat level (e.g., at code-orange when the national threat level is code-yellow), they generally cannot operate at a lower threat level. Unlike federal civilian agencies, Department of Defense (DOD) military installations are exempt from following the Homeland Security Advisory System.
Accordingly, DOD operates under its own terrorist threat advisory system—known as the Force Protection Condition system. According to DOD officials, this system has five threat conditions—normal, alpha, bravo, charlie, delta—indicating increasing threats of a terrorist attack, and the system prescribes mandatory minimum protective measures for all units and installations for each condition level. Each of the nine DOD Unified Combatant Commands—for example, Central Command (Middle East and Asia)—establishes a force protection condition for the entire Command, based on a variety of information, including threat and vulnerability assessments from such sources as the Department of Homeland Security (DHS), the Federal Bureau of Investigation (FBI), and the Defense Intelligence Agency. Beyond this level of protection, installation and unit commanders may then require additional protective measures, also based on intelligence assessments. This system, therefore, provides flexibility to base commanders to set protective measures based on local threat conditions. Unlike the case for civilian federal agencies, changes in the Homeland Security Advisory System do not necessarily result in changes in DOD’s force protection condition. When the national threat level is raised to code-orange, DOD reviews and analyzes the same intelligence used by DHS to decide to raise the national threat level. Based on this analysis, DOD military commanders then decide whether any change is warranted in their own force protection condition.

As discussed earlier in the report, the Homeland Security Advisory System is not binding on states or localities, and they are not required to conform their advisory systems to the Homeland Security Advisory System. However, 42 of the 43 states responding to our questionnaire indicated they currently followed the Homeland Security Advisory System, an equivalent state system, or both. Eight of the states responding to our questionnaire and 1 locality we visited indicated that they had implemented their threat advisory systems prior to the Homeland Security Advisory System. For example, 1 state told us that it amended its state emergency response plan and implemented a state advisory system in 1998, in response to the bombing of the federal building in Oklahoma City. Twenty-two states responding to our questionnaire and 5 localities we visited indicated that they currently operate their own advisory systems. Most of these states reported that their systems provided information about the type or location of the threat, notified other governmental entities, and identified protective measures to be taken—while most localities indicated their systems conformed to the Homeland Security Advisory System. Some state and local advisory systems are similar to the Homeland Security Advisory System, but contain a different number of threat levels than the Homeland Security Advisory System. For example, one state uses a numbered, four-level threat advisory system, which is similar to the Homeland Security Advisory System but combines the levels of Blue and Yellow into a single threat level; one locality uses a four-level advisory system, which does not include the Homeland Security Advisory System Blue threat level. State and local advisory systems typically identified actions or protective measures that were to be taken at each threat level.
For example, one state advisory system identified state, county, and local government actions, as well as specific security recommendations; while another identified actions for law enforcement agencies, non-law enforcement agencies, businesses, and citizens. One locality advisory system identified general security recommendations, as well as specific agency action checklists identifying a minimum level of response by agencies and departments within the locality. Some states and localities can raise their systems’ threat levels based on specific threats or events independent of changes in the national threat level. One state, for example, raised its state threat level in early February 2003 (prior to the February 2003 code-orange alert) in response to the crash of the space shuttle Columbia. One locality could change its threat level locally based on information and coordination with the local FBI office, the state’s department of public safety, and the locality’s police department.

The United Kingdom has developed a threat advisory system and processes for communicating threat information that are similar to the Homeland Security Advisory System; unlike the U.S. system, however, it does not require that terrorism threat alerts be issued to the public. According to United Kingdom officials, under the United Kingdom’s threat advisory system, (1) threat levels are assigned nationwide, as well as to specific regions and economic sectors; and (2) these levels and related changes are communicated to government and law enforcement agencies and private sector entities with responsibility for critical infrastructure protection, but not to the public. The United Kingdom does not publicly announce these threat warnings because it wants to protect its intelligence sources and avoid alerting terrorists that the government is aware of the threat. If terrorists know that the government is aware of their planned attack, the terrorists may change their plans and modes of operation, allowing them to carry out attacks that are even more lethal. Additionally, the United Kingdom is concerned about causing public anxiety regarding possible threats, when, in most cases, the public cannot do anything to mitigate the threat. However, United Kingdom officials noted that if warnings are necessary to protect public safety from specific and credible threats, the United Kingdom will issue public warnings. Additionally, the United Kingdom instituted a public campaign to encourage public vigilance regarding potential terrorist activity. The campaign included posters warning the public to alert the police to unattended baggage, as shown in figure 1.

Unlike the United Kingdom system, the Australian threat advisory system places a greater emphasis on publicizing changes in national threat levels. Australia implemented its current four-level, national counter-terrorist threat alert system in June 2003. As in the United States, a threat level condition is publicly announced and defined. Under the Australian system, each level of alert is defined as follows: Low—no information to suggest a terrorist attack in Australia. Medium—medium risk of a terrorist attack in Australia. High—high risk of a terrorist attack in Australia. Extreme—terrorist attack is imminent or has occurred.
According to the Australian Attorney-General’s Department, the system was not introduced as a reaction to any particular threat, but rather as an arrangement to help inform national preparation and planning and provide greater flexibility for responding to threats. Accordingly, should any intelligence information come to light which causes the government to change the assessed level of threat, the public is to be advised immediately.

Conversely, Norway does not have a nationwide threat advisory system. According to Norwegian officials, the Norwegian Police Security Service conducts threat assessments—which are graded into levels of low, medium, and high—and these are issued to government agencies with responsibilities for preventing and responding to threats within their jurisdictions. Unlike the Homeland Security Advisory System, there are no standing procedures for communicating these threat assessments directly to local governments, private sector entities, or the general public, but a decision to do so can be made depending on the situation. National government agencies and county governors can be instructed to take action to address various types of emergencies. However, municipalities, private sector entities, and the general public cannot be instructed to take specific action, except in situations where such instructions are warranted by law.

Like Norway, Germany does not have a uniform nationwide system of threat levels or requirements that specific actions be taken by governmental entities in response to different types of emergencies, including terrorist attacks. However, Germany does have a single, central 24-hour communication center. For natural disasters and other threats, this center collects, screens, and processes the incoming information for subsequent forwarding to other government agencies regarding actions to take. According to German officials, the central communication center is concerned primarily with information management, rather than with controlling and warning functions. After receiving the threat assessments from the central communication center, governmental entities at the German federal and state level are each responsible for deciding which measures are to be taken, based on the threat. According to German officials, threat information is communicated to affected persons, individual institutions, the business community, and the general public by law enforcement agencies, the state governments, or the German federal government, according to the nature of the threat or danger concerned and the underlying situation. For example, the Federal Criminal Police Office informs the business community on a regular basis as to the current assessment of the situation regarding Islamic terrorism. Additionally, Germany has a satellite-based warning system that enables official warnings to be broadcast to the public. Government agencies and emergency situation centers are linked via satellite and are able to relay warnings and information on prevailing dangers to the connected media in a matter of seconds.

To determine the process that the Department of Homeland Security (DHS) used to make decisions about changes in the national threat level, we met with and obtained information from DHS officials. We examined this information to identify DHS’s processes for determining whether to raise or lower the national threat level and for issuing threat products to federal agencies, states, localities, and private sector entities.
We also analyzed DHS threat products to determine the type of threat information and guidance on protective measures that DHS included in the products. To determine guidance and information provided to federal agencies, states, and localities; protective measures these entities implemented in response to the three code-orange alerts from March 17 to April 16, 2003; May 20 to 30, 2003; and December 21, 2003, to January 9, 2004; and additional costs these entities reported for the code-orange alert periods, we sent questionnaires to (1) 28 federal agencies; and (2) the homeland security or emergency management offices in the 50 states, the District of Columbia, American Samoa, Guam, the Northern Mariana Islands, Puerto Rico, and the U.S. Virgin Islands. We selected the 28 federal agencies because they reported receiving homeland security funding for fiscal year 2003 to the Office of Management and Budget (OMB) and/or are Chief Financial Officers Act agencies. We sent the questionnaire to the 25 federal agencies that reported homeland security funding for fiscal year 2003 to OMB and to 3 other federal agencies that are Chief Financial Officers Act agencies but did not report homeland security funding for fiscal year 2003.

To develop the questionnaires, we met with and obtained information from 8 federal agencies, 4 states, the District of Columbia, and 9 localities. Overall, the questionnaires sent to federal agencies and states were very similar. We obtained comments on draft versions of the federal questionnaire from the 8 federal agencies. We adapted the final version of the federal questionnaire to create the state questionnaire. We pretested the questionnaires with 4 federal agencies and 3 states and made relevant changes to the questions based on these pretests. See appendixes VII and VIII for the federal and state questionnaires. As of April 20, 2004, we received questionnaire responses from 26 federal agencies, which account for about 99 percent of total fiscal year 2003 nondefense homeland security funding as reported to OMB, and 43 states, for a 77 percent response rate. We made extensive efforts to encourage federal agencies and states to complete and return the questionnaires, such as contacting all nonrespondents on multiple occasions and sending additional copies of questionnaires when requested. We performed this work from October 2003 through May 2004.

Because our surveys were not statistical sample surveys, but rather a survey of a nonprobability selection of federal agencies and a census of all states, there are no sampling errors. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors. For example, measurement errors are introduced if difficulties exist in how a particular question is interpreted or in the sources of information available to respondents in answering a question. In addition, coding errors may occur if mistakes are entered into a database. We took extensive steps in the development of the questionnaires, the collection of data, and the editing and analysis of data to minimize total survey error. As noted above, to reduce measurement error and ensure questions and response categories were interpreted in a consistent manner, we pretested the questionnaires with several federal agencies and states. We edited all completed surveys for consistency, such as ensuring that responses were provided for all appropriate questions, and, if necessary, contacted respondents to clarify responses.
All questionnaire responses were double key-entered into our database (i.e., the entries were 100 percent verified), and random samples of the questionnaires were further verified for completeness and accuracy of data entry. Furthermore, all computer syntax was peer reviewed and verified by separate staff to ensure that the syntax was written and executed correctly.

In addition to sending questionnaires to 28 federal agencies and 56 states, we conducted site visits at 12 localities (eight cities and four counties) and sent a questionnaire to another 8 localities. The 12 localities were Atlanta and Fulton County, Georgia; Denver, Colorado Springs, and Douglas County, Colorado; Norfolk, Virginia; Portland and Wasco County, Oregon; Chicago and Cook County, Illinois; and Boston and Fitchburg, Massachusetts. We selected these localities based on the following criteria: the locality’s receipt of urban area grants from DHS, geographic location, topography (e.g., inland, border, or seaport), and type of locality (e.g., metropolitan or nonmetropolitan area). We selected four cities and one county that received grants from DHS and four cities and three counties that did not. We also selected cities and counties from different geographic regions and with different topographic characteristics, as well as some cities and counties located in metropolitan areas and some cities and counties located in nonmetropolitan areas. We used a structured data collection instrument to interview emergency management officials and first responders in these localities. We selected the 8 localities that we surveyed based on their populations and geographic locations. We received responses from 4 of these localities; 3 with populations of less than 40,000 – Helena, Montana; Mankato, Minnesota; and Rock Springs, Wyoming – and 1 with a population greater than 40,000 – San Jose, California.

To determine the extent to which risk communication principles could be incorporated into the Homeland Security Advisory System, we spoke with and obtained information from individuals and organizations with expertise in homeland security issues and risk communication. We analyzed reports and documents from the ANSER Institute for Homeland Security, Carnegie Mellon University, the Center for Strategic and International Studies, the Department of Health and Human Services, the Harvard Center for Risk Analysis, the Oak Ridge National Laboratory, and the Partnership for Public Warning.

To assess the reliability of cost data provided by federal agencies on our questionnaire, we examined the cost information for obvious errors and inconsistencies and examined responses to the questionnaire items requesting information regarding the development of the cost data. If necessary, we contacted respondents to clarify responses and, when provided, reviewed documentation about the cost data. Federal agencies generated their cost data from various sources such as their financial accounting systems, credit card logs, and security services contracts. This cost information is not precise, nor do the costs likely represent all additional costs for the code-orange alert periods. In some cases, we have concerns about the reliability of the cost data source within particular agencies. For example, 6 of the 16 federal agencies reported that they extracted some of the code-orange alert cost data from their agencies’ financial accounting systems.
As reported in the fiscal year 2005 President’s Budget, the financial management performance of 5 of these agencies had serious flaws as of December 31, 2003. Despite these limitations, we believe the cost data to be sufficiently reliable as indicators of general ranges of cost and overall trends. However, the data should not be used to determine the cumulative costs for all federal agencies for code-orange alert periods. See appendix V for additional information on cost information reported by federal agencies.

To determine the extent to which DHS has collected information on costs reported by states and localities during periods of code-orange alert, we met with and obtained information from DHS officials on costs that states and their localities submitted to DHS for reimbursement for increased critical infrastructure asset protection during the three code-orange alert periods. We examined this information to identify the methods used by DHS to collect cost information from states and localities. We also met with and obtained information from representatives of OMB regarding the extent to which the office provided guidance to DHS for collecting cost information from states and localities. We reported these cost data that DHS collected from states and localities for the three code-orange alert periods only to illustrate the range of costs that states reported to DHS for reimbursement. Cost information submitted by states to DHS does not include all costs for states and localities during the code-orange alert periods. In particular, not all states submitted costs to DHS for reimbursement, and some state agencies and localities in states that did submit cost information may not have reported their costs to the state for submission to DHS. In addition, the cost information submitted by states does not include additional costs for training or equipment and material purchases during code-orange alert periods because these costs are not reimbursable through the critical infrastructure protection grant programs. Moreover, some states have not finished validating costs they plan to submit for reimbursement. Despite these limitations, we believe the cost data to be sufficiently reliable as indicators of general ranges of costs that states submitted for reimbursement to DHS and overall trends. However, because this cost information from states and localities is not complete, it should not be used to reach conclusions about the financial impact of code-orange alerts on states and localities.

To determine the methodologies used by other organizations to develop estimates of costs reported by federal agencies, states, and localities during code-orange alert periods, we spoke with and obtained information from officials at the U.S. Conference of Mayors and the Director of the Center for Strategic and International Studies Homeland Security Initiatives regarding how this organization and this individual developed their estimates. We evaluated the methodologies used by the U.S. Conference of Mayors and the Director of the Center for Strategic and International Studies Homeland Security Initiatives based on their scopes, data collection methods, and analyses to assess the reliability of the cost estimates. Moreover, we examined cost estimates reported by other organizations, including the Council on Foreign Relations, the National League of Cities, the National Association of Counties, and the International Association of Emergency Managers.
We did not include these organizations’ reports in our review because they did not specifically address costs associated with responses to increases in the national threat level.

To obtain information on federal agencies’, states’, and localities’ threat advisory systems, we analyzed questionnaire responses and other documents to determine the number of federal agencies, states, and localities that had their own threat advisory systems in place prior to the establishment of the Homeland Security Advisory System, as well as the number of federal agencies, states, and localities that follow their own threat advisory systems and the Homeland Security Advisory System. We reviewed documentation of the threat advisory systems that these federal agencies, states, and localities provided with their questionnaire responses to identify the characteristics of the systems, including systems’ threat levels and protective measures and conformance to the Homeland Security Advisory System. We also met with and received documents from the Department of Defense on its Force Protection Condition system. Furthermore, we spoke with and obtained information from officials of four foreign countries—Australia, Germany, Norway, and the United Kingdom—on these countries’ threat advisory systems and information sharing processes. We compared the characteristics of these systems with characteristics of the Homeland Security Advisory System to identify similarities and differences between the systems. We selected the four countries because they are democracies similar to the United States that have faced terrorist threats.

Federal agencies and states responding to our questionnaire indicated that they used guidance from various sources, such as the Federal Emergency Management Agency (FEMA), the Federal Protective Service (FPS), the Department of Justice, and the White House, to develop plans for responding to each Homeland Security Advisory System threat level. For example, 12 federal agencies reported using the Department of Justice’s Vulnerability Assessment of Federal Facilities, which established security levels for various types of federal facilities and minimum-security standards for each security level. In addition, to develop their response plans, 8 federal agencies indicated that they used Homeland Security Presidential Directive 3, which established the Homeland Security Advisory System and suggested general protective measures for each advisory system threat level. Six states reported using terrorism alerts and guidelines from FEMA to develop their plans for protective measures for national threat levels.

In addition to their response plans for national threat levels, federal agencies and states responding to our questionnaires reported using guidance and information from various sources to determine protective measures to implement or enhance in response to the three code-orange alert periods from March 17 to April 16, 2003; May 20 to 30, 2003; and December 21, 2003, to January 9, 2004. As shown in tables 6 and 7, these federal agencies reported using guidance, information, and intelligence from such sources as the Department of Homeland Security (DHS), the Federal Bureau of Investigation (FBI), and the White House to determine measures to take in response to the third code-orange alert period in our review. These federal agencies generally reported that this guidance, information, and intelligence was useful and timely.
Results for the other two code-orange alert periods – March 17 to April 16, 2003, and May 20 to 30, 2003 – were consistent with those reported in tables 6 and 7 for the third code-orange alert period. As shown in tables 8 and 9, states responding to our questionnaire also indicated that they used guidance and information from sources such as DHS, other federal entities, and state, territory, and local law enforcement agencies to determine actions to take in response to the third code-orange alert period. These states generally reported that this guidance, information, and intelligence was useful and timely. Results for the other two code-orange alert periods in our review were similar to those reported in tables 8 and 9.

Federal agencies and states responding to our questionnaires reported having a variety of protective measures in place for responding to the three code-orange alert periods from March 17 to April 16, 2003; May 20 to 30, 2003; and December 21, 2003, to January 9, 2004, regardless of whether the measures were most commonly enhanced, maintained at pre-code-orange alert levels, or implemented solely in response to the code-orange alerts. Table 10 provides examples of the protective measures that federal agencies most commonly reported having in place for the third code-orange alert period in our review. Results from the other two code-orange alert periods are consistent with those reported in table 10 for the third code-orange alert period. Table 11 provides examples of the protective measures that states most commonly reported having in place during the third code-orange alert period in our review. Results for the other two code-orange alert periods in our review are similar to those reported in table 11.

To ensure that protective measures operate as intended and are implemented as planned, most of the federal agencies and states responding to our questionnaires indicated that they had conducted tests or exercises on the functionality and reliability of protective measures within the past year. Table 12 provides examples of protective measures on which federal agencies and states reported conducting tests and exercises. In addition, most of the federal agencies and states responding to our questionnaires reported receiving confirmation from component entities, offices, or personnel that protective measures were actually enhanced or implemented during the three code-orange alert periods. Table 13 provides examples of the methods by which federal agencies and states reported receiving confirmation from their component entities, offices, and personnel for the code-orange alert period from December 21, 2003, to January 9, 2004. Results for the other two code-orange alert periods from March 17 to April 16, 2003, and May 20 to 30, 2003, are consistent with those reported in table 13.

In our questionnaire, we asked states to provide information on additional costs incurred by the state during the three code-orange alert periods in our review. However, of the 42 states that responded to our questionnaire and follow the Homeland Security Advisory System, only 6 reported additional costs incurred by state agencies during at least one of the three code-orange alerts in our review. Therefore, we did not collect sufficient cost information from our questionnaire to provide ranges or assess general trends in costs incurred by states during code-orange alert periods.
Twenty-two of the 42 states that responded to our questionnaire and follow the Homeland Security Advisory System provided us with cost information they submitted to DHS in order to be reimbursed for state and local critical infrastructure protection costs through the State Homeland Security Grant Program – Part II and the Urban Areas Security Initiative – Part II. As discussed previously in this report, through these two grant programs, DHS offered financial assistance to reimburse costs incurred by state agencies and localities as a result of increased security measures at critical infrastructure sites during the period of hostilities with Iraq and for other periods of heightened alert. We obtained this critical infrastructure protection cost information from DHS for 40 states and their localities for the March 17 to April 16, 2003, and May 20 to 30, 2003, code-orange alert periods and for 33 states and their localities for the December 21, 2003, to January 9, 2004, code-orange alert period. However, this cost information does not represent all additional costs incurred by states and localities during code-orange alert periods.

We also received information on additional code-orange alert costs from 14 select metropolitan and rural localities. However, information on localities’ costs is most appropriately used anecdotally, as these cities and counties represent a small, nonprobability sample of localities within the United States. The rural localities from which we obtained information indicated that they did not incur additional costs for any of the code-orange alert periods because they did not take significant action in response to the alerts. These localities explained that they had insufficient resources to do so or did not perceive their localities to be at risk.

We would like to acknowledge the time and effort of the agencies and governments that provided information by responding to questionnaires and talking with us during site visits: Atlanta, Ga.; Fulton County, Ga.; Denver, Colo.; Douglas County, Colo.; Colorado Springs, Colo.; Portland, Ore.; Wasco County, Ore.; Boston, Mass.; Fitchburg, Mass.; Norfolk, Va.; Chicago, Ill.; Cook County, Ill.; Mankato, Minn.; Helena, Mont.; San Jose, Calif.; Rock Springs, Wyo.

The U.S. General Accounting Office (GAO) has been requested by Congress to review federal agencies’ security-related protective measures, guidance, and costs for periods when the national threat level was raised from yellow (elevated) to orange (high). As part of this request, GAO is surveying 28 federal agencies that received homeland security funding in fiscal year 2003, as reported to the Office of Management and Budget, and/or are Chief Financial Officers Act agencies. Results from this survey will help GAO to inform Congress of (1) protective measures taken by federal agencies during periods of orange alert, specifically for the periods March 17 to April 16, 2003, May 20 to May 30, 2003, and December 21, 2003, to January 9, 2004; (2) guidance and other information used by federal agencies in implementing those measures; and (3) costs incurred by federal agencies as a result of protective measures implemented during those three orange alert periods.
This questionnaire should be completed by the person(s) most knowledgeable about your agency’s security-related measures, guidance, and costs for the orange alerts from March 17 to April 16, 2003, May 20 to May 30, 2003, and December 21, 2003, to January 9, 2004, including your agency’s protective measures for threat levels; guidance and other information used by your agency in developing and implementing protective measures during those periods; your agency’s methods for tracking or collecting cost data and ensuring data reliability; your agency’s national threat level notification processes; and financial and operational challenges your agency faced during the three orange alert periods. If your agency, or certain of its facilities, remains on orange alert even though the national threat level has been lowered, please answer the questions about the most recent orange alert period considering your agency’s actions and costs through January 9, 2004. Most of the questions can be answered by marking boxes or filling in blanks. Space has been provided at the end of the survey for any additional comments, and we encourage you to provide whatever additional comments you think appropriate. In our report, the responses from your agency will be presented only after they have been aggregated with responses from other responding agencies. GAO will not release individual agency responses to any entity unless requested by Congress or compelled by law. In addition, GAO will take appropriate measures to safeguard any sensitive information provided by your agency, and, upon request, can provide security clearance information for staff reviewing survey responses. Please complete this questionnaire within 2 weeks of receipt. Your agency’s participation is important! A member of our staff will pick up your completed questionnaire. If you have any questions or when you are ready for your questionnaire and any accompanying materials to be picked up, please contact Dr. Jonathan Tumin at (202) 512-3595, Rebecca Gambler at (202) 512-6912, or Kristy Brown at (202) 512-8697.

Please provide the name, title, agency, and telephone number of the primary person completing this questionnaire so that we may contact that person if we need to clarify any responses. Telephone number: (_____)___________________________________

We modified the format of this questionnaire slightly for inclusion in this report, but we did not change the content of the questionnaire.

Definition of term "agency": Any entity within the executive branch, including federal departments, independent establishments, and government corporations. If the questionnaire is to be completed by a federal agency's components, then "agency" refers to the component entity rather than the department.

Agency Protective Measures for National Threat Levels

1. According to Homeland Security Presidential Directive 3, issued in March 2002, federal agencies are responsible for developing their own protective measures and other antiterrorism or self-protection and continuity plans for national threat levels. (See highlighted passage on page 2 of the attachment.) Please provide a copy of the measures along with your completed questionnaire. If you answer #4, please skip to question 5; otherwise, please continue.

2. Did your agency use guidance and/or information from any of the following sources in developing your protective measures for national threat levels?
(Please check one answer in each row.) a. Federal Emergency Management Agency (FEMA) b. Federal Protective Service (FPS) g. Vulnerability assessments for your agency h. Other sources (Please specify.) If you answered "yes" for any source in question 2, please answer question 3: 3. Please list the source and titles or topics of any written guidance used by your agency in developing your protective measures for national threat levels. 5. Homeland Security Presidential Directive 3 requires federal agencies to develop and submit to the President, through the Assistant to the President for Homeland Security, an annual written report on steps taken to develop and implement protective measures for national threat levels. (See highlighted passage on page 2 of the attachment.) Please provide a copy of the report along with your completed questionnaire. [Questions 6 and 7 asked whether the agency follows the Homeland Security Advisory System (HSAS) and/or uses its own threat-advisory system; for each applicable answer choice, respondents were asked to provide the name of the agency's threat-advisory system and a copy of system documentation, if available, along with the completed questionnaire.] If you answered #4, please stop and return this questionnaire according to the instructions on page 1. 5. Agency does not follow the HSAS and does not use its own threat-advisory system, but uses another threat-level system (e.g., the Department of Defense's Force Protection Condition system): Please provide the name of the other threat-advisory system used by your agency. If you answered #5, please stop and return this questionnaire according to the instructions on page 1. 6. Agency does not follow any threat-advisory system. If you answered #6, please stop and return this questionnaire according to the instructions on page 1. 8. [Questions 8a through 8d asked which protective measures the agency had in place for the Code-Yellow alert period and which measures it implemented or increased specifically in response to each Code-Orange alert, that is, measures implemented in addition to the measures used in the preceding Code-Yellow alert period. For each measure, respondents checked "Implemented, Code-Orange only", "Increased use of", "N/A or no change", or "Don't Know". Measure categories included "3. Information collection, analysis, and dissemination" and "5. Other types of measures (Please specify.)"; each category ended with an "Other measures in this category (Please specify.)" row, and a check box allowed respondents to skip a category in which no measures were implemented.] 9. We would like to know about the guidance and/or information/intelligence your agency received and used to determine the protective measures implemented specifically in response to the Code-Orange alert from March 17 to April 16, 2003.
9a. In addition to your agency's planned protective measures for national threat levels, did your agency receive guidance and/or information/intelligence from any of the following sources to determine the measures implemented specifically in response to the HSAS Code-Orange alert from March 17 to April 16, 2003? (Please check one answer in each row under question 9a.) 9b. For each item you answer "yes" in question 9a, please answer: Did your agency use the guidance and/or information/intelligence received from the source to determine the measures implemented specifically in response to the HSAS Code-Orange alert from March 17 to April 16, 2003? 9c. For each item you answer "yes" in question 9b: How useful was the guidance and/or information/intelligence from the source? 9d. Was the guidance and/or information/intelligence from the source timely? (Response columns: Received? Used? Useful? Timely?) Sources: 1. Guidance (e.g., …) a. DHS (including FPS and FEMA) b. Federal Bureau of Investigation (FBI) … e. Other sources (Please specify.) 2. Information/Intelligence (e.g., region, sector, or site-specific) 10. We would like to know about the guidance and/or information/intelligence your agency received and used to determine the protective measures implemented specifically in response to the Code-Orange alert from May 20 to May 30, 2003. 10a. In addition to your agency's planned protective measures for national threat levels, did your agency receive guidance and/or information/intelligence from any of the following sources to determine the measures implemented specifically in response to the HSAS Code-Orange alert from May 20 to May 30, 2003? (Please check one answer in each row under question 10a.) 10b. For each item you answer "yes" in question 10a, please answer: Did your agency use the guidance and/or information/intelligence received from the source to determine the measures implemented specifically in response to the HSAS Code-Orange alert from May 20 to May 30, 2003? 10c. For each item you answer "yes" in question 10b: How useful was the guidance and/or information/intelligence from the source? 10d. Was the guidance and/or information/intelligence from the source timely? (Response columns and source rows as in question 9.) 11. We would like to know about the guidance and/or information/intelligence your agency received and used to determine the protective measures implemented specifically in response to the Code-Orange alert from December 21, 2003, to January 9, 2004. 11a. In addition to your agency's planned protective measures for national threat levels, did your agency receive guidance and/or information/intelligence from any of the following sources to determine the measures implemented specifically in response to the HSAS Code-Orange alert from December 21, 2003, to January 9, 2004? (Please check one answer in each row under question 11a.)
11b. For each item you answer "yes" in question 11a, please answer: Did your agency use the guidance and/or information/intelligence received from the source to determine the measures implemented specifically in response to the HSAS Code-Orange alert from December 21, 2003, to January 9, 2004? 11c. For each item you answer "yes" in question 11b: How useful was the guidance and/or information/intelligence from the source? 11d. Was the guidance and/or information/intelligence from the source timely? (Response columns and source rows as in question 9.) 12. In addition to guidance and information indicated above, what other types of information, if any, would have been helpful to your agency in deciding what protective measures to implement specifically in response to the HSAS Code-Orange alert from: (Please check one answer ("Yes", "No", or "Don't Know-DK") in each row in each applicable column.) a. March 17 to April 16, 2003? b. May 20 to May 30, 2003? c. December 21, 2003, to January 9, 2004? … f. Other types of information (Please specify.) 13. Please describe examples of ways in which protective measures implemented during the Code-Orange alerts (March 17 to April 16, 2003, May 20 to May 30, 2003, and December 21, 2003, to January 9, 2004) benefited your agency. 14. Please describe examples of ways in which your agency's operations were affected during the Code-Orange alerts (March 17 to April 16, 2003, May 20 to May 30, 2003, and December 21, 2003, to January 9, 2004), such as longer lines for visitors or shifting of resources from normal operations. 15. Did your agency receive confirmation from component entities, offices, and/or personnel that the additional protective measures indicated in questions 8b, 8c, and 8d (on pages 6 through 10) were actually implemented during the HSAS Code-Orange alert from: (Please check one answer in each row.) a. March 17 to April 16, 2003? b. May 20 to May 30, 2003? c. December 21, 2003, to January 9, 2004? If you answered "yes" for any of the three Code-Orange alert periods in question 15 (a, b, or c), please answer question 16; otherwise, skip to question 17: 16. How did your agency receive confirmation that the additional protective measures indicated in questions 8b, 8c, and 8d were actually implemented during the HSAS Code-Orange alert from: (Please check one answer ("Yes", "No", or "Don't Know-DK") in each row in each applicable column.) a. March 17 to April 16, 2003? b. May 20 to May 30, 2003? c. December 21, 2003, to January 9, 2004? … d. Other methods (Please specify.) 17. Does your agency have any data on actual or estimated additional security-related costs incurred during the HSAS Code-Orange alert period from March 17 to April 16, 2003? (Please check only one answer.) 1. Yes (Continue with question 18.) 2. No (Skip to question 23.) To provide a context for assessing additional costs incurred during the HSAS Code-Orange alert period from March 17 to April 16, 2003, please answer: 18. What were your agency's total security-related costs for the HSAS Code-Yellow alert period from February 28 to March 16, 2003, that immediately preceded the HSAS Code-Orange alert period from March 17 to April 16, 2003?
19. What additional security-related costs, if any, did your agency incur for protective measures implemented specifically in response to the HSAS Code-Orange alert period from March 17 to April 16, 2003? (NOTE: For each category, please indicate whether the costs provided are actual or estimated, or if you "Don't Know" costs for the category. For categories where no costs were incurred, please list costs as $0. If costs by category cannot be provided, give "Grand total costs".) Orange alert, March 17 to April 16, 2003: a. Personnel (e.g., security personnel, overtime) b. Equipment/materials (e.g., screening materials, patrol vehicles) c. Other costs (e.g., travel, training) d. Grand total costs (add items a, b, and c from above) If you provided data for actual security-related costs in questions 18 and/or 19, please answer questions 20 and 21; otherwise, skip to question 22: 20. Please describe how your agency determined the total and/or additional security-related costs for the HSAS Code-Yellow alert period from February 28 to March 16, 2003, and/or the HSAS Code-Orange alert period from March 17 to April 16, 2003 (e.g., financial accounting system, Microsoft Excel spreadsheet). 21. Please briefly list the procedures used by your agency to review and certify the reliability of these financial data (e.g., internal auditing procedures). If you provided data for estimated security-related costs in questions 18 and/or 19, please answer question 22; otherwise, skip to question 23: 22. Please briefly describe how your agency developed the estimates for total and/or additional security-related costs for the HSAS Code-Yellow alert period from February 28 to March 16, 2003, and/or the HSAS Code-Orange alert period from March 17 to April 16, 2003. 23. Does your agency have any data on actual or estimated additional security-related costs incurred during the HSAS Code-Orange alert period from May 20 to May 30, 2003? (Please check only one answer.) 1. Yes (Continue with question 24.) 2. No (Skip to question 29.) To provide a context for assessing additional costs incurred during the HSAS Code-Orange alert period from May 20 to May 30, 2003, please answer: 24. What were your agency's total security-related costs for the HSAS Code-Yellow alert period from April 17 to May 19, 2003, that preceded the HSAS Code-Orange alert period from May 20 to May 30, 2003? 25. What additional security-related costs, if any, did your agency incur for protective measures implemented specifically in response to the HSAS Code-Orange alert period from May 20 to May 30, 2003? (NOTE: For each category, please indicate whether the costs provided are actual or estimated, or if you "Don't Know" costs for the category. For categories where no costs were incurred, please list costs as $0. If costs by category cannot be provided, give "Grand total costs".) Orange alert, May 20 to May 30, 2003: a. Personnel (e.g., security personnel, overtime) b. Equipment/materials (e.g., screening materials, patrol vehicles) c. Other costs (e.g., travel, training) d. Grand total costs (add items a, b, and c from above) If you provided data for actual security-related costs in questions 24 and/or 25, please answer questions 26 and 27; otherwise, skip to question 28:
26. Please describe how your agency determined the total and/or additional security-related costs for the HSAS Code-Yellow alert period from April 17 to May 19, 2003, and/or the HSAS Code-Orange alert period from May 20 to May 30, 2003 (e.g., financial accounting system, Microsoft Excel spreadsheet). 27. Please briefly list the procedures used by your agency to review and certify the reliability of these financial data (e.g., internal auditing procedures). If you provided data for estimated security-related costs in questions 24 and/or 25, please answer question 28; otherwise, skip to question 29: 28. Please briefly describe how your agency developed the estimates for total and/or additional security-related costs for the HSAS Code-Yellow alert period from April 17 to May 19, 2003, and/or the HSAS Code-Orange alert period from May 20 to May 30, 2003. 29. Does your agency have any data on actual or estimated additional security-related costs incurred during the HSAS Code-Orange alert period from December 21, 2003, to January 9, 2004? (Please check only one answer.) 1. Yes (Continue with question 30.) 2. No (Skip to question 35.) To provide a context for assessing additional costs incurred during the HSAS Code-Orange alert period from December 21, 2003, to January 9, 2004, please answer: 30. What were your agency's total security-related costs for the HSAS Code-Yellow alert period from May 31 to December 20, 2003, that preceded the HSAS Code-Orange alert period from December 21, 2003, to January 9, 2004? 31. What additional security-related costs, if any, did your agency incur for protective measures implemented specifically in response to the HSAS Code-Orange alert period from December 21, 2003, to January 9, 2004? (NOTE: For each category, please indicate whether the costs provided are actual or estimated, or if you "Don't Know" costs for the category. For categories where no costs were incurred, please list costs as $0. If costs by category cannot be provided, give "Grand total costs".) Orange alert, Dec. 21, 2003, to Jan. 9, 2004: a. Personnel (e.g., security personnel, overtime) b. Equipment/materials (e.g., screening materials, patrol vehicles) c. Other costs (e.g., travel, training) d. Grand total costs (add items a, b, and c from above) (For each item, respondents checked "Actual costs", "Estimated costs", or "Don't know costs" and entered a dollar amount: $_______________) If you provided data for actual security-related costs in questions 30 and/or 31, please answer questions 32 and 33; otherwise, skip to question 34: 32. Please describe how your agency determined the total and/or additional security-related costs for the HSAS Code-Yellow alert period from May 31 to December 20, 2003, and/or the HSAS Code-Orange alert period from December 21, 2003, to January 9, 2004 (e.g., financial accounting system, Microsoft Excel spreadsheet). 33. Please briefly list the procedures used by your agency to review and certify the reliability of these financial data (e.g., internal auditing procedures). [Question 34, asked of agencies that provided estimated costs in questions 30 and/or 31, requested a brief description of how the agency developed its estimates, paralleling questions 22 and 28.] 35. How did your agency learn about the HSAS Code-Orange alert from: (Please check one answer ("Yes", "No", or "Don't Know-DK") in each row in each column.) a. March 17 to April 16, 2003? b. May 20 to May 30, 2003? c. December 21, 2003, to January 9, 2004?
a. Direct notification from DHS (not via media sources) … (not via media sources) … d. Other methods (Please specify.) If you answered "yes" that your agency received direct notification from DHS for any period in question 35 (Part A, Part B, or Part C) above, please answer questions 36 and 37; otherwise, skip to question 38: 36. How did DHS notify your agency about the HSAS Code-Orange alert from: (Please check one answer ("Yes", "No", or "Don't Know-DK") in each row in each applicable column.) a. March 17 to April 16, 2003? b. May 20 to May 30, 2003? c. December 21, 2003, to January 9, 2004? … Washington Area Warning Alert System (WAWAS) e. Other methods (Please specify.) 37. What type(s) of information was included in DHS's official notification for the HSAS Code-Orange alert from: (Please check one answer ("Yes", "No", or "Don't Know-DK") in each row in each applicable column.) a. March 17 to April 16, 2003? b. May 20 to May 30, 2003? c. December 21, 2003, to January 9, 2004? … h. Other methods (Please specify.) 39. For each of the methods listed below, please indicate whether or not your agency would like to be notified of future changes in the national threat level through this method. (Please check one answer in each row.) a. Through your agency representatives at the Homeland Security Operations Center b. Through a single official announcement to all federal agencies via telephone, E-mail, or fax c. Through an individual agency message via telephone, E-mail, or fax d. Through an electronic communications system such as the Washington Area Warning Alert System (WAWAS) e. Other methods (Please specify.) Financial and Operational Challenges in Implementing HSAS Code-Orange Alert Measures 40. What financial challenges, if any, did your agency face in responding to the HSAS Code-Orange alert from: (Please check one answer ("Yes", "No", or "Don't Know-DK") in each row in each column.) a. March 17 to April 16, 2003? b. May 20 to May 30, 2003? c. December 21, 2003, to January 9, 2004? … d. Other methods (Please specify.) If you answered "yes" to any financial challenge in rows a through d in question 40 (Part A, Part B, or Part C) above, please answer question 41; otherwise, skip to question 42: 41. Briefly describe one or more examples of financial challenges faced during the alerts. 42. What operational challenges, if any, did your agency face in responding to the HSAS Code-Orange alert from: (Please check one answer ("Yes", "No", or "Don't Know-DK") in each row in each column.) a. March 17 to April 16, 2003? b. May 20 to May 30, 2003? c. December 21, 2003, to January 9, 2004? … i. Other (Please specify.) If you answered "yes" to any operational challenge in rows a through i in question 42 (Part A, Part B, or Part C) above, please answer question 43; otherwise, skip to question 44: 43. Briefly describe one or more examples of operational challenges faced during the alerts.
44. If you have any comments regarding any of the issues covered in this questionnaire or have any other comments about protective measures, guidance, and costs for HSAS Code-Orange alerts, please use the space provided. Thank you for your assistance. Please return the questionnaire and, depending on your answers to questions 1, 5, 6, or 7, any accompanying documentation according to the instructions on page 1. The U.S. General Accounting Office (GAO), an investigative arm of Congress, has been requested by the Congress to review states' and U.S. territories' security-related protective measures, guidance, and costs for periods when the national threat level was raised from yellow (elevated) to orange (high). As part of this request, GAO is surveying the 50 states, U.S. territories, and Washington, D.C., to determine (1) what, if any, protective measures were taken during periods of orange alert, specifically for the periods March 17 to April 16, 2003, May 20 to May 30, 2003, and December 21, 2003, to January 9, 2004; (2) guidance and other information used by states and territories in implementing those measures; and (3) costs incurred by states and territories as a result of protective measures implemented during these three orange alert periods. To better inform the Congress on the Homeland Security Advisory System (HSAS) and identify potential improvements, GAO is also collecting data on (1) applicable threat alert systems used by states and territories prior to the establishment of the HSAS and (2) operational and financial challenges faced by states and territories as a result of responding to code-orange alerts. This questionnaire should be completed by the person(s) most knowledgeable about the guidance received and protective measures taken by your state or territory during the periods identified above, including the protective measures your jurisdiction developed to respond to national threat levels, any threat-advisory system you have in place, types of protective measures taken, costs incurred during these periods of orange alert, how your state or territory was notified of the increase in the national threat level, and financial and operational challenges your state or territory faced during these periods of orange alert. If your state or territory, or certain of its facilities, remains on orange alert even though the national threat level has been lowered, please answer the questions about the most recent orange alert period considering your state or territory's actions and costs through January 9, 2004. Most of the questions can be answered by marking boxes or filling in blanks. Space has also been provided for comments, and we encourage you to provide whatever additional comments you think appropriate; please feel free to type out these comments on a separate attachment (identified by question number) if you prefer. In our report, the responses from your state or territory will be presented only after they have been aggregated with responses from other states and territories. GAO will not release individual responses to any entity unless requested by Congress or compelled by law. In addition, GAO will take appropriate measures to safeguard any sensitive information you provide and, upon request, can provide security clearance information for staff reviewing survey responses. Please complete this questionnaire and return it within 2 weeks of receipt. Your participation is important! A pre-addressed Federal Express envelope has been included to return this questionnaire.
If you have any questions or misplace the return envelope, please contact Nancy Briggs at (202) 512-5703 or Gladys Toro at (202) 512-3047. Please provide the name, title, and telephone number of the primary person completing this questionnaire and your state or territory name so that we may contact that person if we need to clarify any responses. Telephone number: (______)__________________________________ State or Territory: ___________________________________________ We modified the format of this questionnaire slightly for inclusion in this report, but we did not change the content of the questionnaire. Sections I and II of this questionnaire ask you to report on your state's or territory's protective measures and advisory systems for all levels of national alert; the remaining sections ask you specifically about Code-Orange alerts. When completing this questionnaire, please consider only actions taken and costs incurred at the state or territory level; do not include local-level actions and costs. I. State and Territory Protective Measures for Responding to National Threat Levels 1. According to Homeland Security Presidential Directive 3, issued in March 2002, states and territories are encouraged to develop protective measures and other antiterrorism or self-protection and continuity plans for responding to national threat levels (see bolded passage on page 2 of the attachment). If you answered "yes" for any source in question 2, answer question 3; otherwise, skip to question 4. 3. Please list the source and titles or topics of any written guidance used by your state or territory in developing protective measures for responding to national threat levels. If you answered #4, #5, or #6 to question 9, please stop and return this questionnaire according to the instructions on page 1. Orange Alerts from March 17 to April 16, 2003, May 20 to May 30, 2003, and December 21, 2003, to January 9, 2004 10a. Please indicate the protective measures your state or territory (or at least one state or territory department) has in place for Code-Yellow alerts. (Please check "Yes", "No", "Not applicable-N/A", or "Don't Know-DK" for each measure.) 10b, 10c, and 10d. Please indicate the protective measures your state or territory (or at least one state or territory department) implemented or increased the use of specifically in response to the Code-Orange alerts, that is, measures implemented in addition to the measures used in the preceding Code-Yellow alert period.
(Please check “Implemented, Code-Orange only”, “Increased use of”, “N/A or no change,” or “Don’t Know-DK” for each measure in each column) Implemented, Code-Orange only Increased use of N/A or no change DK Implemented, Code-Orange only Increased use of N/A or no change DK Implemented, Code-Orange only Increased use of N/A or no change DK Implemented, Code-Orange only Increased use of N/A or no change DK Implemented, Code-Orange only Increased use of N/A or no change DK Implemented, Code-Orange only Increased use of N/A or no change DK Implemented, Code-Orange only Increased use of N/A or no change DK Implemented, Code-Orange only Increased use of N/A or no change DK Implemented, Code-Orange only Increased use of N/A or no change DK Implemented, Code-Orange only Increased use of N/A or no change DK Implemented, Code-Orange only Increased use of N/A or no change DK s. Other measures in this category (Please specify.) Implemented, Code-Orange only Increased use of N/A or no change DK Implemented, Code-Orange only Increased use of N/A or no change DK Implemented, Code-Orange only Increased use of N/A or no change DK Implemented, Code-Orange only Increased use of N/A or no change DK Implemented, Code-Orange only Increased use of N/A or no change DK Implemented, Code-Orange only Increased use of N/A or no change DK Implemented, Code-Orange only Increased use of N/A or no change DK Implemented, Code-Orange only Increased use of N/A or no change DK Implemented, Code-Orange only Increased use of N/A or no change DK j. Other measures in this category (Please specify.) Implemented, Code-Orange only Increased use of N/A or no change DK Implemented, Code-Orange only Increased use of N/A or no change DK Implemented, Code-Orange only Increased use of N/A or no change DK Implemented, Code-Orange only Increased use of N/A or no change DK Implemented, Code-Orange only Increased use of N/A or no change DK i. Other measures in this category (Please specify.) Implemented, Code-Orange only Increased use of N/A or no change DK Implemented, Code-Orange only Increased use of N/A or no change DK Implemented, Code-Orange only Increased use of N/A or no change DK Implemented, Code-Orange only Increased use of N/A or no change DK Implemented, Code-Orange only Increased use of N/A or no change DK Implemented, Code-Orange only Increased use of N/A or no change DK Implemented, Code-Orange only Increased use of N/A or no change DK Implemented, Code-Orange only Increased use of N/A or no change DK i. Other measures in this category (Please specify.) Implemented, Code-Orange only Increased use of N/A or no change DK 5. Other types of measures (Please specify.) Implemented, Code-Orange only Increased use of N/A or no change DK GAO State Survey on Terrorist Alerts 11. We would like to know about the guidance and/or information/intelligence your state or territory received and used to determine the protective measures to implement specifically in response to the Code-Orange alert from March 17 to April 16, 2003. 11a. In addition to planned protective measures for national threat levels, did your 11c. For each item you answer “yes” in state or territory receive guidance and/or information/intelligence from any of the question 11b, please answer questions 11c following sources to determine the measures implemented specifically in response and 11d: How useful was the guidance and/or to the HSAS Code-Orange alert from March 17 to April 16, 2003? (Please check information/intelligence from the source? 
11b. For each item you answer "yes" in question 11a, please answer: Did your state or territory use the guidance and/or information/intelligence received from the source to determine the measures implemented specifically in response to the HSAS Code-Orange alert from March 17 to April 16, 2003? 11c. For each item you answer "yes" in question 11b: How useful was the guidance and/or information/intelligence from the source? 11d. Was the guidance and/or information/intelligence from the source timely? (Response columns: Received? Used? Useful? Timely?) Sources: 1. Guidance (e.g., …) a. DHS (including FEMA) b. Other federal agency, such as the FBI and its Joint Terrorism Task Force (JTTF) (Please specify.) c. Other state, territorial, or … (Please specify.) e. Regional, state, or local law … g. Other sources (Please specify.) 2. Information/Intelligence (e.g., region, sector, or site-specific) 12. We would like to know about the guidance and/or information/intelligence your state or territory received and used to determine the protective measures to implement specifically in response to the Code-Orange alert from May 20 to May 30, 2003. 12a. In addition to planned protective measures for national threat levels, did your state or territory receive guidance and/or information/intelligence from any of the following sources to determine the measures implemented specifically in response to the HSAS Code-Orange alert from May 20 to May 30, 2003? (Please check one answer in each row under question 12a.) 12b. For each item you answer "yes" in question 12a, please answer: Did your state or territory use the guidance and/or information/intelligence received from the source to determine the measures implemented specifically in response to the HSAS Code-Orange alert from May 20 to May 30, 2003? 12c. For each item you answer "yes" in question 12b: How useful was the guidance and/or information/intelligence from the source? 12d. Was the guidance and/or information/intelligence from the source timely? (Response columns and source rows as in question 11.) 13. We would like to know about the guidance and/or information/intelligence your state or territory received and used to determine the protective measures to implement specifically in response to the Code-Orange alert from December 21, 2003, to January 9, 2004. 13a. In addition to planned protective measures for national threat levels, did your state or territory receive guidance and/or information/intelligence from any of the following sources to determine the measures implemented specifically in response to the HSAS Code-Orange alert from December 21, 2003, to January 9, 2004? (Please check one answer in each row under question 13a.) 13b. For each item you answer "yes" in question 13a, please answer: Did your state or territory use the guidance and/or information/intelligence received from the source to determine the measures implemented specifically in response to the HSAS Code-Orange alert from December 21, 2003, to January 9, 2004? 13c. For each item you answer "yes" in question 13b: How useful was the guidance and/or information/intelligence from the source? 13d. Was the guidance and/or information/intelligence from the source timely? (Response columns and source rows as in question 11.)
14. In addition to guidance and information indicated above, what other types of information, if any, would have been helpful in deciding what protective measures to implement specifically in response to the HSAS Code-Orange alert from: (Please check one answer ("Yes", "No", "Don't Know-DK", or "Already Received") in each row in each applicable column.) a. March 17 to April 16, 2003? b. May 20 to May 30, 2003? c. December 21, 2003, to January 9, 2004? … f. Other methods (Please specify.) 15. Please describe examples of ways in which protective measures implemented during the Code-Orange alerts benefited your state or territory. 16. Please describe examples of ways in which your state or territory's operations were affected during the Code-Orange alerts, such as, but not limited to, shifting resources from normal operations or reduced tourism. 17. Did your state or territory receive confirmation from agencies, offices, and/or personnel within your jurisdiction that the additional protective measures indicated in questions 10b, 10c, and 10d (on pages 6 through 10) were actually implemented during the HSAS Code-Orange alert from: (Please check one answer in each row.) a. March 17 to April 16, 2003? b. May 20 to May 30, 2003? c. December 21, 2003, to January 9, 2004? If you answered "yes" or "some" for any of the three Code-Orange alert periods in question 17 (a, b, or c), please answer question 18; otherwise, skip to question 19: 18. How did your state or territory receive confirmation that the additional protective measures indicated in questions 10b, 10c, and 10d were actually implemented during the HSAS Code-Orange alert from: (Please check one answer ("Yes", "No", or "Don't Know-DK") in each row in each applicable column.) a. March 17 to April 16, 2003? b. May 20 to May 30, 2003? c. December 21, 2003, to January 9, 2004? … d. Other methods (Please specify.) 19. Has DHS requested any of the following types of information on protective measures taken in response to increased national threat levels? (Please check one answer in each row in each applicable column.) … d. Other information (Please specify.) 20. Does your state or territory have any data on actual or estimated additional security-related costs incurred during the HSAS Code-Orange alert period from March 17 to April 16, 2003? (Please check only one answer.) 1. Yes (Continue with question 21.) 2. No (Skip to question 26.) To provide a context for assessing additional costs incurred during the HSAS Code-Orange alert period from March 17 to April 16, 2003, we are assuming that the total security costs incurred during the HSAS Code-Yellow alert period from February 28 to March 16, 2003, will serve as your baseline.
21. What were your state or territory's total security-related costs for the HSAS Code-Yellow alert period from February 28 to March 16, 2003, that immediately preceded the HSAS Code-Orange alert period from March 17 to April 16, 2003? (Please provide your answer in the table below.) 22. What additional security-related costs, if any, did your state or territory incur for protective measures implemented specifically in response to the HSAS Code-Orange alert period from March 17 to April 16, 2003? (Please provide your answer in the table below.) (NOTE: For each category, please indicate whether the costs provided are actual or estimated, or if you "Don't Know". For categories where no costs were incurred, please list costs as $0. If costs by category cannot be provided, give "Grand total costs".) Orange alert, March 17 to April 16, 2003: a. Personnel (e.g., security personnel, overtime) b. Equipment/materials (e.g., screening materials, patrol vehicles) c. Other costs (e.g., travel, training) d. Grand total costs (add items a, b, and c from above) If you provided data for actual security-related costs in questions 21 and/or 22, please answer questions 23 and 24; otherwise, skip to question 25: 23. Please describe how your state or territory determined the total and/or additional security-related costs for the HSAS Code-Yellow alert period from February 28 to March 16, 2003, and/or the HSAS Code-Orange alert period from March 17 to April 16, 2003 (e.g., financial accounting system, Microsoft Excel spreadsheet). 24. Please briefly list the procedures used by your state or territory to review and certify the reliability of these financial data (e.g., internal auditing procedures). If you provided data for estimated security-related costs in questions 21 and/or 22, please answer question 25; otherwise, skip to question 26: 25. Please briefly describe how your state or territory developed the estimates for total and/or additional security-related costs for the HSAS Code-Yellow alert period from February 28 to March 16, 2003, and/or the HSAS Code-Orange alert period from March 17 to April 16, 2003. 26. If you have any other data on costs incurred during the HSAS Code-Orange alert period from March 17 to April 16, 2003, that are not reported above, please briefly describe. 27. Does your state or territory have any data on actual or estimated additional security-related costs incurred during the HSAS Code-Orange alert period from May 20 to May 30, 2003? (Please check only one answer.) 1. Yes (Continue with question 28.) 2. No (Skip to question 33.) To provide a context for assessing additional costs incurred during the HSAS Code-Orange alert period from May 20 to May 30, 2003, we are assuming that the total security costs incurred during the HSAS Code-Yellow alert period from April 17 to May 19, 2003, will serve as your baseline. 28. What were your state or territory's total security-related costs for the HSAS Code-Yellow alert period from April 17 to May 19, 2003, that preceded the HSAS Code-Orange alert period from May 20 to May 30, 2003? (Please provide your answer in the table below.) 29. What additional security-related costs, if any, did your state or territory incur for protective measures implemented specifically in response to the HSAS Code-Orange alert period from May 20 to May 30, 2003? (Please provide your answer in the table below.) (NOTE: For each category, please indicate whether the costs provided are actual or estimated, or if you "Don't Know".
For categories where no costs were incurred, please list costs as $0. If costs by category cannot be provided, give "Grand total costs".) Orange alert, May 20 to May 30, 2003: a. Personnel (e.g., security personnel, overtime) b. Equipment/materials (e.g., screening materials, patrol vehicles) c. Other costs (e.g., travel, training) d. Grand total costs (add items a, b, and c from above) If you provided data for actual security-related costs in questions 28 and/or 29, please answer questions 30 and 31; otherwise, skip to question 32: 30. Please describe how your state or territory determined the total and/or additional security-related costs for the HSAS Code-Yellow alert period from April 17 to May 19, 2003, and/or the HSAS Code-Orange alert period from May 20 to May 30, 2003 (e.g., financial accounting system, Microsoft Excel spreadsheet). 31. Please briefly list the procedures used by your state or territory to review and certify the reliability of these financial data (e.g., internal auditing procedures). If you provided data for estimated security-related costs in questions 28 and/or 29, please answer question 32; otherwise, skip to question 33: 32. Please briefly describe how your state or territory developed the estimates for total and/or additional security-related costs for the HSAS Code-Yellow alert period from April 17 to May 19, 2003, and/or the HSAS Code-Orange alert period from May 20 to May 30, 2003. 33. If you have any other data on costs incurred during the HSAS Code-Orange alert period from May 20 to May 30, 2003, that are not reported above, please briefly describe. 34. Does your state or territory have any data on actual or estimated additional security-related costs incurred during the HSAS Code-Orange alert period from December 21, 2003, to January 9, 2004? (Please check only one answer.) 1. Yes (Continue with question 35.) 2. No (Skip to question 40.) To provide a context for assessing additional costs incurred during the HSAS Code-Orange alert period from December 21, 2003, to January 9, 2004, we are assuming that the total security costs incurred during the HSAS Code-Yellow alert period from May 31 to December 20, 2003, will serve as your baseline. 35. What were your state or territory's total security-related costs for the HSAS Code-Yellow alert period from May 31 to December 20, 2003, that preceded the HSAS Code-Orange alert period from December 21, 2003, to January 9, 2004? (Please provide your answer in the table below.) 36. What additional security-related costs, if any, did your state or territory incur for protective measures implemented specifically in response to the HSAS Code-Orange alert period from December 21, 2003, to January 9, 2004? (Please provide your answer in the table below.) (NOTE: For each category, please indicate whether the costs provided are actual or estimated, or if you "Don't Know". For categories where no costs were incurred, please list costs as $0. If costs by category cannot be provided, give "Grand total costs".) Orange alert, Dec. 21, 2003, to Jan. 9, 2004: a. Personnel (e.g., security personnel, overtime) b. Equipment/materials (e.g., screening materials, patrol vehicles) c. Other costs (e.g., travel, training) d. Grand total costs (add items a, b, and c from above) If you provided data for actual security-related costs in questions 35 and/or 36, please answer questions 37 and 38; otherwise, skip to question 39:
37. Please describe how your state or territory determined the total and/or additional security-related costs for the HSAS Code-Yellow alert period from May 31 to December 20, 2003, and/or the HSAS Code-Orange alert period from December 21, 2003, to January 9, 2004 (e.g., financial accounting system, Microsoft Excel spreadsheet). 38. Please briefly list the procedures used by your state or territory to review and certify the reliability of these financial data (e.g., internal auditing procedures). If you provided data for estimated security-related costs in questions 35 and/or 36, please answer question 39; otherwise, skip to question 40: 39. Please briefly describe how your state or territory developed the estimates for total and/or additional security-related costs for the HSAS Code-Yellow alert period from May 31 to December 20, 2003, and/or the HSAS Code-Orange alert period from December 21, 2003, to January 9, 2004. 40. If you have any other data on costs incurred during the HSAS Code-Orange alert period from December 21, 2003, to January 9, 2004, that are not reported above, please briefly describe. 41. What guidance, if any, did your state or territory use to track costs incurred in response to the Code-Orange alerts of March 17 to April 16, 2003, May 20 to May 30, 2003, and/or December 21, 2003, to January 9, 2004? (Please provide the source and the title or topic of the guidance.) If you answered "yes" to question 42, please answer question 43; otherwise, skip to question 44. 43. How useful were the following types of guidance from DHS on how or whether to track costs, if provided? (Please check one answer in each row.) a. Tracking total costs incurred in response to elevated threat b. Tracking costs incurred that are eligible for federal … c. Other methods (Please specify.) 44. Has DHS requested any data on additional costs incurred in response to the Code-Orange alert periods March 17 to April 16, 2003, May 20 to May 30, 2003, and December 21, 2003, to January 9, 2004, from your state or territory? (Please check one answer in each row.) a. March 17 to April 16, 2003? b. May 20 to May 30, 2003? c. December 21, 2003, to January 9, 2004? 45. Have you submitted any security-related costs for reimbursement to DHS for any of the following Code-Orange alert periods? (Please check one answer in each row.) a. March 17 to April 16, 2003? b. May 20 to May 30, 2003? c. December 21, 2003, to January 9, 2004? 46. Have you used any grant funds to reimburse your Code-Orange alert costs? (Please check only one answer.) If yes, please provide the name of the grant(s) and amount(s). 47. In which of the following ways did your state or territory learn about the HSAS Code-Orange alerts for: (Please check one answer ("Yes", "No", or "Don't Know-DK") in each row in each column.) a. March 17 to April 16, 2003? b. May 20 to May 30, 2003? c. December 21, 2003, to January 9, 2004? … as the FBI (not via media sources) … e. Other methods (Please specify.) If you answered "yes" that your state or territory received direct notification from DHS for any period in question 47 (Part A, Part B, or Part C) above, please answer questions 48 and 49; otherwise, skip to question 50:
48. How did DHS notify your state or territory about the HSAS Code-Orange alerts from: (Please check one answer ("Yes", "No", or "Don't Know-DK") in each row in each applicable column.) a. March 17 to April 16, 2003? b. May 20 to May 30, 2003? c. December 21, 2003, to January 9, 2004? … National Law Enforcement Telecommunications System (NLETS) e. Other methods (Please specify.) 49. What type(s) of information was included in DHS's official notification for the HSAS Code-Orange alerts from: (Please check one answer ("Yes", "No", or "Don't Know-DK") in each row in each applicable column.) a. March 17 to April 16, 2003? b. May 20 to May 30, 2003? c. December 21, 2003, to January 9, 2004? … h. Other methods (Please specify.) 51. For each of the methods listed below, please indicate whether or not your state or territory would like to be notified of future changes in the national threat level through this method. (Please check one answer in each row.) a. Through your representatives at the Department of Homeland Security Operations Center b. Through a single official announcement to all states and territories via telephone, E-mail, or fax c. Through an individual message via telephone, E-mail, or fax d. Through an electronic communications system, such as the National Law Enforcement Telecommunications System (NLETS) e. Other methods (Please specify.) VI. Financial and Operational Challenges in Implementing HSAS Code-Orange Alert Measures 52. What financial challenges, if any, did your state or territory face in responding to the HSAS Code-Orange alerts from: (Please check one answer ("Yes", "No", or "Don't Know-DK") in each row in each column.) a. March 17 to April 16, 2003? b. May 20 to May 30, 2003? c. December 21, 2003, to January 9, 2004? … d. Other challenges (Please specify.) 53. Briefly describe examples of financial challenges faced during the alerts and their consequences. 54. What operational challenges, if any, did your state or territory face in responding to the HSAS Code-Orange alerts from: (Please check one answer ("Yes", "No", or "Don't Know-DK") in each row in each column.) a. March 17 to April 16, 2003? b. May 20 to May 30, 2003? c. December 21, 2003, to January 9, 2004? … j. Other challenges (Please specify.) 55. Briefly describe examples of operational challenges faced during the alerts and their consequences. 56. During the HSAS Code-Orange alerts, was your state or territory prevented from taking any specific protective measure because of a legal prohibition(s)? 1. Yes, state or territory was prevented from taking specific protective measures because of a legal prohibition(s) 2. No, state or territory was not prevented from taking specific protective measures because of a legal prohibition(s) If you answered "yes" to question 56, please continue with question 57; otherwise, skip to question 58. 57. Please identify the specific HSAS Code-Orange alert(s), the protective measure involved, and the prohibition(s) that prevented its implementation, along with any relevant legal citation(s).
58. If you have any comments regarding any of the issues covered in this questionnaire or have any other comments about protective measures, guidance, and costs for HSAS Code-Orange alerts, please use the space provided. Thank you for your assistance. Please return the questionnaire according to the instructions on page 1. In addition to the individuals named above, David P. Alexander, Fredrick D. Berry, Nancy A. Briggs, Kristy N. Brown, Philip D. Caramia, Christine F. Davis, Michele Fejfar, Rebecca Gambler, Catherine M. Hurley, Gladys Toro, Jonathan R. Tumin, Tamika S. Weerasingha, and Kathryn G. Young made key contributions to this report. Dory, Amanda. "American Civil Security: The U.S. Public and Homeland Security." The Washington Quarterly, vol. 27, no. 1 (2003-2004): 37-52. Fischhoff, Baruch. "Assessing and Communicating the Risks of Terrorism." Science and Technology in a Vulnerable World, eds. Albert H. Teich, Stephen D. Nelson, and Stephen J. Lita. Washington, D.C.: American Association for the Advancement of Science, 2003: 51-64. Fischhoff, Baruch, Roxana M. Gonzalez, Deborah A. Small, and Jennifer S. Lerner. "Evaluating the Success of Terror Risk Communications." Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science, vol. 1, no. 4 (2003): 255-258. Gray, George M., and David P. Ropeik. "Dealing with the Dangers of Fear: The Role of Risk Communication." Health Affairs, vol. 21, no. 6 (2002): 106-116. Mileti, Dennis S., and John H. Sorensen. Communication of Emergency Public Warnings: A Social Science Perspective and State-of-the-Art Assessment, a report prepared for the Federal Emergency Management Agency, August 1990. Mitchell, Charles, and Chris Decker. "Apply Risk-Based Decision-Making Methods and Tools to U.S. Navy Antiterrorism Capabilities." Journal of Homeland Security, February 2004. National Research Council. Improving Risk Communication. Washington, D.C.: National Academy Press, 1989. Partnership for Public Warning. A National Strategy for Integrated Public Warning Policy and Capability. McLean, VA: May 16, 2003. Partnership for Public Warning. Developing a Unified All-Hazard Public Warning System. Emmitsburg, MD: Nov. 25, 2002. Ropeik, David, and Paul Slovic. "Risk Communication: A Neglected Tool in Protecting Public Health." Risk in Perspective, vol. 11, no. 2 (2003): 1-4.
Established in March 2002, the Homeland Security Advisory System was designed to disseminate information on the risk of terrorist acts to federal agencies, states, localities, and the public. However, these entities have raised questions about the threat information they receive from the Department of Homeland Security (DHS) and the costs they incurred in responding to heightened alerts. This report examines (1) the decision-making process for changing the advisory system national threat level; (2) information sharing with federal agencies, states, and localities, including the applicability of risk communication principles; (3) protective measures federal agencies, states, and localities implemented during high (code-orange) alert periods; (4) costs federal agencies reported for those periods; and (5) state and local cost information collected by DHS. DHS assigns threat levels for the entire nation and assesses threat conditions for geographic regions and industrial sectors based on analyses of threat information and the vulnerability of potential terrorist targets. DHS has not yet officially documented its protocols for communicating threat level changes and related threat information to federal agencies and states. Such protocols could help DHS better manage these entities' expectations about the methods, timing, and content of information received from DHS. To ensure early, open, and comprehensive information dissemination and allow for informed decision making, risk communication experts suggest that warnings should include (1) multiple communication methods, (2) timely notification, and (3) specific threat information and guidance on actions to take. Federal agencies and states responding to GAO's questionnaires, sent to 28 federal agencies and 56 states and territories, generally indicated that they did not receive specific threat information and guidance, which they believe hindered their ability to determine and implement protective measures. The majority of federal agencies reported operating at heightened security levels regardless of the threat level and thus did not need to implement a substantial number of additional measures in response to code-orange alerts. States reported that their actions during code-orange alerts varied. The costs reported by federal agencies, states, and selected localities are imprecise and may be incomplete, but they provide a general indication of the costs that may have been incurred. Additional costs reported by federal agencies responding to GAO's questionnaire were generally less than 1 percent of the agencies' fiscal year 2003 homeland security funding. DHS collected information on costs incurred by states and localities for critical infrastructure protection during periods of code-orange alert. However, this information does not represent all additional costs incurred by these entities during the code-orange alert periods.
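Both questionnaires reduce their cost questions to simple arithmetic: category costs (personnel; equipment/materials; other) are summed to a grand total for each code-orange period, and the summary above compares those additional costs with each agency's fiscal year 2003 homeland security funding. The sketch below illustrates that calculation; it is a minimal illustration using entirely hypothetical figures, and the function and variable names are ours, not GAO's.

```python
# Illustrative only: hypothetical figures, not data from any survey respondent.
# Mirrors the questionnaires' cost categories and the "grand total costs
# (add items a, b, and c)" instruction, then expresses the additional
# code-orange costs as a share of FY2003 homeland security funding, the
# comparison the report's summary draws.

ORANGE_ALERT_COSTS = {                 # additional costs for one orange period
    "personnel": 450_000.0,            # e.g., security personnel, overtime
    "equipment_materials": 120_000.0,  # e.g., screening materials, patrol vehicles
    "other": 30_000.0,                 # e.g., travel, training
}

FY2003_FUNDING = 75_000_000.0          # hypothetical agency funding level

def grand_total(costs: dict) -> float:
    """Add items a, b, and c, as row d of each cost table instructs."""
    return sum(costs.values())

additional = grand_total(ORANGE_ALERT_COSTS)
share = additional / FY2003_FUNDING

print(f"Additional code-orange costs: ${additional:,.0f}")
print(f"Share of FY2003 homeland security funding: {share:.2%}")
# With these hypothetical figures the share is 0.80 percent, consistent with
# the summary's observation that agencies generally reported under 1 percent.
```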
NASA and its international partners (Japan, Canada, the European Space Agency, and Russia) are building the space station as a permanently orbiting laboratory to conduct materials and life sciences research and earth observation and to provide for commercial utilization and related uses under nearly weightless conditions. Each partner is providing station hardware and crew members, and each is expected to share operating costs and use of the station. Russia became a partner in 1993. As a partner, Russia agreed to provide hardware, such as the Service Module to provide station propulsion, supply vehicles, and related launch services throughout the station's life. However, Russia's funding problems delayed the launch of the Service Module by more than 2 years and raised questions about Russia's ability to support the station during and after assembly. Shortly after Russia came into the program, NASA began studying ways to provide the required propulsion using existing designs and hardware. Later, in response to continuing problems in the Russian space program, such as declining launch rates and funding shortages, NASA initiated additional studies at the Marshall Space Flight Center in Alabama. In the spring of 1995, these studies focused on satisfying the space station's command and control and propulsion requirements. In 1996, Marshall proposed building a propulsion module in-house, and in 1997, NASA considered using existing Russian hardware to provide the needed propulsion. In 1997, in response to continuing concerns over Russia's ability to fulfill its station commitments, NASA developed a contingency plan, which included a strategy to mitigate the risks of both further delays to the Service Module's propulsion capabilities and Russia's failure to meet station propulsion needs. Key elements of the plan involved developing an interim control module for near-term needs and a propulsion module to provide a permanent U.S. capability. In 1998, Boeing Reusable Space Systems proposed a propulsion module concept that was to rely heavily on existing shuttle hardware, provide for on-orbit refueling, and cost about $330 million. This proposal coincided with renewed concern that Russia would not be able to fulfill its commitment to provide station propulsion capability. NASA decided to move ahead with the proposal based on a strategy that included refining the design during subsequent requirements and design reviews. It adopted this strategy based on its assumption that a propulsion module would be required by early 2002 if Russia failed to launch the Service Module. In July 2000, Russia successfully launched the Service Module, thus mitigating NASA's immediate concern that the space station would not have adequate propulsion capability. However, the agency still proceeded with the propulsion module project because of long-term concerns with Russia's ability to fulfill its commitments. NASA proceeded with Boeing's proposal without following fundamental processes involving project planning and execution. Specifically, project management never finalized its project plan or operational concept and did not receive timely approval for its risk management plan. The design ultimately required substantial changes. To prudently manage the project, NASA should have prepared and completed a number of planning documents and established baseline goals. Specifically, NASA did not do the following: Complete a project plan.
Documented project plans help to define realistic time frames for system acquisition and development and identify responsibilities for key tasks, resources, and performance measures. Without them, there is no yardstick by which to measure the progress of the developmental effort. Fully develop a concept of operations document. This document describes the range of operational scenarios under which project hardware will have to function. This document is necessary to define requirements, operational plans, and associated training. The project began with a rudimentary concept that was continually refined during the course of the project. Complete an approved risk management plan in a timely manner. A formal risk management plan helps management identify, assess, and document risk associated with the cost, resource, schedule, and technical aspects of the project. Without such a plan, organizations do not have a disciplined means to predict and mitigate risks. Develop realistic cost and schedule estimates for the life of the project. NASA guidance states that life-cycle cost estimates shall be developed as a part of the project's planning activity. Because of its concerns that Russia would be unable to provide space station propulsion capability, NASA approached the effort with a sense of urgency. Its analysis indicated that the U.S. propulsion capability would be needed by early 2002 if Russia did not meet its commitment. Given the initial estimate of the time that would be needed to develop and launch the propulsion module, NASA believed it was necessary to expedite the project. As a result, NASA chose to simultaneously plan and execute the project, thereby inhibiting the use of fundamental planning documents during project formulation. According to NASA officials, the absence of approved planning documentation and the urgency NASA perceived in executing the project made it difficult to effectively guide the project or measure its progress. For example, roles and responsibilities continued to change, impeding the flow of information. In addition, the absence of accurate technical, cost, and schedule estimates early in the project made it difficult for NASA to track cost variances. As a result, NASA officials told us that the estimated $265 million cost increase announced just before the program was suspended came as a surprise. They also stated that, had more analytical rigor been applied, they would have determined earlier in the program that the Boeing proposal would not meet project goals. This procurement strategy also caused NASA to purchase long-lead items before the project's requirements, concept of operations, and costs were fully understood. According to NASA, prior to the decision to cancel the project, it had obligated about $40 million for the purchase of various long-lead items. Some of these items could be used on the space station or other NASA projects. However, other items were unique to the propulsion module project. Our findings on lapses in project planning are consistent with results from a NASA independent assessment team, which reviewed the propulsion module project between September 1999 and March 2000. This team concluded that the project was at high risk due, in part, to the fact that these critical project management processes were not followed.
Specifically, the team concluded that (1) the project would not be ready to proceed through the design reviews until the project plan was fully developed and approved, (2) a well-integrated risk management program was not in place, and (3) the project could not be completed within the budget or achieve its planned delivery date. NASA proceeded to implement Boeing's proposal before it determined whether the design would fully meet the project's technical requirements. The following top-level requirements were established at the beginning of the program: Provide reboost and attitude control capability. Provide up to 50 percent of total on-orbit space station propulsion needs. Provide 12-year on-orbit life. Maintain orbiter compatibility and transfer capability (pressurized transfer tunnel for crew and supplies). Provide capability to be launched and returned by the shuttle. Conform to NASA safety provisions. Even though the top-level requirements provided a framework to guide propulsion module development, NASA's reviews of the module's detailed technical requirements identified major concerns. For example, an April 1999 systems requirements review found that NASA did not have detailed analyses to quantify the amount of propulsion capability that would be required. NASA space station program personnel later defined the required propulsive capability; however, this definition was not available until a few months before the initial design review and could not, therefore, be used to judge the design's suitability. Subsequent reviews found deficiencies with the design elements of the module itself. Although technical requirements were never finalized and continued to change, NASA accepted and began to implement Boeing's design. Typically, technical requirements are determined prior to selecting a design to ensure that it can satisfy established technical and safety needs. NASA accepted Boeing's proposed design and began to implement it because the agency believed (1) the design was stable and mature because some of the proposed hardware had been used on the space shuttle and (2) costs were essentially fixed because the required development activities were understood and would not change. As NASA implemented the design—establishing the organization for and responsibilities of the project office, purchasing long-lead items, etc.—it discovered a number of unexpected technical complexities and other obstacles in the design. These problems called into question the ability of the design to meet the technical requirements, as indicated in the following examples. A central requirement was for the propulsion module to be refueled while in orbit. NASA began to question Boeing's ability to incorporate on-orbit fuel transfer into the design, citing significant cost, safety, operational, and system design issues. Ultimately, NASA eliminated this requirement and reduced the module's on-orbit life expectancy from 12 to 6 years. These changes meant that the propulsion module had to return to earth for refueling; because the concept of returnability had not been fully analyzed, a new design team had to be established to assess the impacts. The design also proposed a tunnel diameter that proved too small to accommodate crew operations and did not meet space station minimum diameter requirements. In addition to crew passage, the tunnel served as the primary path for equipment/supply movement from the shuttle to the station.
The tunnel size was later increased based on NASA's concerns. The design made extensive use of existing shuttle flight hardware that had not been designed for a 12-year application, and Boeing assumed NASA would accept the hardware based on prior shuttle experience. However, NASA assessed the hardware and found that much of it could not meet a 12-year life requirement. In addition, the development specification did not fully address testing requirements because Boeing assumed a simplified level of testing. However, testing requirements were later expanded. The propulsion module project failed to complete its preliminary design review in December 1999, even though the design had been considered mature. The review concluded, in part, that the initial propulsion system design did not meet the space station and space shuttle safety requirements and that another review of propulsion-related issues was needed. In March 2000, NASA's independent assessment team concluded that the design was not mature, requirements were not adequately defined, and major design impacts were likely. The process by which NASA and Boeing attempted to execute the project resulted in design changes, added effort, schedule slippage, and purchase of long-lead items before the design was fully understood. As a result, the project's total cost estimate increased significantly. In February 1999, Boeing estimated that total program cost would be $479 million and maintained that estimate until April 2000. At that time, Boeing increased its estimate by $265 million—from $479 million to $744 million. Over this period, the scheduled launch date slipped by almost 2 years—from August 2002 to July 2004. Based on Boeing's revised estimates, NASA began to question the project's viability, and in July 2000 it informed Boeing that it would not authorize any additional work on the project. Prior to abandoning its original propulsion module design, NASA established an Alternative Propulsion Module Assessment Team in May 2000 to review design concepts for their potential to meet the space station program's propulsion requirements. According to NASA officials, this effort brought early analytical rigor to requirements definition, which NASA had failed to do in the initial project. During the preliminary phase of the assessment, team members considered many diverse options. These options varied in design factors such as module location, number of propulsion elements, and propellant systems. Each option also had to meet the basic top-level requirements. Specifically, the alternative propulsion module had to provide space station attitude and orbit maneuver control, be located on the U.S. segment of the station, leave two ports available for other vehicles to dock to the station, meet space station safety requirements, and initially provide 50 percent of the space station's propulsion needs. In addition, the design had to be adaptable to eventually provide 100 percent of the station's propulsion needs. The assessment team identified five potential concepts, including two modified versions of Boeing's baseline propulsion module design; the Z1 truss option, which attached to the station's truss system; the split element option with separate propulsion and avionics elements; and the Node X option that had the propulsion elements attached to the Node 1 structural test article. The team designated a subteam to refine each option's design.
In addition, the subteams consulted with a cost assessment group to develop cost estimates for each option. The cost assessment group considered both initial capability costs, such as development and integration, and additional life-cycle cost elements, such as shuttle launches, labor, and spare parts. Using the subteams' analyses, the assessment team ranked the propulsion module alternatives in three categories—programmatic (composed of schedule, cost, and risk), technical (including safety, design, and performance), and integration issues (such as International Space Station and shuttle impacts and logistics issues). The team weighted programmatic issues the highest, at 60 percent, and technical and integration issues at 20 percent each; the sketch below illustrates how such a weighting produces an overall ranking. NASA officials told us that these weightings, typical for this kind of analysis, were approved by the space station program management. The assessment team concluded that the Z1 truss option was the best choice. This option did not require the construction of a pressurized element and was estimated to cost $515 million to develop. The next best alternative was the Node X option, with an estimated cost of $700 million to develop. According to NASA, this option was well understood because Boeing had already integrated a similar structure, Node 1, into the space station. Although the assessment team found the Z1 truss option superior, it recommended a follow-on study because issues associated with this option's integration into the space station were not well understood. Consequently, in July 2000, a joint NASA and Boeing integration evaluation team examined integration risks and identified possible design improvements for the Z1 truss and Node X options. NASA believed Boeing's involvement was important because, as the prime contractor, Boeing would be responsible for integrating the alternative propulsion module into the space station. The methodology that the integration evaluation team used was similar to that used by NASA's assessment team in reviewing the propulsion module options. The integration team designated individual teams to evaluate the Z1 truss and Node X options from various functional perspectives, such as power; structures and mechanisms; guidance, navigation, and control; and contamination. The functional teams developed criteria for their particular discipline and evaluated the two options accordingly. For example, the structures and mechanisms team evaluated the two options for peak loads and structural fatigue, and the power team for average and peak power consumption. Based on its evaluation, each team recommended a preferred option. The integration team's project manager then led an effort to compile and analyze the functional teams' recommendations. Based on this analysis, the team selected the Node X option, which had the highest overall mission suitability and lowest integration risk. In contrast, the Z1 truss option created structural stress, station controllability, and propellant inefficiency issues. The integration team then concluded that Node X was the preferred choice as a follow-on effort to the initial propulsion module project. Figure 1 depicts the Node X propulsion module configuration. The cost assessment group incorporated the results of the integration evaluation team into a new cost analysis for the Z1 truss and Node X options. According to the new cost analysis, the Z1 truss option's integration issues increased its estimated cost to $729 million.
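Mechanically, the weighted ranking described above is a weighted sum of category scores. In the sketch that follows, only the 60/20/20 weights come from the assessment; the option scores are hypothetical, so the resulting order is illustrative rather than the team's actual result.

# Hypothetical illustration of a weighted ranking like the assessment team's.
# Only the 60/20/20 weights are from the report; all scores are invented.
weights = {"programmatic": 0.60, "technical": 0.20, "integration": 0.20}

options = {
    "Modified baseline A": {"programmatic": 5, "technical": 7, "integration": 6},
    "Modified baseline B": {"programmatic": 4, "technical": 7, "integration": 6},
    "Z1 truss": {"programmatic": 8, "technical": 6, "integration": 5},
    "Split element": {"programmatic": 5, "technical": 6, "integration": 6},
    "Node X": {"programmatic": 7, "technical": 7, "integration": 7},
}

# The overall score for each option is the weighted sum of its category scores.
ranked = sorted(
    ((sum(weights[c] * score for c, score in scores.items()), name)
     for name, scores in options.items()),
    reverse=True,
)
for total, name in ranked:
    print(f"{name}: {total:.2f}")

Under such a weighting, the programmatic category dominates: an option that is strong on schedule, cost, and risk can outrank options that score better on technical or integration criteria.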
The cost estimate for the Node X option, in contrast, decreased to $675 million, primarily because the outfitting costs for the structural test article were lower than expected. NASA accepted the integration team's findings and issued a request for proposal on the Node X option in January 2001. However, 2 months later, NASA canceled the follow-on effort because of cost increases in the space station program as a whole. In addition, NASA believed that the risk of Russian nonperformance was reduced because of the Service Module's deployment. NASA acknowledged that problems with the management of the initial propulsion module project contributed to its unsuccessful conclusion, and it is undertaking lessons learned efforts to help avoid similar problems in managing future programs. These assessments include top-down and systems engineering reviews at the Marshall Space Flight Center and an assessment by an engineer at the Johnson Space Center in Texas related specifically to the on-orbit fuel transfer component of the module. According to NASA officials, drafts of the Marshall assessments identified the lack of early systems analysis and of good teamwork as contributing to the project's failure. For example, key components of the design—use of existing hardware, on-orbit fuel transfer, and tunnel size—were never tested for feasibility, partly because Boeing believed that NASA had fully accepted the assumptions inherent in its design. Later, when these assumptions were invalidated or retracted, it became apparent that the original concept was no longer feasible. The top-down assessment also cited a lack of cooperation between NASA and Boeing as inhibiting the timely completion of required design and risk management analyses. In many cases, when NASA and Boeing teams tried to work together, they became confrontational and nonproductive. It concluded, in part, that, in the future, NASA should ensure that (1) early planning documents define what roles the various project teams have and how they interact, (2) government and contractor counterparts are established to encourage collaboration, and (3) NASA and contractor management monitor the interaction of the overall project team and intercede if conflicts interfere with the project's success. Another lessons learned assessment was performed on the on-orbit fuel transfer component of the Boeing proposal, a basic requirement of the proposed design. This assessment concluded that, while some project communication was good, internal contractor communication on this component was less than desirable. For example, center-to-center communication was aided by daily conversations between the on-orbit fuel transfer and propulsion module project managers. However, contractor participation in working group activities was not supported. The assessment also cited NASA's lack of systems analysis early in the program and its difficulty in establishing requirements, estimating cost and schedule, and providing human capital resources as contributing to the on-orbit fuel transfer project failure. The themes cited in NASA's propulsion module project lessons learned studies are consistent with those cited in previous program failure assessments. In December 2000, NASA issued a report synthesizing the findings and recommendations from earlier reports, arriving at five themes it considered necessary for sound project management.
The five themes were developing and supporting exceptional people and teams, delivering advanced technology, understanding and controlling risk, ensuring formulation rigor and implementation discipline, and improving communication. In commenting on a draft of this report, NASA stated that, while it agreed with the report's findings, the project's urgency necessitated its management approach. However, NASA acknowledges that its project execution could have been improved and that it will now strive to apply lessons learned from the propulsion module project experience. Even though NASA perceived schedule urgency in starting and completing the project, it should have followed sound management practices. The early analytical rigor NASA was applying to the follow-on propulsion module effort would have served the agency well in its execution of the initial project. To assess the adequacy of project planning, we reviewed, analyzed, and compared internal NASA guidance and project planning documents. We also discussed planning requirements with cognizant project officials and independent assessment team officials to obtain their views. To assess the extent to which NASA had defined the technical requirements for the propulsion module, we reviewed the results of requirements meetings and approved requirements lists to gain an understanding of the evolution of requirements determinations. We also held discussions with NASA and Boeing officials to obtain their perspectives on the validity of the technical requirements, as well as reasons for requirements changes over the course of the program. To describe NASA's process for reviewing alternative designs, we reviewed NASA briefing materials and other products related to the establishment of the Alternative Propulsion Module Assessment Team and related teams. We also discussed the teams' charter, methodology, and results with team members and other cognizant officials. To describe lessons learned by NASA from the initial program, we reviewed the results of NASA's efforts and discussed their significance with cognizant officials. We conducted our review from July 2000 to April 2001 in accordance with generally accepted government auditing standards. Unless you publicly announce its contents earlier, we plan no further distribution of this report until 10 days from its issue date. At that time, we will send copies to the NASA Administrator; the Director, Office of Management and Budget; and other interested parties. We will also make copies available to others on request. Please contact me at (202) 512-4841 if you or your staff have any questions about this report. Other key contributors to this report are acknowledged in appendix II. Jerry Herley, John Gilchrist, James Beard, Fred Felder, Belinda LaValle, Vijay Barnabas, Diane Berry, Rick Eiserman, Susan Woodward, and Cristina Chaplain made key contributions to this report.
This report discusses the National Aeronautics and Space Administration's (NASA) contract with Boeing Reusable Space Systems to build the now-canceled follow-on propulsion module for the International Space Station. GAO found that the initial propulsion module project did not meet performance, cost, and schedule goals largely because NASA proceeded with Boeing's proposal without following fundamental processes involving project planning and execution. Once it was determined that Boeing's proposal was inadequate, NASA began to assess alternatives to the Boeing-proposed propulsion module. The assessment team defined mission success criteria, identified key design assumptions, and performed comparative analysis on competing designs. On the basis of its analyses, the team recommended a follow-on design. NASA acknowledged that its initial approach to developing a propulsion module was inadequate and contributed to the project's unsuccessful conclusion. NASA officials sought to learn lessons from the project in order to avoid similar problems in managing future programs.
Prior to the early 1970s, the federal government provided affordable multifamily housing for low- and moderate-income households by subsidizing the production of either privately owned housing or government-owned public housing. Under production programs, the subsidy is tied to the unit (project-based), and tenants benefit from reduced rents while living in the subsidized unit. HUD's mortgage financing programs include: Section 202 Elderly and Disabled Housing Direct Loan, which provided below-market interest rates on up to 40-year mortgages to developers of rental housing for low-income elderly and persons with disabilities from 1959 to 1991. Congress changed Section 202 to a grant program in 1990. Section 221(d)(3) Below-Market Interest Rate (BMIR), which provided subsidized financing on private 40-year mortgages to developers of rental housing from 1961 to 1968. Section 236, which provided monthly subsidies to effectively reduce interest rates on private 40-year mortgages for rental housing from 1968 to 1973. Sections 221(d)(3) and 221(d)(4), which insured private mortgages to developers of rental housing beginning in 1961. Section 231, which insured private mortgages to developers of rental housing for the elderly beginning in 1959. In order to reach lower-income tenants, a portion of the units in many properties developed under these production programs was further subsidized by the provision of rental assistance, under programs such as Rent Supplement, Rental Assistance Payments (RAP), and project-based Section 8. In the early 1970s, questions about the production programs' effectiveness led the Congress to explore options for using existing housing to shelter low-income tenants. The Housing and Community Development Act of 1974 included both approaches—a project-based new construction and substantial rehabilitation program and a tenant-based rent certificate program for use in existing housing (currently named the Housing Choice Voucher program)—both referred to as Section 8 housing. Project-based and tenant-based Section 8 assistance is targeted to tenants with incomes no greater than 80 percent of area median income, and tenants generally pay rent equal to 30 percent of adjusted household income. The project-based Section 8 program also provides rental assistance to owners of properties that were not financed with HUD mortgages. Beginning in the late 1980s, owners of some subsidized properties began to be eligible to leave HUD programs by prepaying their mortgages or opting out of their project-based Section 8 rental assistance contracts. Once these owners removed their properties from HUD programs, they were no longer obligated to maintain low rents or accept rental assistance payments. In response, in 1996, Congress created a special type of voucher, known as an enhanced voucher, to protect tenants from rent increases in these properties. Not all property owners repay mortgages as originally scheduled. For example, an owner may refinance the mortgage to pay for improvements to the property. Other owners may experience financial difficulties and default on their mortgages. From January 1993 through December 2002, HUD data show that the agency terminated the insurance on 231 mortgages. About 14 percent were due to mortgages that matured; other reasons included owners' mortgage prepayment (37 percent) and foreclosure (22 percent).
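As a rough check, the reported percentages imply approximate counts of terminated mortgages. The sketch below assumes the percentages are rounded values of the underlying counts.

# Rough reconciliation of HUD's reported insurance terminations,
# January 1993 through December 2002. Percentages are as reported (rounded).
total_terminated = 231
matured = round(0.14 * total_terminated)     # about 32 mortgages reached maturity
prepaid = round(0.37 * total_terminated)     # about 85 owner prepayments
foreclosed = round(0.22 * total_terminated)  # about 51 foreclosures
other = total_terminated - matured - prepaid - foreclosed
print(matured, prepaid, foreclosed, other)   # 32 85 51 63

The roughly 32 matured mortgages implied here are consistent with the 32 properties, discussed later in this statement, whose HUD-insured mortgages matured over the same period.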
Nationwide, 21 percent of subsidized properties with HUD mortgages have mortgages that are scheduled to mature from 2003 through 2013, but the percentage varies significantly by state. Nearly all of these properties were financed under the Section 236, Section 221(d)(3) BMIR, and Section 221(d)(3) programs. Of the 11,267 subsidized properties (containing 914,441 units) with HUD mortgages, 21 percent (2,328 properties containing 236,650 units) have mortgages that are scheduled to mature from 2003 through 2013. The remaining 79 percent of these mortgages (on over 8,900 properties) are scheduled to reach maturity outside of the 10-year period. Additionally, the bulk of these mortgages (about 75 percent) are scheduled to mature in the latter three years of the 10-year period (see fig. 1). This concentration in the latter part of the 10-year period is attributable to the 40-year Section 221(d)(3) BMIR and Section 236 mortgages that HUD helped finance in the late 1960s and 1970s, respectively. As table 1 shows, about 57 percent of the properties with mortgages scheduled to mature in the 10-year period were financed under Section 236, 22 percent under Section 221(d)(3) BMIR, and 19 percent under Section 221(d)(3). Section 202, Section 221(d)(4), and Section 231 accounted for only 3 percent of these properties. (Properties with noninsured rent supplement do not carry a HUD mortgage, so HUD does not track mortgage-level data on them.) The number of mortgages scheduled to mature through 2013 varies greatly by state (see fig. 2). Although the average is 46 mortgages per state (including the District of Columbia), the number ranges from a high of 273 maturing mortgages in California to 3 in Vermont. The states also vary considerably in terms of the percentage of their respective HUD mortgages on subsidized properties that are scheduled to mature through 2013, ranging from 7 percent in Alabama to 53 percent in South Dakota. Over the next 10 years, low-income tenants in over 101,000 units may have to pay higher rents or move when HUD-subsidized mortgages reach maturity. This is because no statutory requirement exists to protect tenants from increases in rent when HUD mortgages mature and rent restrictions are lifted. A number of factors may affect owners' decisions regarding the continued affordability of their properties after mortgages mature, including neighborhood incomes, physical condition of the property, and owners' missions. There is no statutory authority that requires HUD to offer tenants special protections, such as enhanced vouchers, when a HUD mortgage matures. However, tenants who receive rental assistance in properties with maturing mortgages would be eligible for enhanced vouchers under rental assistance programs, such as project-based Section 8. Of the 2,328 subsidized properties with mortgages scheduled to mature through 2013, 480—containing 45,011 units—do not have rental assistance contracts (see table 2). While the remaining 1,848 properties are subsidized with rental assistance, not all units within the properties are covered. According to HUD data, about 30 percent of the units in these properties are not covered—a total of 57,552 units with tenants who do not receive rental assistance. Altogether, the tenants in a total of 102,563 units are not protected under the rental assistance programs.
Of these, 101,730 units—most of them in properties with mortgages under the Section 221(d)(3) BMIR and Section 236 programs—could face higher rents after mortgage maturity when the rent restrictions under these programs are lifted. Table 2 shows the distribution of these properties by rental assistance program. According to a HUD study, tenants in properties with mortgages under the Section 221(d)(3) BMIR and Section 236 programs have an average household income somewhat greater than that for tenants who receive rental assistance; thus, they may be somewhat more able to afford higher rents. Properties financed under the Section 221(d)(3) BMIR program may house tenants with incomes of up to 95 percent of area median income; in comparison, project-based Section 8 does not serve tenants earning more than 80 percent of area median income. Tenants in units covered by a rental assistance program—there are about 134,087 such units in the properties with HUD mortgages scheduled to mature through 2013—will continue to benefit from affordable rents, regardless of when the mortgage matures, as long as the rental assistance contract is in force. When long-term rental assistance contracts expire, HUD may renew them. Currently, HUD generally renews expiring long-term contracts on an annual basis but may go as long as 5 years, and in some cases, 20 years. According to HUD, during the late 1990s, about 90 percent of the property owners renewed their contracts, thereby continuing to provide affordable housing. The extent to which the trend continues will depend on the availability of program funding and housing market conditions. If a rental assistance contract expires prior to mortgage maturity and the owner opts not to renew it, assisted tenants would be eligible for enhanced vouchers. Tenants could potentially be affected by the length of time given to them to adjust to rent increases as well as by the amount of the increase. Property owners are not required to notify tenants when they pay off their mortgage at mortgage maturity. In contrast, property owners electing to opt out of the Section 8 project-based program must notify tenants 1 year in advance of the contract expiration. Owners electing to prepay their mortgages under the Section 236 or Section 221(d)(3) BMIR programs must notify tenants at least 150, but not more than 270, days prior to prepayment. Many factors could influence an owner's decision to keep a property in the affordable inventory or convert to market rate rents upon mortgage maturity. For a profit-motivated owner, the decision may be influenced by the condition of the property and the income levels in the surrounding neighborhood. If the property can be upgraded at a reasonable cost, it may be more profitable to turn the building into condominiums or rental units for higher-income tenants. If repair costs are substantial or if high-income residents are not present in the surrounding area, it may be more profitable to leave the property in the affordable inventory. Tools and incentives offered by state and local agencies may also influence this decision.
In addition, because most of these owners have had the right to prepay their mortgages and opt out of their Section 8 contracts for a number of years, the economic factors that drive a decision to convert to market rate are not unique to mortgage maturity. HUD data show that nonprofit organizations own about 38 percent of the properties with mortgages scheduled to mature in the next 10 years. For a nonprofit owner, the decision would likely be motivated by cash flow considerations since, in theory, these owners are not primarily motivated by economic returns. Since mortgage maturity results in an improvement in property cash flow, reaching mortgage maturity by itself would not necessarily trigger removal from the affordable inventory. For example, the property manager at one nonprofit-owned property whose mortgage matured in the past 10 years and that does not currently have project-based Section 8 assistance told us that no longer having to pay the mortgage left money for repairs needed to keep the units affordable for its low-income senior tenants. Additionally, a nonprofit organization would be more likely to keep the property affordable to low-income tenants because to do otherwise could conflict with its basic mission of providing affordable housing. Another factor is the loss of the interest rate subsidy that occurs when the mortgage matures. When interest rate subsidies were first paid to properties built in the 1960s and 1970s, they represented substantial assistance to property owners. Over time, inflation has reduced the value of this subsidy. For example, the average interest rate subsidy payment for a Section 236 property with a mortgage maturing in the next 10 years is $66 per unit per month. Price levels have roughly quadrupled since 1970, so the payment would have to be about $260 in today's dollars to have the same purchasing power (the sketch below works through this arithmetic). Section 8 and similar project-based rental assistance now provide the bulk of the assistance to these subsidized properties—75 percent of the assistance versus about 25 percent that derives from the Section 236 interest-rate subsidy. Furthermore, inflation will continue to erode the value of the interest-rate subsidy until mortgage maturity, while the rental assistance subsidy is adjusted annually to account for increases in operating costs. Our review of HUD's data showed that HUD-insured mortgages at 32 properties matured between January 1, 1993, and December 31, 2002. Sixteen of the 32 properties are still serving low-income tenants through project-based Section 8 rental assistance contracts. For 13 of these 16 properties, the rental assistance covers 100 percent of the units (799 assisted units), and for the remaining three properties it covers 54 percent of the units (174 assisted units). Using HUD's archived data for inactive properties, we were able to obtain rent information for 10 of the remaining 16 properties. We found that all 10 (none of which have project-based rental assistance contracts) are offering rents that are affordable to tenants with incomes below 50 percent of area median income. Because of the variety of factors that can influence owners' decisions, however, these properties are not necessarily indicative of what will happen to other properties as their HUD mortgages mature. Various property managers we contacted also provided information about their efforts to keep their properties affordable.
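The purchasing-power comparison above reduces to a one-line calculation; the sketch below simply takes the report's rough quadrupling of price levels since 1970 as given.

# Real value of the average Section 236 interest-rate subsidy payment.
# The $66 figure and the rough quadrupling of price levels are as reported.
subsidy_per_unit_month = 66   # dollars, fixed when the subsidy was set
price_level_multiple = 4      # price levels roughly quadrupled since 1970
equivalent_today = subsidy_per_unit_month * price_level_multiple
print(equivalent_today)       # 264, i.e., about $260 in today's dollars

In other words, the fixed nominal payment has lost roughly three-quarters of its original purchasing power.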
One example is a senior complex (nonprofit ownership) that continues generally to charge residents about 30 percent of their income for rent, as it did when it was in HUD's subsidized portfolio. The property manager of two other properties (for-profit ownership) told us that he unsuccessfully sought incentives from HUD in 2002 to keep the properties in the inventory when their mortgages reached maturity, and both properties left HUD's multifamily portfolio. However, both properties are accepting tenant-based vouchers, and the rents in both are affordable to very low-income tenants. HUD does not offer any tools or incentives to keep properties affordable after HUD mortgages mature, although it does offer incentives to maintain affordability for properties that also have expiring rental assistance contracts. According to officials from the four national housing and community development organizations we contacted, because few HUD mortgages have matured to date, their member state and local agencies have not experienced the need to develop programs to deal with mortgage maturity. They noted that their member agencies could offer tools and incentives, such as loans and grants, which might be used by owners to keep properties affordable after mortgage maturity. However, about three-quarters of the state and local agencies that responded to our survey reported that they do not track the maturity dates on HUD mortgages, and none provided examples of tools or incentives used to keep units affordable after mortgage maturity. During the 1990s, HUD established incentive programs to deal with the loss of affordable units because owners were prepaying their mortgages and opting out of their Section 8 contracts, but these incentives do not directly address the termination of the affordability requirements resulting from mortgage maturity. Rather, they can extend, under certain circumstances, the affordability period beyond the original term of the mortgage or allow property owners to be better positioned financially to continue providing affordable housing. The state and local agencies we surveyed identified 18 different tools and incentives used to preserve affordable housing. Of the 18, 6 were funded directly by the federal government, while 12 were administered by state and local governments and not directly federally funded. However, there was no evidence that they have been used to protect properties when HUD mortgages mature. This may be because relatively few mortgages have matured to date. State and local tools and incentives include housing trust funds used to make loans and grants, financial assistance to nonprofit organizations to aid them in acquiring HUD-subsidized properties, and property tax relief to owners of HUD-subsidized properties. These state and local agencies identified several incentives that they believe are the most effective in preserving the affordability of housing for low-income tenants. For example, over 60 percent of the 62 state agencies that responded identified the 4-percent tax credit and HOME programs as effective means for preserving the affordability of HUD-subsidized properties. Of the 76 local agencies that responded, over 70 percent identified HOME as effective and over 60 percent identified CDBG as effective. Over 50 percent of the survey respondents reported that they have no system in place to identify and track properties in their states or localities that could leave HUD's subsidized housing programs.
Further, about three-quarters reported that they do not track the maturity dates of HUD mortgages. Several respondents to our survey noted that it would be helpful to them if HUD could provide information about properties that might leave HUD's programs. Of the 102 agencies that indicated they identified and tracked properties, 56 (55 percent) said that they monitored the scheduled maturity dates of HUD mortgages on local properties (see fig. 3). More agencies (82, or 80 percent) reported that they identified and tracked properties that might opt out of HUD project-based rental assistance contracts. HUD officials noted that they make property-level information available to the public on HUD's multifamily housing Web site. This Web site contains detailed property-level data on active HUD-insured mortgages and expiring rental assistance contracts. However, according to our survey, some state and local agencies perceive that the information is not readily available. One problem may be that these data are in a format that may not be sufficiently “user-friendly” for these agencies. The data must be accessed using database software, which requires users to be proficient in this type of software. HUD officials agreed that the agency could provide more “user-friendly” information because the data are not as accessible to state and local agencies as they could be. They also noted that these agencies could benefit from a “watch list” that identifies properties that may leave HUD subsidy programs in their jurisdictions, such as upon mortgage maturity, especially if such data were updated annually and readily available online so that agencies would have the information needed to prioritize and fund efforts to preserve low-income housing in their jurisdictions. While awareness of the potential for a HUD mortgage to mature or rental assistance to end does not guarantee that state or local agencies will take action to preserve the assisted units' affordability to low-income tenants, such knowledge could better position state and local agencies to use available tools and incentives. Accordingly, we recommended that HUD take steps to provide more widely available and useful information. Using HUD's data that we obtained to respond to your request, we also developed a prototype searchable database, available in CD-ROM format, showing property-level data for each of HUD's subsidized rental properties with mortgages scheduled to mature in the next 10 years. Mr. Chairman, this concludes my prepared statement. I would be happy to answer any questions at this time. For further information on this testimony, please contact David G. Wood at (202) 512-8678, or Andy Finkel at (202) 512-6765. Individuals making key contributions to this testimony included Mark Egger, Daniel Garcia-Diaz, Rich LaMore, and John McGrail. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Housing and Urban Development (HUD) has subsidized the development of about 1.7 million rental units in over 23,000 privately owned properties by offering owners favorable long-term mortgage financing, rental assistance payments, or both in exchange for owners' commitment to house low-income tenants. When owners pay off mortgages--the mortgages "mature"--the subsidized financing ends, raising the possibility of rent increases. Based on a report issued in January 2004, this testimony discusses (1) the number and selected characteristics of HUD-subsidized rental properties with mortgages scheduled to mature in the next 10 years, (2) the potential impact on tenants upon mortgage maturity, and (3) the tools and incentives that HUD, the states, and localities offer owners to keep HUD properties affordable upon mortgage maturity. Nationwide, the HUD mortgages on 2,328 properties--21 percent of the 11,267 subsidized properties with HUD mortgages--are scheduled to mature in the next 10 years, but among states this percentage varies significantly: from 7 percent in Alabama to 53 percent in South Dakota. About three-quarters of these mortgages are scheduled to mature in the last 3 years of the 10-year period. As part of our analysis, we developed a searchable database, available on a CD-ROM, showing property-level data for each of HUD's subsidized rental properties with mortgages scheduled to mature in the next 10 years. Impacts on tenants depend on tenant protections available under program statutes and regulations, as well as on property owners' decisions about their properties. No statutory requirement exists to protect tenants from increases in rent when HUD mortgages mature, absent the existence of rental assistance contracts or other subsidies. Without tenant protection requirements, tenants in over 101,000 units that do not receive rental assistance may have to pay higher rents or move when the HUD mortgages on these properties mature and rent restrictions are lifted. During the past 10 years, HUD-insured mortgages at 32 properties reached maturity, and the majority of these properties are still serving low-income tenants. HUD does not offer incentives to owners to keep properties affordable upon mortgage maturity. While many state and local agencies GAO surveyed offered incentives to preserve affordable housing, they have not directed them specifically at properties where HUD mortgages mature. Most of the agencies do not track HUD mortgage maturity dates for subsidized properties. In addition, although HUD's Web site contains detailed property-level data, some state and local agencies perceive that the information is not readily available.
FAA fulfills its mission of maintaining the NAS by providing services through four lines of business: Air Traffic Organization, Aviation Safety, Airports, and Commercial Space Transportation. The Air Traffic Organization (ATO) is the business line that provides air traffic control (ATC) services to users of the NAS through a network of towers, control centers, and flight service stations. ATC includes a variety of activities that guide and control the flow of aircraft through the NAS. ATO groups these activities into four types of services—oceanic en route (oceanic), domestic en route (en route), terminal, and flight services. The costs to operate and maintain this network and to make improvements to the ATC system are currently funded through excise taxes deposited into the Airport and Airway Trust Fund and contributions from the General Fund of the U.S. Treasury. FAA is subject to various laws that affect agencies' development and use of cost information. These laws were enacted after the Comptroller General's 1985 report, which provided the framework for the reforms needed to improve federal financial management and manage the cost of government. The earliest of these laws—the Chief Financial Officers Act (CFO Act) of 1990—applied to 24 federal departments and agencies, including the Department of Transportation, of which FAA is a part. Another of these laws is the Federal Financial Management Improvement Act of 1996 (FFMIA), which required, among other things, that agencies covered by the CFO Act have systems that comply substantially with federal accounting standards. One such standard is Statement of Federal Financial Accounting Standards (SFFAS) No. 4, Managerial Cost Accounting Standards and Concepts, which states that essential uses of cost information include controlling costs, measuring performance, setting fees, evaluating program costs and benefits, and making economic choice decisions. In plain language, the principal purpose of cost accounting is to assess how much it costs to do whatever is being measured, thus allowing agency management, Congress, and others to analyze that cost information when making decisions. When cost accounting is used as a basis for setting fees or recovering costs, the objective is to ensure that users who receive the related services or products are assigned costs appropriately to avoid unintentional cross-subsidization among users who would then pay more or pay less than the cost of the services they use. The Federal Aviation Reauthorization Act of 1996 requires that FAA develop a cost accounting system that accurately reflects the investment, operating and overhead costs, revenues, and other financial measurement and reporting aspects of its operations. One of the act's stated purposes was to authorize FAA to recover the costs of services from those who benefit from, but do not contribute to, the national aviation system and the services provided by FAA. Specifically, FAA was required to collect overflight fees and to ensure that the fees were directly related to FAA's costs of providing the services rendered. In 1997, the National Civil Aviation Review Commission (the “Mineta Commission”) recommended that FAA establish a cost accounting system to support the objective of FAA operating in a more performance-based, business-like manner. These legislative requirements and recommendations provided the impetus for FAA's decade-long development and deployment of its cost accounting system to all of its lines of business.
The International Civil Aviation Organization (ICAO), an advisory organization affiliated with the United Nations, aims to promote the establishment of international civil aviation standards and recommended practices and procedures. As such, ICAO has issued policies and guidance on assigning costs and establishing charges for air navigation services. The United States is a member of the governing Council of ICAO. The previous FAA study of air traffic control service costs, issued in 1997, allocated fixed costs (those that do not vary with the level of activity or output) and common costs (such as general and administrative overhead, which cannot be traced to a particular product or service) of air traffic control services to user types based on an economic pricing method that assigns more of these costs to users that have a greater willingness to pay them. This pricing method takes into consideration the demand for services and how the pricing of those services may affect demand. However, in its January 2007 report on its study of 2005 costs, FAA assigned costs to users without using this pricing method, an approach that is consistent with the statutory requirement for setting overflight fees and the federal government's policies on establishing user fees. In designing its cost assignment methodology, FAA made simplicity and transparency among its objectives, to facilitate stakeholder understanding and acceptance. The methodology, known as CAMERA (Cost Assignment Methodology for Estimating Resource Allocation), assigns air traffic control service costs to user groups by type of aircraft—turbine and piston—and to aircraft operators—commercial, general aviation, and exempt. After developing six cost pools for air traffic control services, CAMERA assigns costs to Tiers 1, 2, and 3 depending on whether the cost can be directly assigned to a single user group (Tier 1), can be assigned to both user groups (Tier 2), or is overhead, indirect, or other miscellaneous cost allocated to both groups (Tier 3). The total of the three tiers for each user group is allocated to aircraft operators: commercial, general aviation, and exempt. Figure 1 illustrates the CAMERA process. FAA's CAS is the source for the cost data used to develop a cost base for CAMERA. CAS captures cost data from FAA's financial accounting system as those costs are recognized as expenses incurred in accordance with generally accepted accounting principles (GAAP). CAS classifies costs by ATO's four types of air traffic services—oceanic, en route, terminals, and flight services—by assigning direct costs and allocating overhead and other indirect costs to services and facilities that provide those services. CAMERA classifies the CAS data, as adjusted, into six cost pools, namely, oceanic, en route, large hubs, middle terminals, low-activity towers, and flight services. Once the CAMERA costs are classified into the six cost pools, the costs for distinct operations and capital projects within five of the pools (excluding flight services) are put into one of three tiers for assignment to users—turbine or piston. The three tiers of costs are (1) costs exclusively assigned to a single user group, either because the project in question principally benefits a single user group or because the other user group does not drive material or measurable incremental costs (Tier 1);
(2) costs assigned to both user groups because the projects benefit both user groups and use by the secondary user group drives measurable incremental costs (Tier 2); and (3) overhead costs, costs indirectly related to the delivery of services, and other miscellaneous costs that could not be directly assigned to the user groups, which were allocated based on each user group's proportional share of the total costs assigned to the first two tiers (Tier 3). Once all costs are assigned by type of service, tier, and user group, total costs for turbine and piston user groups within each of the five service pools (excluding flight services) are further allocated to subgroups representing the types of aircraft operators—commercial, general aviation, and exempt—based on the proportion of each operator's share of total activity at facilities in the pool. The allocations by type of aircraft operator within the turbine and piston user groups are then combined and serve as the basis on which the proposed user fees, fuel taxes, or general fund appropriations are determined. The sketch below illustrates these mechanics with hypothetical figures. Designing a costing methodology requires, within the parameters of applicable cost accounting principles, that management make judgments about how precise the resulting cost information needs to be and whether the benefits of achieving a higher level of precision justify the additional resources required to refine its cost methodology and related systems. These judgments will in turn influence management's choice of assumptions and cost assignment methods. Different sets of assumptions and methods applied to the same pool of costs can yield different results. FAA designed its CAMERA cost methodology so that the resulting cost assignments would be consistent with federal policies on the establishment of user fees, and, to the extent practicable, with international guidance for air navigation service providers on setting fees. Federal cost accounting standards recognize that one of the purposes of cost information is to set fees, and both the federal standards and the ICAO guidance for implementing its policy on user fees provide direction on allocating these costs. We found that, as designed, key elements of CAMERA used methods that are generally consistent with federal accounting standards and ICAO guidance. However, as discussed subsequently, we identified matters related to the application of certain assumptions and cost assignment methods underlying FAA's methodology that needed better support through additional documentation and analysis to demonstrate that the resulting cost assignments to users are reasonable. Federal cost accounting standards establish a flexible principle for assigning costs, not a specific methodology that agency management must follow. The standards recognize that agency management should select costing methods that best meet their needs, taking into consideration the costs and benefits of reasonable alternatives, and once selected, follow those methods consistently. Further, the standards require that cost information developed for different purposes should be drawn from a common data source, such as consistently using information from an entity's financial management system to prepare all cost analyses.
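The tier mechanics described above can be summarized in a short sketch for a single cost pool. All dollar amounts, shares, and names below are hypothetical; the sketch shows only the arithmetic: Tier 1 costs go entirely to one user group, Tier 2 costs are split by activity share, and Tier 3 costs are allocated in proportion to each group's share of the first two tiers.

# Hypothetical illustration of CAMERA-style tiered cost assignment for one
# cost pool (e.g., en route). All figures are invented for illustration.
tier1 = {"turbine": 500.0, "piston": 20.0}        # directly assigned costs
tier2_total = 300.0                               # costs shared by both groups
activity_share = {"turbine": 0.9, "piston": 0.1}  # e.g., share of miles flown
tier3_total = 100.0                               # overhead and indirect costs

# Tier 2: split shared costs by each group's proportional share of activity.
tier2 = {group: tier2_total * share for group, share in activity_share.items()}

# Tier 3: allocate overhead by each group's share of Tier 1 + Tier 2 costs.
base = {group: tier1[group] + tier2[group] for group in tier1}
total_base = sum(base.values())
tier3 = {group: tier3_total * base[group] / total_base for group in base}

totals = {group: base[group] + tier3[group] for group in base}
print(totals)  # pool costs assigned to the turbine and piston user groups

# Within each group, totals are then allocated to operator subgroups
# (commercial, general aviation, exempt) by their share of pool activity.
operator_share = {"commercial": 0.70, "general aviation": 0.25, "exempt": 0.05}
turbine_by_operator = {op: totals["turbine"] * s for op, s in operator_share.items()}
print(turbine_by_operator)

Allocating Tier 3 in proportion to the first two tiers keeps the method simple and transparent, but it assumes that overhead tracks directly assigned costs, a judgment that, as discussed later, warrants support through documentation and analysis.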
To attribute costs to services or products, the federal standards list three categories of cost assignment methods in order of preference: (1) direct tracing of costs to, in this case, a specifically identifiable user wherever feasible and economically practicable, (2) assigning costs on a cause-and-effect basis, or (3) allocating costs on a reasonable and consistent basis. The standards provide that when seeking to assign costs of resources that are shared by, for example, activities, services, or customers, agency management may find it useful to classify these activities, services, or customers as either primary or secondary. If this method is used, management can then determine which costs are (1) necessary to support (in this case) the primary customer and are therefore unavoidable even without the secondary customer and (2) incurred for the secondary customer and, therefore, are incremental to the costs of the primary customer. The standards also state that management should maintain and use activity information, as appropriate, to allocate costs as necessary, such as accumulating and using data on miles flown as the basis for allocating certain costs of en route services that are not directly assignable to users.

As designed, elements of FAA's methodology are consistent with principles and methods set forth in federal cost accounting standards. FAA's common data source for CAMERA is costs by service type reported in CAS, which FAA also uses for operational analysis. FAA used the three categories of costing methods found in the federal standards to assign costs to users. To facilitate these cost assignments, FAA identified the turbine and piston user groups as either primary or secondary. FAA sought to determine the amount of each Tier 1 and Tier 2 project's costs that did not change with the level of services provided or other relevant activity, and assigned that amount entirely to a primary group of users. FAA used a two-step process for determining the Tier 2 incremental costs for both groups of users. First, FAA determined the amount of a project's total cost that was incremental and varied with the activity of all users. Second, FAA allocated these incremental costs to the primary and secondary user groups based on each group's proportional share of total activity, such as miles flown or number of terminal operations.

Although the international guidance does not specify particular methods for assigning costs, FAA's cost assignment methodology is generally consistent with the principles outlined in the ICAO guidance. ICAO members are not legally required to follow these principles and may apply the guidance differently depending on the circumstances. Further, the ICAO guidance provides that it is essential that all costs be determined in accordance with GAAP and appropriate costing principles so that costs can be analyzed and users are not assigned costs not properly attributable to them. In designing CAMERA, FAA relied on federal cost accounting standards to address these criteria. FAA's CAS provides information by facility and defines air traffic control services in a manner consistent with the ICAO guidance. Further, for en route services, FAA's cost assignment methodology allocates costs among user groups and aircraft operators using the type of activity metric the ICAO guidance suggests is likely to be the most appropriate, namely distance flown. For terminal services, ICAO's guidance states that the number of flights meets the basic requirement for allocating costs. FAA used a more detailed metric—operations—which includes both takeoffs and landings and which represents a reasonable basis on which to allocate ATO's terminal costs among user groups and aircraft operators.
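The two-step Tier 2 process described above lends itself to a short worked illustration. The sketch below, in Python, applies the process to a single project: the portion of cost judged not to vary with activity goes entirely to the primary group, and the incremental remainder is split by activity share. The 70 percent fixed share, the project amount, and the operation counts are invented for illustration; they are not FAA figures.

```python
def assign_tier2(total_cost, fixed_fraction, activity):
    """Step 1: the non-varying portion goes entirely to the primary group.
    Step 2: the incremental remainder is split by activity share."""
    fixed = total_cost * fixed_fraction
    incremental = total_cost - fixed
    total_activity = sum(activity.values())
    return {
        "primary": fixed + incremental * activity["primary"] / total_activity,
        "secondary": incremental * activity["secondary"] / total_activity,
    }

# Hypothetical terminal project: $10M total, 70 percent judged fixed,
# turbine (primary) with 900,000 operations, piston (secondary) with 100,000.
print(assign_tier2(10_000_000, 0.70, {"primary": 900_000, "secondary": 100_000}))
# {'primary': 9700000.0, 'secondary': 300000.0}
```

As the example shows, the fixed-fraction judgment dominates the result: the secondary group's assignment depends entirely on how much of the project's cost is deemed incremental.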
ICAO guidance on pre-funding capital projects states that this funding method should be used subject to appropriate safeguards and when other funding sources are not sufficient or available. The safeguards ICAO cites are focused on ensuring that the pre-funding charges are linked to users that will ultimately benefit from the projects, that users are consulted in advance, and that the accounting for the pre-funding is transparent. FAA's use of pre-funding capital projects, as discussed later in this report, is limited to the excess of current-year budget authority for facilities and equipment (F&E) expenditures over the GAAP-based current-year expense related to F&E. Also, FAA's F&E budget is authorized, and user fees are proposed, under the safeguards of public transparency and congressional oversight. Consistent with ICAO guidance, this limited pre-funding was incorporated into FAA's methodology because other permanent financing for budgeted capital projects is not currently available to FAA and was not provided in the President's proposal.

While elements of FAA's cost assignment methodology design comply with pertinent guidance, we identified matters related to the application of certain assumptions and cost assignment methods underlying the methodology that need further justification to demonstrate that the resulting cost assignments to users are reasonable. Cost accounting is intended to associate an entity's costs with its products, services, or activities. The processes and procedures for making these cost associations must be documented according to federal cost accounting standards. Further, federal internal control standards require that significant events, which can include key decisions, be clearly documented. CAMERA uses certain key assumptions about factors that affect the costs of providing air traffic control services and how to assign those costs to particular users. We found that FAA justified its assumption that turbine and piston aircraft drive costs differently. However, FAA did not (1) adequately document the basis on which it assigned costs to turbine and piston user groups or (2) conduct sufficient analysis (e.g., econometric analysis) to support its assumption that all types of aircraft with the same type of engine (e.g., smaller jet aircraft versus larger commercial jets) affect costs in the same manner. Further, the precision of FAA's approach to allocating overhead, indirect, and other miscellaneous costs might be improved by using allocations previously entered into CAS and, for certain of these costs, by using more appropriate allocation methods. Because FAA has not adequately supported certain assumptions and methods, it is not able to demonstrate conclusively whether the resulting cost assignments are reasonable.

FAA analyzed the activities related to the delivery of air traffic control services and found that different types of aircraft and aircraft operations have different effects on FAA's workload and the associated costs to provide its services. FAA determined that the principal indicator of the differences between aircraft and aircraft operations—in terms of the air traffic control workload they represent and as cost drivers—is whether the aircraft operate with turbine or piston engines.
Turbine aircraft fly at higher cruising altitudes, higher speeds, and normally under instrument flight rules (IFR), which require that they be "controlled" by air traffic controllers through en route airspace and for takeoffs and landings. Turbine aircraft are also more likely to fly in all weather conditions, which can affect the capacity of the NAS. Factors such as aircraft speed and weight also affect which airports turbine aircraft can use. Piston aircraft, as a group, fly more often under visual flight rules (VFR) than IFR and fly at lower cruising altitudes and lower speeds. Aircraft flying under VFR may not require air traffic control services if they do not fly to airports that have control towers.

Having appropriately identified types of aircraft and aircraft operations as cost drivers, FAA placed each project into one of three cost tiers depending on whether and to what extent the costs were related to the delivery of services to user groups. The costs of projects placed into the first two tiers were then assigned to the turbine and piston user groups, based primarily on the input of internal subject matter experts (SMEs); as discussed later, the costs of Tier 3 were allocated proportionally to user groups based on the total costs assigned through the Tier 1 and Tier 2 processes. According to FAA, these SMEs were selected from a cross-section of en route and terminal facilities and air traffic service units and were collectively knowledgeable in the delivery of air traffic control services; airspace usage; and FAA's financial, cost, and activity data systems. FAA officials told us that they obtained input from the SMEs on matters such as the specific activities necessary to deliver services; differences in the services provided to different user groups and the resources consumed to provide those services; and how factors such as traffic volume, mix of operators and aircraft type, weather, and congestion affect FAA's workload. Further, to help quantify the amount of incremental costs, FAA asked the SMEs how ATC services and costs would be affected if a group of users ceased operations altogether or if a user group permanently increased its operations by a certain percentage.

FAA also performed regression analyses to corroborate the input received from the SMEs on the percentage of a project's costs that varied with volume of activity. We noted that the results of some of the analyses were either different from the cost assignment decisions based upon SME input or inconclusive. When such differences arose, FAA relied on the judgment of the SMEs rather than the results of the regression analyses. FAA officials said they chose to rely on SME input over the results of the regression analyses because their past experience had been that regressions would produce results that were indicative, but not conclusive, and that performing more complex regressions would make the cost assignments less transparent and more difficult for external stakeholders to understand. FAA also explained that the aggregation of certain related but different projects and their costs was necessary to facilitate the SMEs' evaluation of these costs. This aggregation, however, may have contributed to some regression results implying different cost assignment decisions than the cost assignment decisions based on SME input and may also have contributed to other regression results being inconclusive.
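The report does not specify the form of FAA's regressions, so the sketch below uses a deliberately simple one-variable ordinary least squares fit on invented facility-level data to illustrate the kind of corroboration described above, that is, estimating what share of a cost pool varies with activity:

```python
def ols(x, y):
    """Ordinary least squares fit of y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Invented facility-level data: annual operations and project cost ($000s).
operations = [50_000, 80_000, 120_000, 200_000, 300_000]
project_cost = [1_100, 1_240, 1_430, 1_810, 2_300]

a, b = ols(operations, project_cost)
mean_ops = sum(operations) / len(operations)
mean_cost = sum(project_cost) / len(project_cost)
variable_share = b * mean_ops / mean_cost  # share of cost that moves with activity
print(f"fixed intercept = {a:.0f}, variable share = {variable_share:.0%}")
```

The intercept stands in for the fixed portion of costs, which, as with FAA's regressions, must be inferred by extrapolating well below the observed range of activity; this is one reason such results may be indicative rather than conclusive.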
Although the final decisions as to the percentage of total costs attributable to the user groups were documented, the key input from SMEs and the rationale linking this key information and related regression analyses with the final cost assignments were not well documented. FAA officials believe that the agency adequately analyzed the SME information in preparing its cost study and explained that the agency lacked sufficient documentation of SME input and the rationale linking that input to the final cost assignment decisions because the meetings with the SMEs were part of the early development of the methodology, which at that point was essentially a work-in-progress. We acknowledge that the development of a cost assignment methodology is an iterative process and that the judgment of those individuals—including the SMEs—most knowledgeable of the business, its customers, and the factors that drive costs is essential to this process. However, the effects of the SME input and related regression analyses on the final cost assignments are critical for explaining decisions about the resulting cost assignments. Therefore, documentation of the input and rationale is needed to provide a basis for justifying current decisions as well as for evaluating any future changes to the assumptions that drive cost assignment decisions.

Further, we acknowledge the challenges faced when trying to perform regression analyses to quantify the relationship between costs and the activity presumed to drive those costs. Improving the reliability of these regressions may involve further analysis of the cost drivers and improving the quality of the underlying data. In most cases, moreover, using the results of the regressions would have required significant extrapolation from the observable data to the origin. Performing more detailed statistical analysis to support or corroborate its conclusions may assist FAA in effectively demonstrating to stakeholders that its cost assignment methodology is a reasonable basis on which to recover costs.

CAMERA aggregated (pooled) certain facility; service; ATO; and allocated FAA headquarters, regional, and accrued expenses together (classified as Tier 3 costs) before allocating those costs to the turbine and piston user groups. The Tier 3 costs were allocated based on each group's proportional share, by service, of total costs directly assigned or allocated through the Tier 1 and Tier 2 processes. CAMERA pooled these costs because FAA determined that (1) the costs were not directly related to the delivery of services and, therefore, did not vary with the volume of user activity, (2) the inherent nature of the costs did not allow for a direct assignment to either of the two user groups, or (3) the underlying transactions did not have sufficient data in CAS to directly assign the costs to a particular facility and service that would permit further analysis and allocation to the user groups in Tier 1 or Tier 2. However, we found that FAA's CAS had already associated some like costs to specific services and projects. The CAS assignments to services and projects could have been retained, avoiding CAMERA's aggregation and reallocation among all types of services, which affects the ultimate allocation of these costs to user groups.
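The Tier 3 rule described above can be expressed compactly. The sketch below, using hypothetical amounts rather than FAA data, allocates a pooled overhead amount to the two user groups in proportion to their combined Tier 1 and Tier 2 totals within a service:

```python
def allocate_tier3(tier3_pool, tier12_assigned):
    """Allocate pooled Tier 3 costs in proportion to Tier 1 + Tier 2 totals."""
    total = sum(tier12_assigned.values())
    return {group: tier3_pool * assigned / total
            for group, assigned in tier12_assigned.items()}

# Hypothetical en route service: $850M assigned to turbine and $150M to
# piston through Tiers 1 and 2, with a $200M Tier 3 pool to spread.
tier12 = {"turbine": 850_000_000, "piston": 150_000_000}
print(allocate_tier3(200_000_000, tier12))
# {'turbine': 170000000.0, 'piston': 30000000.0}, an 85/15 split
# mirroring the Tier 1 and Tier 2 proportions.
```

Note that any imprecision in the Tier 1 and Tier 2 assignments is inherited here, since the Tier 3 shares are derived from them.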
We also found that certain costs could have been allocated in a manner that resulted in a more precise distribution between the user groups. For example:

- certain telecommunication and flight inspection costs were allocated to all services, even though they related only to terminal services;
- indirect labor costs of equipment maintenance personnel were allocated to both turbine and piston user groups even though some of the related equipment and direct labor costs were assigned to a single user group in Tier 1; and
- annual leave, workers' compensation, pension, and postretirement health costs were allocated to all user groups based on each group's share of direct labor and other nonlabor costs instead of basing the allocation only on the labor costs to which these benefit costs more closely relate (the sketch following this discussion illustrates the difference the allocation base can make).

These cost allocation processes reflect the CAMERA methodology's underlying objectives of simplicity and transparency. The 2006 draft report of the contractor who assisted FAA in developing the cost methodology states that the benefit of the cost allocation approach is the simplicity and transparency achieved by virtue of not having to rely on a highly complex system for allocating costs. FAA designed CAMERA to avoid the CAS process of allocating the same costs more than once. However, the report further notes that FAA's CAS was "designed to support the management of costs for highly detailed activities at individual locations, so a more complex allocation system is required" than the contractor considered necessary for purposes of assigning costs to users. CAS was designed to allocate costs to the facilities that provide services to users so that managers could use this cost information in making operational decisions. FAA also uses CAS information to prepare its external statement of net costs, which is audited by an independent public accounting firm.

Despite FAA's reliance on CAS for these and other purposes and despite the fact that in fiscal year 2005 CAS associated about 34 percent of Tier 3 costs with specific services, CAMERA's method for allocating overhead, indirect, and other miscellaneous costs did not retain the preexisting allocations in CAS. Consequently, aggregating these costs and then allocating them to the turbine and piston user groups resulted in shifting some costs between service types compared to the CAS allocations, which affects the ultimate allocation of these costs to user groups. According to FAA officials, in fiscal year 2006 the agency addressed some of these issues related to how transactions had previously been recorded in CAS, notably requiring that technical support personnel charge their time to specific facilities where maintenance is performed and allocating a portion of ATO's annual leave expenses to the facilities based on direct labor charges. While these changes should help improve the precision of some cost assignments, until FAA has resolved the issues noted above concerning the allocation of telecommunication and flight inspection costs, indirect labor costs of maintenance personnel, and worker benefits, we believe that retaining the service and project allocations already established by CAS may provide a more precise cost assignment to turbine and piston user groups. FAA officials told us that, although retaining the preexisting CAS allocations would not likely have a significant effect on the CAMERA allocations to user groups, FAA is considering increasing reliance on the CAS cost distributions for future user group cost studies.
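To illustrate the allocation-base issue raised in the bullets above, the following sketch compares allocating a hypothetical pool of benefit costs on a labor-only base with allocating it on a labor-plus-nonlabor base of the kind CAMERA used. All amounts are invented:

```python
benefits_pool = 10_000_000  # hypothetical annual leave/pension/benefit pool
costs = {
    "turbine": {"labor": 60_000_000, "nonlabor": 90_000_000},
    "piston": {"labor": 40_000_000, "nonlabor": 10_000_000},
}

def share_of(base_keys):
    """Each group's share of the chosen allocation base."""
    totals = {g: sum(c[k] for k in base_keys) for g, c in costs.items()}
    grand = sum(totals.values())
    return {g: t / grand for g, t in totals.items()}

labor_only = share_of(["labor"])               # turbine 0.60, piston 0.40
labor_plus = share_of(["labor", "nonlabor"])   # turbine 0.75, piston 0.25

for g in costs:
    print(f"{g}: labor-only base ${benefits_pool * labor_only[g]:,.0f}, "
          f"labor+nonlabor base ${benefits_pool * labor_plus[g]:,.0f}")
# In this example, the broader base shifts $1.5 million of benefit costs
# from the piston group to the turbine group.
```

Because benefit costs track labor, a base that includes nonlabor costs can pull them toward whichever group's costs are mostly nonlabor, which is the imprecision the bullet describes.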
Further, FAA officials stated that CAS is continuing to evolve and CAMERA is designed to adapt to changes in data quality.

CAMERA allocated the turbine and piston pools to commercial, general aviation, and exempt operators based on each operator's proportional share of total activity within each service. This allocation assumed that all types of aircraft operators with the same type of engine (e.g., smaller jet aircraft versus larger commercial jets) contributed to their respective group's costs in the same proportion as their share of distance flown (for en route services) and number of terminal operations (for terminal area services in each of three subgroups based on airport size). However, FAA did not conduct sufficient analysis (e.g., econometric analysis) to support this assumption. CAMERA assigned Tier 1 and a portion of Tier 2 costs to the group that is the primary user of the air traffic control services that generate those costs. In FAA's analysis of 2005 data, the turbine group was determined to be the primary user for Tier 1 costs of oceanic, en route, and all terminal services and for Tier 2 costs of oceanic, en route, and terminal services at large hubs and middle terminals. Because general aviation jet aircraft are included in the turbine user group, FAA's methodology allocated a portion of these costs, such as those for navigational aids and other equipment, to the general aviation aircraft operators. Thus, the general aviation jet users receive the benefit of the air traffic control personnel and equipment, and allocating a portion of costs in this manner is acceptable when there is sufficient commonality between the activity and the driver of the related costs. However, FAA did not sufficiently justify its assumption that allocating costs on an average basis to all types of operators of one engine type would produce results similar to determining whether particular costs principally benefit a single group of operators. For example, FAA did not sufficiently support its assumption that individuals or companies that fly smaller jet aircraft drive terminal costs in the same way as commercial airlines that fly larger jets when they fly to the same airport. FAA stated that it considered aircraft characteristics (such as the speed at which small jets fly compared to large jets, and the percentage of flight hours flown under IFR plans) and discussed this issue with SMEs. However, FAA did not quantify the extent to which commercial, general aviation, and exempt users of either type of aircraft impose costs differently on the air traffic control system. The contractor FAA retained to assist it in developing this methodology reported that, while variations in cost pools could have been developed, the simplicity and transparency of the turbine and piston pools provide an easily defined test that is also easy to administer. We agree that the benefits in terms of greater precision from a more detailed analysis need to outweigh the additional costs of that analysis. However, we believe that additional analysis of how different types of operators drive costs associated with each aircraft type is needed to justify FAA's simpler approach and would help identify how much precision is sacrificed to ensure simplicity.
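The proportional averaging at issue can be seen in a short sketch. Allocating a user group's pooled cost by activity share, as below, charges every operator subgroup the same implicit rate per unit of activity, whether or not the subgroups actually drive costs the same way. The dollar amounts and activity figures are hypothetical, not FAA data:

```python
def allocate_to_operators(group_total_cost, activity_by_operator):
    """Spread a user group's pooled cost across operator subgroups in
    proportion to each subgroup's share of total activity."""
    total_activity = sum(activity_by_operator.values())
    return {operator: group_total_cost * activity / total_activity
            for operator, activity in activity_by_operator.items()}

# Hypothetical turbine en route pool: $100M of combined tier costs,
# allocated by share of miles flown (CAMERA's en route activity metric).
miles_flown = {"commercial": 800_000, "general aviation": 150_000, "exempt": 50_000}
for operator, cost in allocate_to_operators(100_000_000, miles_flown).items():
    print(f"{operator}: ${cost:,.0f}")
# commercial: $80,000,000; general aviation: $15,000,000; exempt: $5,000,000
# Every subgroup pays the same $100 per mile in this example, regardless
# of whether small jets and large jets impose costs at the same rate.
```

This is the simplicity FAA chose; the analysis we recommend would indicate how far the true per-operator cost rates depart from this single average.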
Because the total of ATO's fiscal year 2005 GAAP-based acquisition, implementation, and depreciation expenses taken from CAS was less than ATO's budget authority for the F&E account, a user fee based on GAAP expenses would be insufficient to fund the budgeted costs for facilities and equipment. Therefore, to have the funds that would be needed to acquire budgeted air traffic control assets, FAA's CAMERA methodology adjusted ATO's GAAP-based expenses upward to equal total ATO budget authority for F&E. CAMERA then assigned those adjustments to the services and users of services in proportion to the historical, GAAP-based expenses. The manner in which these adjustments are assigned may, over time, result in costs being assigned to users who differ from the ultimate users of the new F&E when it becomes operational, leading to unintentional cross-subsidization among users. This can occur because of uncertainties related to the nature, timing, and cost of future F&E acquisitions and the volume and distribution of future flights that will use those assets. Also, because the budget includes multiyear spending authority, some F&E purchases may be funded several years before the expenditures are made and the related improvements become operational. It can take many years before FAA knows the actual distribution of any single year's F&E budget across service types and to users. The long-term nature of these capital projects is such that FAA typically has 3 years to obligate F&E funds and another 5 years beyond that to expend these funds from the Airport and Airway Trust Fund. Further, more than 40 percent of the fiscal year 2005 F&E budget was related to projects that support more than one type of service. Consequently, FAA will not know for many years how the actual distributions of a particular year's F&E budget to each service compare to each service's adjusted expenses for that same year.

FAA officials explained that CAMERA is designed to accommodate process changes to address the issues associated with recovering portions of the future costs of capital projects from current users. However, FAA had not yet designed a mechanism to monitor, identify, and adjust for those potential differences. Furthermore, as new projects are included in the authorized budget for F&E, the differences that can arise due to the use of historical GAAP-based expenses to allocate costs become greater. For example, the difference between total ATO-related F&E budgets and actual expenses may increase as funding for the next generation (NextGen) of air traffic control increases. FAA expects NextGen to cost between $15 billion and $22 billion before 2025. However, the actual nature, timing, and cost of NextGen are not yet known, nor are the total volume and distribution of future flights by aircraft type. These uncertainties increase the risk that relying on the GAAP-based historical costs of a predominantly ground-based system to allocate portions of the prospective, budgeted costs of a satellite-based NextGen system may result in a distribution of these prospective costs among user groups and types of aircraft operators that does not reflect the actual future use by these groups. Accordingly, consistent with ICAO guidance, FAA needs to monitor these differences in future years and provide a basis for making appropriate adjustments. In 2005, ATO's GAAP-based expenses were $2,253.6 million while its F&E budget authority was $2,428.2 million, representing a difference of $174.6 million.
To adjust GAAP-based expenses to total budget authority, FAA increased the amounts for each project within each service proportionally using the ratios of budget authority to expenses calculated for total nonterminal services and total terminal services. For example, the expenses of each en route project were increased by marking them up 2 percent, using the ratio of total nonterminal budget authority to total nonterminal expenses of 1.02 (the sketch following this discussion works through this arithmetic). The resulting marked-up GAAP-based project expenses within each service were then assigned to the turbine and piston user groups. These markup adjustments could accumulate over several years. Table 1 shows the distribution of the ATO-related F&E budget, expenses, and the difference by service and in total. Table 1 also shows how this method increased the expenses associated with each service and the portion of the en route and flight services F&E budgets that would be assigned to users of oceanic services.

FAA explained that the cyclical nature of funding projects means that the relative distribution of the F&E budget among service types may change over time and that the distribution of historical costs (GAAP-based expenses) by service type represented a stable means of allocating the funding of long-lived assets. FAA reasoned that this approach can smooth out sharp year-to-year fluctuations in user fees that might otherwise occur if each service's F&E budget authority were used to adjust the service's expenses instead of using an overall markup based on total nonterminal and total terminal ratios of budget authority to expenses. FAA's reasoning has merit; however, we have concerns that using this method introduces risk that costs for F&E acquisitions may be assigned to users that differ from the users of those assets once they become operational.

Lastly, although we did not audit FAA's CAS data, it is important to note that FAA's external auditor has for the past 2 years reported an internal control weakness concerning the lack of timely processing and accounting for construction in progress. This weakness affects the GAAP-based expenses for assets and depreciation of certain capital projects and has required that FAA record significant year-end adjustments to its financial statements. Although FAA's auditors do not indicate in their report that this problem is limited to a particular line of business or category of facility or service, the impact of using depreciation figures that may not be accurate to allocate the F&E budget to user groups is not known. Together these issues highlight the inherent and potential difficulties of pre-funding long-lived capital projects in general, and NextGen in particular, with revenues generated from current users. However, to avoid these challenges, FAA would have to seek alternatives to fund its F&E budget, such as borrowing authority with a repayment period that closely matches the useful lives of acquired assets or special appropriations.
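The markup mechanics referenced above amount to scaling each project's GAAP-based expense by a service-category ratio of budget authority to expenses. The sketch below uses the 2005 totals and the 1.02 nonterminal ratio cited in this report; the individual project amount is hypothetical:

```python
# ATO fiscal year 2005 totals from the report, in $ millions.
total_expenses = 2_253.6   # GAAP-based acquisition/implementation/depreciation
total_budget = 2_428.2     # F&E budget authority
print(f"overall gap: ${total_budget - total_expenses:.1f}M")  # $174.6M

# Ratio of nonterminal budget authority to nonterminal expenses; the
# report cites 1.02 for this category.
nonterminal_ratio = 1.02

# A hypothetical $50M en route project is marked up by the ratio, and the
# marked-up amount (not the GAAP expense) is what gets assigned to users.
en_route_expense = 50.0
print(f"marked-up expense: ${en_route_expense * nonterminal_ratio:.1f}M")  # $51.0M
```

Because the ratio is computed over a whole service category rather than project by project, a project whose own budget authority differs sharply from its historical expense still receives only the category-average markup, which is the smoothing FAA described and the source of the cross-subsidization risk we note.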
FAA's methodology for assigning costs to users is intended to link the costs that different user groups impose on the air traffic control system to fees that would be charged to users. Developing this type of methodology involves developing key assumptions and making decisions about the level of precision needed to achieve the objectives and the associated costs and benefits. The design of FAA's methodology is generally consistent with the principles and methods set forth in federal cost accounting standards and international guidance. However, the lack of sufficient support for certain of the methodology's underlying assumptions and methods leaves open the possibility that the study should assign costs to commercial, general aviation, and exempt users differently. Notwithstanding the need to balance precision with simplicity and transparency, FAA, Congress, and users of air traffic control services would benefit from additional documentation and analysis for key assumptions impacting the assignment of costs to the different user groups and further evaluation of the reasonableness of FAA's method of allocating overhead, indirect, and other miscellaneous costs. This additional documentation and analysis for FAA's cost assignment methodology is critical to help justify the results in order to promote user acceptance. In addition, because FAA's methodology for allocating cost adjustments for FAA's budgeted facilities and equipment projects can allow unintentional cross-subsidization among users, careful monitoring of actual project costs and users compared to original cost allocations is needed to identify and adjust for any significant differences.

To provide additional support for the reasonableness of FAA's cost assignment methodology and to monitor F&E cost assignments to users, we recommend that the Secretary of Transportation direct the Administrator of FAA to

- adequately document the basis on which costs are assigned to user groups;
- evaluate the methods and basis upon which various overhead, indirect, and other miscellaneous costs are assigned to user groups and document the effect of any changes thereto;
- determine whether and quantify the extent to which commercial, general aviation, and exempt users of either type of aircraft—turbine or piston—impose costs differently on the air traffic control system; and
- establish a mechanism for monitoring, by user group, any cumulative difference between original cost allocations for budgeted facilities and equipment project costs and actual usage of those assets, and adjusting prospective cost assignments accordingly.

We provided a draft of this report to the Secretary of Transportation for review and comment. The Department's comment letter is attached as appendix I. While the Department expressed general concurrence with our recommendations in the technical comments it provided separately, it neither explicitly agreed nor disagreed with our findings, conclusions, and recommendations in its letter. The Assistant Secretary for Administration stated that the fiscal year 2006 allocation will address several of the issues identified in our report, including improved documentation of subject matter expert (SME) input to the assumptions and better assignment of indirect labor costs. However, the actions specified in the Department's letter appear to address only narrow elements of two of our four recommendations. The Department's letter is unclear about FAA's and the Department's position on the broader scope of our recommendations. For the Department to be able to support its assertions that CAMERA provides reasonable estimates of costs and is well supported, we believe that FAA must follow through with all of our recommendations. The first three of our recommendations each relate to how well the results of FAA's methodology are supported and the extent to which the reasonableness of those results can be assessed.
FAA’s agreement to improve documentation of key source input from its internal SMEs, which provided the basis for FAA’s cost assignments, is a good first step in completing the methodology documentation process. At the same time it represents only part of the input and methods FAA used to assign costs. As we reported, the effects of the SME input on costs assigned to the turbine and piston user groups as well as the related regression analyses of those costs are critical to the final cost assignments. Accordingly, documentation of the rationale linking the SME input to cost assignment decisions is also needed to justify those decisions. The Department also stated that FAA concluded that more analysis of how turbine and piston users drive air traffic control costs had the potential for only marginal, if any, gain. While the value of more detailed analysis with respect to the accuracy of related cost assignments can be determined only upon completion of that analysis, there is intrinsic value in performing such analysis in terms of demonstrating to stakeholders that FAA’s cost assignment methodology is a reasonable basis on which to recover costs. We believe more detailed analysis, at least regarding the most significant costs, would help achieve this primary goal. This is particularly important considering that, as we reported, the results of some regression analyses undertaken by FAA to support SME-based cost assignment decisions implied different cost assignments and others were inconclusive. Concerning our recommendation that FAA evaluate the methods and basis upon which various overhead, indirect, and other miscellaneous costs are assigned to user groups, the Department commented that using the allocations of FAA’s Cost Accounting System (CAS), an option suggested in our report, would not necessarily produce more precise cost allocations to users. However, as we reported, the method FAA used to allocate these costs to users resulted in shifting some costs between service types as compared to the allocations in CAS, which ultimately affects the allocation of these costs to user groups. CAS allocates costs to the facilities that provide services to users, and FAA managers rely on that information to make operational decisions. Accordingly, we believe that the CAS allocations may provide a more precise way of assigning these costs to users. The Department did not specifically comment on our recommendation to determine and quantify the extent to which commercial, general aviation, and exempt users of either engine type—turbine or piston—impose costs differently on the air traffic control system. FAA’s allocation method assumes, for example, that smaller jet aircraft drive terminal costs in the same way as commercial airlines that fly larger jets when they fly to the same airport. We believe that further analysis is needed to sufficiently justify FAA’s assumption that allocating costs to all types of operators of one engine type produces results similar to determining whether particular costs principally benefit a single group of operators and would help identify to what extent precision is sacrificed using FAA’s simpler method. The Department also did not comment on our recommendation that a mechanism be established to monitor any cumulative difference between original cost allocations for budgeted facilities and equipment (F&E) project costs and actual usage of those assets. 
FAA's method for allocating the costs of budgeted F&E to users may, over time, result in costs being assigned to users who differ from the ultimate users of the new F&E when it becomes operational. We believe that in an environment with a cost-based revenue structure that incorporates funding for the costs of budgeted F&E, monitoring cumulative differences would help identify unintentional cross-subsidization among users. While the Department recognized our concerns with respect to the adequacy of the support for the methodology's underlying methods and assumptions, it stated that we did not offer quantitative evidence of fundamental flaws in FAA's methodology. Our objective was to determine the extent to which FAA had supported its assumptions and methods, not to demonstrate through quantitative analysis that the resulting cost assignments to users are or are not reasonable. It is the responsibility of the agency to adequately support the assumptions and methods underlying its own methodology and the reasonableness of the results using quantitative analysis where appropriate. We found this support to be insufficient.

We are sending electronic copies of this report to the Secretary of Transportation, the Administrator of FAA, and other interested parties. This report will be available at no cost on GAO's Web site at http://www.gao.gov. If you or your staff have any questions on matters discussed in this report, please contact me at (202) 512-9471 or franzelj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II.

In addition to the contact named above, significant contributions to this report were made by Jack Warner (Assistant Director), H. Donald Campbell, Jay Cherlow, Gerald Dillingham, Fred Evans, Maxine Hattery, Ed Laughlin, Maureen Luna-Long, Maren McAvoy, Scott McNulty, and Meg Mills.
In January 2007, the Federal Aviation Administration (FAA) reported the results of its study that assigned the fiscal year 2005 costs of its Air Traffic Organization (ATO) to users. FAA used this study to support the President's proposal to replace many current excise taxes with cost-based fees for commercial aviation users and higher fuel taxes for general aviation users. GAO assessed (1) the consistency of FAA's cost assignment methodology with established standards and guidance, (2) the support for selected cost assignment assumptions and methods, and (3) the impact of including budgeted capital costs in the cost baseline. GAO compared FAA's methodology to federal accounting standards and international guidance, reviewed available documents and analyses supporting FAA's assumptions and methods, and interviewed FAA officials and consultants.

With the federal government preparing for the next generation of air travel, the President, Congress, and users of the national airspace are considering alternative methods for funding air traffic control. To support a cost-based funding structure such as the current proposal from the President, FAA developed a methodology for assigning costs to users. Federal cost accounting standards and international guidance establish flexible principles for assigning costs and recognize that the selection of methods involves making choices that require balancing the cost of development and implementation with the benefit of precision in the resulting cost assignments. GAO found that the design of key elements of FAA's methodology was generally consistent with federal standards and international guidance. But GAO also identified matters related to the application of certain assumptions and cost assignment methods that need additional documentation and analysis. Because building a methodology for assigning costs to users involves standards, alternative methodologies, and choices, documenting the decisions made and how they were made is important to allow users and others to assess whether the methodology and the structure of cost assignment is reasonable.

FAA provided adequate support for its decision to assign costs based on whether the aircraft using air traffic control services are powered by turbine engines, such as jets, or piston engines, such as propeller-driven airplanes. However, FAA did not adequately document the basis on which it assigned costs to the aircraft groups or support its assumption that all types of aircraft with the same engine type affect costs in the same manner, leaving open the possibility that costs should be assigned to users differently. GAO also found that FAA's methodology does not take advantage of allocations already made in its cost accounting system, but instead aggregates the costs and then allocates them to aircraft groups. For some of these costs, such as employee benefit costs, a different method of allocation could have produced a more precise distribution between the groups.

A user fee designed to fund new facilities and equipment expenditures must provide funds equal to the annual budget for those expenditures. FAA's methodology includes adjusting current-year actual expenses to equal the budgeted amount for facilities and equipment costs. These adjustments are then assigned to users in the same proportion as are current acquisition, implementation, and depreciation expenses. But users of future facilities and equipment may be different from users of existing facilities and equipment.
The manner in which the costs of facilities and equipment are assigned may, over time, result in assigning costs to users who are different from the ultimate users of future facilities and equipment once they become operational. Consequently, the implementation of this method warrants careful monitoring to avoid unintentional cross-subsidization among users.
Victims of sexual assault may receive a sexual assault forensic examination by a medical provider who may or may not be a trained sexual assault forensic examiner. Medical providers assess victims' clinical conditions; provide appropriate treatment and medical referrals; and, given consent by the victim, collect forensic evidence through a sexual assault forensic examination that may follow steps and use supplies from a sexual assault evidence collection kit. Under its protocol for sexual assault forensic examinations, DOJ recommends that medical providers collect a range of physical evidence, which can include, but is not limited to, clothing, foreign materials on the body, hair (including head and pubic hair samples and combings), body swabs, and a blood or saliva sample for DNA analysis and comparison. In addition, sexual assault forensic exams typically include documenting biological and physical findings such as cuts or bruises, either in writing or photographs, and a recording of a victim's medical forensic history such as the time and nature of the assault. Once the exam is complete, medical providers preserve the collected evidence, which may include packaging, labeling, and sealing evidence collection kits and storing kits in a secure location. Medical providers typically perform such exams only for acute cases of sexual assault, such as in cases where the assault occurred within the previous 72 to 96 hours, when the physical and biological evidence on a person's body or clothes is considered most viable.

DOJ, the International Association of Forensic Nurses (IAFN), and the American College of Emergency Physicians (ACEP) recommend that sexual assault forensic exams be performed by specially trained medical providers—or sexual assault forensic examiners (examiners). These examiners include physicians, physician assistants, nurse practitioners, and other registered nurses who have been specially educated and completed clinical requirements to perform sexual assault forensic exams. Sexual assault nurse examiners (SANE)—a particular type of sexual assault forensic examiner—are registered nurses, including nurse midwives and other advanced practice nurses, who have received specialized education and have fulfilled clinical requirements to perform sexual assault forensic exams. Examiner programs have been created in hospital or non-hospital settings whereby specially trained examiners are available to provide first-response care and exams to sexual assault victims. Additionally, for pediatric victims, specially trained examiners may perform medical forensic exams in a child-specific facility, such as a child advocacy center. DOJ, IAFN, and some states have issued guidelines pertaining to the minimum level of training examiners should receive in order to properly collect and preserve evidence, identify victims' medical and emotional health care needs, and provide counseling and referrals for victims. These guidelines include recommendations of objectives and topics that training programs should cover.
For example, in their guidelines, DOJ and IAFN recommend that examiners receive comprehensive training that covers such topics as

- how to identify and deliver proper elements of a victim-centered sexual assault forensic examination where victims are fully informed of their options during and after the exam;
- how to assess patients and provide culturally competent medical care, including testing and delivery of prophylaxis for sexually transmitted infections and pregnancy;
- how to collect and document evidence in a way that protects the evidence's integrity;
- how to testify about findings in court; and
- how to protect the chain of custody of evidence and coordinate care across a multidisciplinary team.

The goal of training, as outlined in the DOJ and IAFN guidelines, is for examiners to be able to effectively evaluate and address victims' health concerns, minimize victims' trauma and promote their healing during and after the exam, and detect, collect, preserve, and document physical evidence related to the assault for potential use by the legal system. In addition, registered nurses can become certified SANEs through IAFN to perform exams, though no such national certification exists for examiners who are not registered nurses. Depending on the state, examiners may also become certified through a state certifying body, such as a state board of nursing. There are no federal requirements concerning the training or availability of examiners in health care facilities outside of military, correctional, and Indian Health Service facilities. While a Joint Commission accreditation standard requires hospitals to establish policies for identifying and assessing possible victims of sexual assault and to train staff on those policies, each hospital is responsible for determining the level of specificity of such policies, including the minimum level of training required of its medical staff who perform exams. Some states may have established minimum training requirements for nurses who perform sexual assault forensic exams and require nurses to become certified either through the IAFN or the state.

As authorized by VAWA, DOJ administers several grant programs that aim to, among other things, improve response to and recovery from four broad categories of victimization—domestic violence, sexual assault, dating violence, and stalking. The grant programs aim to address these categories of victimization through a range of activities, including public education and prevention; improved collaboration among stakeholders; training of law enforcement, prosecutors, court personnel, and victim service providers; strengthening victim services; developing and implementing more effective police, court, and prosecution policies and services; and improving data collection and communication systems related to these crimes. According to DOJ officials, there are three key VAWA-authorized grant programs administered by DOJ's Office on Violence Against Women (OVW) that can be used by grant recipients to fund or train sexual assault forensic examiners.

In March 2011, the Indian Health Service within the Department of Health and Human Services implemented a new policy, through its Indian Health Manual, that all Indian Health Service-operated facilities must provide patients age 18 and older who present themselves for sexual assault services with access to an exam on-site or by referral. Victims who are referred elsewhere must be transported within 2 hours of the victim's presentation at the medical facility.
All registered nurses, advanced practice nurses, physicians, and physician assistants new to caring for adult and adolescent sexual assault patients must complete 40 hours of examiner training as well as clinical practice experience under the guidance of a forensically experienced medical provider. All examiner training and clinical practice experience must conform to the SANE educational requirements of the IAFN and the DOJ National Sexual Assault Medical Examining Training Standards. See Department of Health and Human Services, Indian Health Service, "Chap. 29—Sexual Assault," pt. 3 in Indian Health Manual (Rockville, Md.: May 16, 2014).

Services-Training-Officers-Prosecutors Violence Against Women Formula Grant Program (STOP Grant Program): The purpose of the STOP Grant Program, the largest of the three key grant programs, is to help states, courts, and local governments develop and strengthen effective law enforcement and prosecution strategies to combat violent crimes against women and to develop and strengthen victim services in cases involving violent crimes against women. Under the STOP Grant Program, there are 20 statutorily defined purposes for which funds may be used, one of which pertains directly to training examiners in the collection and preservation of evidence, analysis, prevention, and providing expert testimony and treatment of trauma related to sexual assault. The STOP Grant Program is a formula grant program in which all states and territories are awarded a minimum amount of $600,000 plus an additional amount based on state population size. STOP Grant Program awards may support up to 75 percent of the costs of all projects, including the cost of administering subgrants; the remaining 25 percent of costs must be covered by nonfederal match sources. The average STOP grant award to states in fiscal year 2015 was about $2.5 million and ranged from roughly $600,000 to $13.2 million. Once states receive funds, a designated state agency—referred to as the state STOP administrator—is responsible for distributing funds to subgrantees based on the state's own subgrant award process. However, state STOP administrators must allocate funds according to a statutorily defined formula—that is, 25 percent of funds must be distributed for law enforcement, 25 percent for prosecutors, 30 percent for victim services, 5 percent to state and local courts, and 15 percent for discretionary distribution within the program purpose areas (this distribution is illustrated in the sketch following this discussion). We refer to STOP subgrantees as grantees throughout this report.

Grants to Encourage Arrest Policies and Enforcement of Protection Orders Program (Arrest Grant Program): The purpose of the Arrest Grant Program is to encourage state, local, and tribal governments and courts to treat domestic violence, dating violence, sexual assault, and stalking as serious violations of criminal law requiring the coordinated involvement of the entire criminal justice system. Eligible applicants include states, territories, and units of local government; Indian tribal governments; state, local, tribal, and territorial courts; victim service providers; state or tribal sexual assault or domestic violence coalitions; and government rape crisis centers. For the Arrest Grant Program, at least 25 percent of appropriated funds must be allocated to activities that address sexual assault. Developing, implementing, or enhancing examiner programs, including the hiring and training of such examiners, is 1 of 22 purpose areas for which Arrest Program grant funding can be used. The average grant award in fiscal year 2015 was $601,361 and ranged from $224,668 to $900,000.
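The statutory STOP distribution formula referenced above is a fixed proportional split, which the brief sketch below applies to the roughly $2.5 million average fiscal year 2015 award cited in this report. The function name and printing are illustrative only, not part of the program's rules:

```python
# Statutory STOP allocation shares, as described in this report.
STOP_FORMULA = {
    "law enforcement": 0.25,
    "prosecution": 0.25,
    "victim services": 0.30,
    "state and local courts": 0.05,
    "discretionary": 0.15,
}

def distribute_stop_award(award):
    """Split a state's STOP award across the statutorily required areas."""
    assert abs(sum(STOP_FORMULA.values()) - 1.0) < 1e-9  # shares must total 100%
    return {area: award * share for area, share in STOP_FORMULA.items()}

# Applied to the roughly $2.5 million average fiscal year 2015 award:
for area, amount in distribute_stop_award(2_500_000).items():
    print(f"{area}: ${amount:,.0f}")
# law enforcement: $625,000; victim services: $750,000; and so on.
```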
Rural Sexual Assault, Domestic Violence, Dating Violence, and Stalking Assistance Program (Rural Grant Program): The purpose of the Rural Grant Program is to enhance the safety of rural victims of sexual assault, domestic violence, dating violence, and stalking, and support projects uniquely designed to address and prevent these crimes in rural areas. At least 75 percent of total Rural Grant Program funding must be allocated to eligible entities in "rural states," as defined by VAWA 2013. Eligible entities include states, territories, Indian tribes, local governments, and nonprofit entities including tribal nonprofit organizations. In addition, at least 25 percent of funds appropriated for the Rural Grant Program must be allocated to activities that address sexual assault in rural areas. Regardless of whether a grantee is from a rural or nonrural state, funds must be used for services and activities in a rural area or rural community. Grantees are required to implement at least one of five statutorily defined strategies, one of which includes developing, enlarging, or strengthening programs addressing sexual assault, including examiner programs. The average grant award in fiscal year 2015 was $599,997 and ranged from $144,000 to $999,993.

Across these three grant programs, a total of $186.7 million in funds was awarded to grantees in fiscal year 2015. (See table 1.) Organizations that receive DOJ grant awards (or subgrant awards) through the STOP, Arrest, and Rural Programs are required to submit annual or biannual reporting forms to the OVW that include information about how they used grant funding, including specific information about whether funding was used to provide training for or fund sexual assault forensic examiners.

DOJ officials told us that funding from additional DOJ grant programs may be used to fund, train, or support the training of examiners, though officials stated that the use of such grant funding for these purposes is limited. Such programs include the Office for Victims of Crime's Training and Technical Assistance Center and its National Sexual Assault TeleNursing Center demonstration project as well as the Bureau of Justice Assistance's Byrne Justice Assistance Grant Program. In addition, OVW administers the Technical Assistance Grant Program, which aims to provide direct technical assistance to existing and potential grantees to successfully implement projects supported by OVW grant funds. The Technical Assistance Program is aimed at providing in-person and online educational training opportunities, peer-to-peer consultations, site visits, and other types of tailored assistance to help grantees, including STOP, Arrest, and Rural Program grantees, implement grant-funded activities effectively. Although Technical Assistance Program providers could also use awarded funding to provide training that would help examiners to perform at a higher level of proficiency, DOJ officials noted that such providers do not provide comprehensive classroom or clinical training to medical providers aspiring to become examiners. According to officials from HHS, as of September 2015, HHS did not administer any grant programs that are used to train or fund examiners, nor had it issued guidance or requirements concerning the training of medical professionals on conducting exams or the availability of examiners.
Although HHS was authorized through VAWA 2013 to administer the Consolidated Grants to Strengthen the Healthcare System's Response to Domestic Violence, Dating Violence, Sexual Assault, and Stalking, funds were never appropriated to HHS for this program. In addition, officials from both agencies told us that, as of September 2015, DOJ and HHS had not collaborated on any activities concerning the training of medical providers on conducting sexual assault forensic exams or the availability of trained examiners.

In 49 states, at least one STOP, Arrest, or Rural Program grantee—including STOP subgrantees—reported using federal grant funds to provide training for sexual assault forensic examiners in 2013, the most recent year for which complete data were available. Grantees used funds for a variety of examiner training activities. In addition to training examiners, grantees in 26 states funded examiner staff positions in 2013, although grantees in these states funded less than one full-time equivalent (FTE) examiner position, on average.

In nearly all states, at least one STOP, Arrest, or Rural Program grantee reported using federal grant funds to provide training for sexual assault forensic examiners in 2013. Specifically, in 2013, approximately 227 grantees in 49 states reported using grant funds to provide training for over 6,000 examiners. Most examiners (4,936) received training from STOP grantees. (See table 2.) However, on the basis of available data, it is unclear how many examiners received comprehensive examiner training versus other training that could help enhance their ability to serve victims. Based on interviews with grantees in some of our six selected states and a review of grantee progress reports submitted to DOJ in 2013, the type of training that grantees provided for examiners ranged from comprehensive examiner training and certification to training on specific topics that enable examiners to improve their response to victims. Grantees reported using federal grant funds to provide examiners with the following types of training:

Comprehensive Examiner Training or Certification: Grantees reported using funds to provide comprehensive examiner training that, for example, included 40 or more hours of classroom training as well as, in some cases, clinical practice training. For example, in 2014, a Wisconsin grantee used STOP grant and state budget funds to provide five 40-hour training courses and three clinical skills labs for examiners, training a total of 115 new examiners. Although the state neither offers nor requires examiner certification, those who participated in the training could become certified through IAFN. In addition, the statewide examiner training program in Colorado, which is partially supported using STOP grant funds, both trained and certified 74 new examiners in 2014.

Examiner Recertification or Continuing Education Training: In addition to comprehensive examiner training or certification, some grantees used grant funds to offer periodic recertification or "refresher" training so that examiners can maintain competency. For example, a grantee in Kansas provided refresher training to examiners who were identified by forensic lab evaluation forms as having errors in evidence collected through sexual assault forensic exams.

Topical Training: Some grantees reported providing training to examiners on specific topics, such as interviewing and photography techniques, courtroom testimony, or victim confidentiality protocols.
For example, officials from the statewide examiner training program in Colorado told us that in the past they have used both STOP and Arrest Program funds to provide courtroom training for examiners. Other grantees reported providing training to examiners on working with certain types of victims, such as lesbian, gay, bisexual, transgender, disabled, or elderly victims. The extent of examiner training efforts supported with STOP, Arrest, and Rural Grant Program funds varied by state. In 2013, the total number of examiners who received training funded through these grant programs in each state ranged from 0 to 604. In half of the states (26), fewer than 100 examiners received training, and in 12 of these states, fewer than 25 examiners received training. (See figure 1.) Further, based on interviews with grantees in selected states, we found that while some grantees used funds to support statewide comprehensive examiner training programs, other grantees used funds to provide training in specific locations, such as a single county or hospital. For example, at least one grantee each in Colorado, Florida, Massachusetts, and Wisconsin used STOP or Arrest Program funds, in combination with other funding, to support statewide comprehensive examiner training programs. In contrast, Jefferson County in Oregon used its Rural Program funds for one examiner's recertification as well as other continuing education training for four examiners at a local hospital in 2014. Additionally, the Colorado Sexual Assault Response Project used Arrest Program funds to provide training for 44 examiners in rural areas to perform exams. The number of grantees in each state that used grant funds to provide training for examiners also varied. For example, in 2013, the number of grantees that reported providing training for examiners per state ranged from 0 to 19. While 12 grantees provided training for a total of 43 examiners in North Carolina, two grantees provided training for a total of 347 examiners in Illinois in 2013. (For more detailed information on the number of grantees that reported providing training for examiners and the number of examiners provided training in each state by grant program, see appendix I.) Some entities used funds from DOJ's Technical Assistance Program to provide training for examiners on a national scale. DOJ officials told us that although OVW does not fund any Technical Assistance Program award recipients to provide classroom or clinical training for examiners, some may provide national training that assists examiners in developing knowledge, experience, or skills to perform at a higher level of proficiency. For example, in 2014, IAFN used Technical Assistance Program funds to provide online and in-person training for examiners on topics such as treating transgender victims of sexual assault and payment policies for forensic exams. In the reporting period January through June 2014, Technical Assistance Program award recipients provided training for 1,772 examiners. DOJ officials told us that the majority of these examiners (1,609) received training provided by five Technical Assistance Program award recipients. DOJ officials told us that the STOP, Arrest, and Rural grant programs are the key grant programs from which funds are available to train examiners, though grantees may use funds to address four broad categories of victimization—sexual assault, domestic violence, dating violence, and stalking.
Further, within the category of sexual assault, there is an extensive range of issue areas that grantees can choose to address, including providing training for examiners. For example, grantees may choose to use funds to pay for victim services or to train other professionals, such as law enforcement officers, judges, and prosecutors, on issues related to sexual assault. Of all STOP, Arrest, and Rural Program grantees, 8.4 percent reported using grant funds specifically to provide training for examiners in 2013. According to DOJ and state officials, grantees might not use STOP, Arrest, or Rural Grant Program funds to provide training for examiners for a variety of reasons, including competing demands for the use of funds and a lack of competitive grant applications from entities seeking funds for this purpose. For example, officials in Florida reported that STOP grant funds are not used to provide training for examiners but are instead targeted towards other areas, such as law enforcement, victim services, or developing and supporting sexual assault response teams. DOJ officials and state STOP administrators also told us that not all grant applications seeking funds to train examiners may be approved. For example, DOJ officials told us that grant applications may be denied if they do not meet the standardized criteria OVW uses in the review of applications for the Arrest and Rural programs or if, despite meeting OVW's criteria and scoring well, they are outscored by other applications. Additionally, officials told us that it is possible that few applicants seek funding to train examiners. For example, the Nebraska STOP administrator told us that, because applicants did not know that grant funds could be used for this purpose, the state received only one application to train examiners, which was not approved due to competing demands for available funds. Some grantees in our six selected states that did not use STOP, Arrest, or Rural Grant Program funds to provide training for examiners used other funds, such as funds from state and hospital budgets or nonprofit organizations, to train examiners. In Nebraska, for example, examiner training is primarily funded by a hospital system that also employs over half of the examiners in the state (24 of 43 examiners). Although Oregon did not use federal funds to provide training for examiners, officials told us that the Oregon Sexual Assault Task Force, a statewide nonprofit organization, uses grant funds from the state department of justice to offer 40-hour comprehensive examiner training courses twice per year. Finally, some grantees told us that they used federal grant funds to provide sexual assault exam overview training for health and other professionals. For example, one grantee in Massachusetts told us that it used grant funds to provide basic forensic evidence collection training to staff at hospitals that did not have trained examiners available. In half of the states, at least one STOP, Arrest, or Rural grantee funded examiner staff positions in 2013. Approximately 75 grantees in 26 states funded roughly 50 FTE examiner positions in 2013, most of which (46 FTE examiner positions) were funded by STOP grantees. (See table 3.) In these 26 states, grantees funded, on average, less than one FTE examiner position each, ranging from 0.1 to 9.8 FTEs in 2013. Further, few STOP, Arrest, or Rural grantees used funds to pay for FTE examiner positions in 2013.
In 2013, approximately 2.5 percent of STOP grantees, 5.2 percent of Arrest grantees, and 6.3 percent of Rural Program grantees reported using grant funds for FTE examiner positions. Information from interviews with officials in two of our six selected states suggests that grantees that fund examiner positions may fund an examiner to act as an examiner program coordinator. Program coordinator duties may include overseeing the operations of examiner programs, training examiners, providing technical assistance, and providing forensic exams. For example, officials in both Massachusetts and Wisconsin told us that STOP Grant Program funds were used to pay for a statewide coordinator in fiscal year 2014. In addition, grantees used grant funds to pay for examiners to be on-call. For example, a grantee in Hawaii reported that STOP grant funding allowed it to provide on-call pay to examiners in the state. According to our literature review and the experts we interviewed, only limited nationwide data exist on the availability of sexual assault forensic examiners—that is, both the number of practicing examiners and health care facilities that have examiner programs. While IAFN reported that, as of September 2015, there were 1,182 nurses with active IAFN SANE certification in the United States, such data do not represent all practicing examiners nationwide. For example, the data do not account for examiners who completed training through an IAFN or a state training program but never became certified or were certified through another entity, such as a state board of nursing. IAFN also collects data on examiner programs nationwide—that is, data on hospitals, clinics, and other sites where examiners practice. Such data provide an indication of the availability of examiners, but the data are also limited. While 703 examiner programs nationwide had voluntarily reported to IAFN's examiner program database as of September 2015, IAFN officials noted that the database is often not up to date and that some health care settings where sexual assault forensic exams are conducted, such as child advocacy centers, are not represented. In addition, data collected on staffing characteristics of examiner programs are often unavailable in the IAFN examiner program database. For example, only about one-third of the examiner programs reported on the number of examiners practicing in their program, and about one-third reported on whether examiners were available on-site versus on-call. In three of six selected states, STOP administrators or officials from sexual assault coalitions were able to provide estimates of the number of practicing examiners and, in all six states, they were able to provide information on the estimated number of examiner program locations in their state. Among the states that reported, the number of practicing examiners and examiner programs varied by state. (See table 4.) However, such data may also present an incomplete picture of the availability of examiners. For example, only one of the six selected states has a system in place to formally track the number and location of examiners. Instead, officials generally reported on the estimated number of examiners or examiner locations that were part of a statewide examiner program or were identified through an ad hoc data collection effort. Although data are limited, STOP administrators and sexual assault coalition officials in all six selected states nevertheless told us that the number of examiners available does not meet the need for exams within their states.
For example, coalition officials in Wisconsin told us that nearly half of all counties in the state do not have any examiner programs available, and coalition officials in Nebraska told us that most counties in the state do not have examiner programs available. In addition, in four of six selected states—Colorado, Florida, Nebraska, and Wisconsin—state STOP administrators and coalition officials told us that few or some health care facilities in their state have examiners available. As a consequence, officials said, victims may need to travel long distances to be examined by a trained examiner or be examined by a medical professional without specialized training. For example, the Colorado STOP administrator explained that although there is an examiner program available in all regions of the state, not all hospitals participate in Colorado's statewide examiner program. As a result, in the rural western region of Colorado, for example, victims may have to travel more than an hour to reach a facility with examiners available. While in the other two selected states—Massachusetts and Oregon—state STOP administrators and coalition officials stated that some or most facilities have examiners available, they noted that there is still a need for additional capacity to reduce the burden on those examiners who are available or to make examiners available in areas where they currently are not. For example, Massachusetts coalition officials told us that there is an ongoing need for examiners across the state. There were few or, in some cases, no examiners available in rural areas, according to state STOP administrators or coalition officials we interviewed in selected states. STOP administrators and coalition officials in Colorado, Florida, and Wisconsin told us that in rural areas there may be only one examiner or one examiner program available across multiple counties. For example, Colorado coalition officials told us that of the five rural counties in central Colorado, only one county had an examiner available. Alternatively, according to the Nebraska STOP administrator, some victims might have to travel to a major metropolitan area to reach a facility with examiners available, which could take 2 or more hours. In general, state STOP administrators and coalition officials explained that reaching a facility with an examiner available could take a victim 30 minutes or less in urban areas and up to 2 hours in rural areas. STOP administrators and coalition officials we interviewed explained that the availability of examiners in rural areas is challenging for a number of reasons, including the limited availability of health care providers generally, weather-related travel restrictions that can affect the time and distance victims must travel to reach a facility with an examiner, difficulty recruiting qualified nurses to undergo training, and a lack of capacity in rural areas to provide examiner training opportunities. Even in some urban areas, the availability of examiners may be limited, according to state STOP administrators or coalition officials we interviewed. For example, Wisconsin coalition officials explained that just one of the five major hospitals in Milwaukee has examiners available, and some victims may be unwilling to travel to that hospital to receive an exam from an examiner. In addition, Florida coalition officials told us that even in urban areas there are only a few specialized places where victims can receive an exam from a trained examiner.
In health care facilities where examiners are available, they are typically available through hospitals on an on-call basis, according to literature we reviewed as well as all STOP administrators and coalition officials we interviewed. Results from a 2005 national survey of examiner programs showed that most programs (60 percent) were administered through hospitals, and 71 percent of examiner programs staffed examiners on a part-time, on-call basis. According to literature we reviewed as well as experts and Colorado, Florida, and Oregon coalition officials we interviewed, on-call examiners may serve "dual roles"—that is, they simultaneously work as emergency department nurses and cover their on-call examiner shifts. Specifically, results from the 2005 survey showed that about one-quarter of all examiner programs used nurses who overlapped their emergency department shifts with their on-call examiner shifts. Alternatively, according to the STOP administrators in Colorado and Oregon, examiners in some facilities or rural areas may not work based on an official on-call schedule. Instead, when a victim arrives, an examiner program coordinator will call through a list of examiners practicing in that region of the state to find one available to conduct the exam. The Colorado STOP administrator noted, however, that it is often the case that no examiners are available, and the coordinator, who is also a trained examiner, will ultimately come in to the hospital to perform the exam instead. In addition, among facilities that have examiners available, the number of examiners available varies and may not provide enough capacity for facilities to offer examiner coverage 24 hours a day, 7 days a week, according to state STOP administrators and coalition officials we interviewed. Nebraska coalition officials, for example, told us that while one hospital in Omaha has a team of 26 examiners available, other facilities in the state may have as few as three examiners available. Further, Florida coalition officials and the Colorado STOP administrator told us that there are few facilities in their states able to offer full coverage with examiners available 24 hours a day, 7 days a week. For example, Memorial Hospital in Colorado Springs is the only facility in Colorado that has enough examiners available to provide examiners on staff 24 hours a day, 7 days a week, according to Colorado officials we interviewed. Staff from a rural hospital in Oregon explained that although it has two on-call examiners and one additional examiner available if needed, there are not enough examiners available to provide on-call coverage 24 hours a day, 7 days a week. According to state STOP administrators and coalition officials we interviewed in six selected states, health care facilities may have their own protocols in place concerning the expected response time of on-call examiners, transferring victims to facilities that have examiners available, and paging on-call examiners. Florida coalition officials as well as the Massachusetts and Oregon STOP administrators told us, for example, that facilities with examiners available may have an agreement in place that specifies the expected response time of examiners. State STOP administrators or coalition officials we interviewed in five of six selected states told us that, in general, examiners are expected to arrive at a facility within 30 minutes to 1 hour of being paged in urban areas, though it could take longer in rural areas.
Some STOP administrators or coalition officials in selected states informed us that facilities that do not have examiners available may transfer victims to another facility with examiners available or encourage victims to go to one, or victims may be treated by a medical professional without specialized examiner training. One coalition official noted that victims who are referred elsewhere for exams often do not follow through and thus never receive an exam. This may be because, according to Florida coalition officials, victims may be responsible for transporting themselves or may be transported on a case-by-case basis by law enforcement. Finally, officials told us that the timing of when on-call examiners are paged varies. For example, Colorado and Florida officials told us that, if a victim is being transported to another hospital, the destination facility may not page the on-call examiner until the victim has arrived. However, officials from one hospital in rural Oregon and Wisconsin coalition officials explained that, in their states, local law enforcement will notify the destination hospital when a victim is being transported so that an examiner can be paged in advance. According to state STOP administrators and state sexual assault coalition officials we interviewed in six selected states, maintaining a supply of trained examiners that meets communities' needs for exams is challenging for multiple reasons, including the limited availability of training, a lack of technical assistance and other resources, weak stakeholder support for examiners, and low examiner retention. To address these challenges, state officials told us that they have employed a variety of strategies, such as offering web-based training courses, providing clinical guidance and support for examiners, holding clinical practice labs and mentorship programs, and developing multidisciplinary teams within communities that respond to cases of sexual assault. Limited availability of training. Officials in five of six selected states told us that the limited availability of classroom, clinical, or continuing education training is a barrier to maintaining a supply of trained examiners. Regarding classroom training, some officials told us that training may be offered only once per year in their states. Additionally, officials from both Florida and IAFN told us that there is a need for qualified instructors to run training sessions. Experts and officials from Colorado, Nebraska, and Oregon also told us that medical professionals in rural areas may have difficulty completing the clinical training necessary to become an examiner. Obtaining clinical experience, such as performing exams under the supervision of a trained examiner, is a particular challenge in rural areas, where hospitals may treat only a few sexual assault cases per year. One official in Nebraska told us that trained examiners in rural areas might not feel competent to perform exams due to the low number of cases they treat. A lack of continuing education opportunities may also pose a challenge for examiners in maintaining the skills necessary to perform exams. For example, the National Sexual Violence Resource Center (NSVRC) reported that, based on common challenges identified through a survey of and group discussions among examiner program coordinators, maintaining competency may be difficult for nurses in rural areas due to a low volume of patients presenting in need of exams and limited access to ongoing and advanced training.
Officials told us that they have been able to increase the availability of examiner training through alternative training methods such as web-based training courses and simulated clinical training. For example, officials in Colorado told us that their state's web-based examiner training program has made training less expensive and has increased examiner recruitment. Officials in Wisconsin told us that they developed a clinical training lab that allows examiners to gain hands-on experience by performing elements of exams on models who are experienced teaching assistants hired for the purpose of training new examiners. Further, in 2014, a DOJ-funded evaluation of examiner training programs found that a web-based training course may help increase the availability of trained examiners; the study also found that implementing web-based training had benefits such as decreasing the costs associated with attending in-person training, expanding training opportunities to remote areas, and allowing examiners to be trained by national experts. Lack of technical assistance and other supportive resources. Officials in four of six selected states told us that the limited availability of technical assistance and other supportive resources for examiners poses a challenge to maintaining a supply of trained examiners. For example, officials in Florida, Nebraska, Oregon, and Wisconsin explained that, in general, there is a lack of mentorship opportunities and leadership within the examiner community. Officials also noted that the sustainability of examiner programs may be threatened by a lack of internal capacity, such as not having a full-time, paid examiner program coordinator available. Further, in its survey of and group discussions with examiner program coordinators, NSVRC found that examiners and examiner programs needed technical assistance and support in the following areas: aspects of performing exams, training, leadership development and policy issues, and examiner program sustainability. Specifically, examiners needed technical assistance and support on topics such as testifying in court; professional development; performing certain types of procedures, such as a colposcopy or anogenital photography; and working with special populations. Officials we spoke to told us about strategies that can be used to increase support for examiners and examiner programs, such as offering web-based technical assistance. For example, officials in Massachusetts told us that through their National Sexual Assault TeleNursing Center, trained SANEs provide remote clinical guidance to two hospitals in the state that do not have trained examiners available. In addition, officials from Colorado told us that an examiner program coordinator in an urban hospital in the state provides volunteer on-call technical assistance and clinical guidance to examiners in rural parts of the state, where those resources are not otherwise available. Further, one study we reviewed found that several states were engaged in promising practices to increase support for examiners, such as implementing statewide mentorship programs, developing regional examiner listservs and online discussion boards, creating formal leadership positions within the examiner community, and requiring examiner program evaluations. Weak stakeholder support for examiners.
Officials in five of six selected states told us that limited stakeholder support for examiners and examiner programs, such as support from hospitals and law enforcement, is a challenge to maintaining a supply of trained examiners. Some officials told us that hospitals may be reluctant to support examiners and examiner programs due to the low number of sexual assault cases treated each year. As a result, medical professionals may have to cover the cost of their examiner training courses themselves, including their travel and lodging expenses, and face lost wages associated with attending training. One official told us that hospitals may be reluctant to send nurses to examiner training because it takes away from their regular shift availability. Additionally, some hospitals do not pay examiners to be on-call. Officials in three states told us that hospitals typically either do not pay examiners to be on-call or pay on-call examiners significantly less than other on-call medical professionals. For example, one official in Wisconsin estimated that some examiners in their state receive between $1.00 and $1.50 per hour when on-call, while others are not paid for on-call time. Officials from the American Hospital Association, when asked whether they have developed any requirements, policies, or protocols concerning the training of or access to examiners in hospitals, told us that the association has not produced any information in this area. Apart from hospital support, officials in Colorado and Oregon explained that there is a need for more multidisciplinary support for examiners, such as increased law enforcement, prosecutor, and first responder understanding of examiners' role. The literature we reviewed also shows that ambiguity around the role of the examiner in responding to sexual assault may be a source of conflict between examiners and other professionals. For example, examiners were found to have experienced instances in which victim advocates or law enforcement questioned examiners' medical decisions or speed of evidence collection, or asked examiners to comment on the credibility of a victim's case. One nationally representative survey of examiner programs found that examiner program coordinators felt that ongoing education of community stakeholders on sexual assault and examiner programs was needed due to high turnover among staff at relevant community institutions and agencies, such as law enforcement officers, victim advocates, and prosecutors. Through our interviews with officials, we learned of strategies that selected states have used to increase stakeholder support for examiners and examiner programs or to mitigate the effects of limited support. For example, officials in Colorado, Florida, Nebraska, Oregon, and Wisconsin told us that sexual assault response teams have been developed in their states to help community stakeholders understand examiners' role and better coordinate to meet the medical and legal needs of sexual assault victims. Further, a 2005 nationally representative survey of examiner program coordinators found that some programs addressed limited stakeholder support for examiner training by negotiating with employers to count training as paid work. Officials in Colorado also suggested that one strategy to mitigate limited hospital support for examiners would be to partner with non-hospital facilities, such as health clinics, that might support examiner programs. Low examiner retention rates.
Officials in four of six selected states told us that low examiner retention rates can be an impediment to maintaining a supply of trained examiners. In addition to the challenges of limited training opportunities, limited technical assistance and other supportive resources, and weak stakeholder support, the physically and emotionally demanding nature of examiner work contributes to low examiner retention rates. Further, studies have indicated that dissatisfaction with compensation, long work hours, and lack of support, among other things, may contribute to examiner burnout. Examiners typically work on-call in addition to their full-time jobs as, for example, emergency department nurses. Officials in Florida told us that examiners may be on-call for 6-hour, 12-hour, or even 24-hour shifts. Further, one survey of examiner programs in Maryland found that examiners were required to be on-call for an average of 159 hours per month. Wisconsin officials estimated that although 540 SANEs were trained over a 2-year period, only 42 (less than 8 percent) were still practicing in the state at the end of those 2 years. In addition, the 2005 survey of examiner program coordinators found that nearly two-thirds believed that examiner staffing generally was a challenge, and nearly one-third believed that SANE retention was a challenge. We provided a draft of this report to DOJ for review. DOJ provided technical comments that we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees, the Attorney General of the United States, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or IritaniK@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix II. The following three tables show the number of grantees that used funding to provide training for examiners and the number of examiners who received training by federal grant program and state in 2013 and, where available, 2014. In addition to the contact named above, Kristi Peterson (Assistant Director), Leia Dickerson, Katherine Mack, Laurie Pachter, and Emily Wilson made key contributions to this report.
In 2013, about 285,000 individuals age 12 or older were reported to be victims of sexual assault, according to the Bureau of Justice Statistics. Studies have shown that exams performed by sexual assault forensic examiners—medical providers trained in collecting and preserving forensic evidence—may result in better physical and mental health care for victims, better evidence collection, and higher prosecution rates. Yet concerns have been raised about the availability of examiners. The Violence Against Women Reauthorization Act of 2013 authorized funding for DOJ grant programs that can be used by states and other eligible entities, such as nonprofit organizations, to train and fund examiners. GAO was asked to review the availability of examiners nationwide. In this report, GAO describes (1) the prevalence and use of federal grants to train or fund sexual assault forensic examiners, (2) what is known about the availability of such examiners nationwide and in selected states, and (3) the challenges selected states face in maintaining a supply of examiners. GAO analyzed 2013 DOJ data on grantees' use of funding to train or fund examiners—the most recent full year of data available—and reviewed literature, relevant laws, and DOJ documentation. GAO also interviewed grantees in six states selected based on several factors, including population and geographic location, as well as DOJ officials, Department of Health and Human Services officials, and experts, such as health care association officials. Federal funding from three key Department of Justice (DOJ) grant programs can be used to train or fund sexual assault forensic examiners and for a range of other activities related to sexual assault, domestic violence, dating violence, and stalking. In 2013, at least one grantee in 49 states used such funds to provide training to examiners, and at least one grantee in 26 states funded examiner positions. In 49 states, approximately 227 grantees or subgrantees—referred to collectively as grantees—reported providing training for over 6,000 examiners in 2013. The type of training examiners received ranged from comprehensive examiner training to training on specific topics, such as courtroom testimony. The extent of examiner training efforts supported with funds from the three DOJ grant programs varied by state. For example, in about half of the states, fewer than 100 examiners received training. In addition, in the states where at least one grantee funded examiner staff positions in 2013, grantees funded less than one position, on average. Approximately 75 grantees in 26 states funded roughly 50 full-time equivalent examiner positions in 2013. On the basis of literature GAO reviewed as well as interviews with experts and state officials, data on the number of examiners nationwide and in selected states are limited or unavailable. However, officials in all six selected states told GAO that the number of examiners available in their state did not meet the need for exams, especially in rural areas. For example, officials in Wisconsin explained that nearly half of all counties in the state do not have any examiners available. In health care facilities where examiners are available, they are typically available in hospitals on an on-call basis, though the number available varies by facility and may not provide enough capacity to offer examiner coverage 24 hours a day, 7 days a week. There are multiple challenges to maintaining a supply of examiners, according to interviews with officials in six selected states.
These include: Limited availability of training. Officials in five of six selected states reported that the availability of classroom, clinical, and continuing education training opportunities is a challenge to maintaining a supply of trained examiners. Weak stakeholder support for examiners. Officials in five of six selected states reported that obtaining support from stakeholders, such as hospitals, was a challenge. For example, hospitals may be reluctant to cover the costs of training examiners or to pay examiners to be on-call. Low examiner retention rates. The above-mentioned and other challenges, including the emotional and physical demands on examiners, contribute to low examiner retention rates. Officials in one state estimated that while the state trained 540 examiners over a 2-year period, only 42 of those examiners were still practicing in the state at the end of those 2 years. Officials described strategies that can help address these challenges, such as implementing web-based training courses, clinical practice labs, mentorship programs, and multidisciplinary teams that respond to cases of sexual assault. DOJ provided technical comments on a draft of this report, which GAO incorporated as appropriate.
VA’s health care system was established in 1930 primarily to provide rehabilitation and continuing care for veterans injured during wartime. Now, VA’s health care system serves about as many low-income veterans with medical conditions unrelated to wartime service as service-connected veterans. VA’s system comprises one of the nation’s largest networks of direct delivery health care providers. It includes 173 hospitals, 376 outpatient clinics, 133 nursing homes, and 39 domicilaries. These facilities are organized into a system of medical centers that typically include one or more hospitals as well as some of the other types of health care facilities. These facilities provided care to about 2.2 million veterans at a cost of about $16 billion in fiscal year 1995. VA has experienced a dramatic decline in its hospital inpatient workload. Over the past 25 years, the average daily workload in VA hospitals dropped by about 56 percent (from 91,878 in 1969 to 39,953 in 1994). VA reduced its operating beds by about 50 percent, closing or converting to other uses about 50,000 hospital beds. A number of factors could lead to a continued decline in VA hospital inpatient workload. For example: The veteran population is estimated to decline by one-half over the next 50 years. The downsizing of the military will likely make the decline even more dramatic. The number of veterans with health insurance coverage is expected to increase, which will likely decrease demand for VA acute hospital care. The nature of insurance coverage is changing. For example, increased enrollment in health maintenance organizations—from 9 million in 1982 to 50 million in 1994—is likely to reduce the use of VA hospitals. VA hospitals too often serve patients whose care could be more efficiently provided in alternative settings. The major veterans service organizations noted in their 1996 Independent Budget that a recent study indicated that VA could reduce its hospital inpatient workload by up to 44 percent if it treated patients in more appropriate settings. VA’s Under Secretary for Health testified in April 1995 that it will not be that many years before acute care hospitals become primarily intensive care units taking care of only the sickest and most complicated cases, having switched all other medical care to other settings, including ambulatory care settings, hospices, and extended care facilities. For fiscal year 1996, VA medical centers proposed to headquarters more than $3 billion in funding requests for major construction projects. VA headquarters officials reviewed and prioritized these projects. In the fiscal year 1996 budget request, the President asked the Congress to appropriate $514 million for nine projects. The projects range in size from $9 million to renovate nursing units in one hospital to $211.1 million to build a new medical center, as shown in table 1. On March 17, 1995, VA’s Under Secretary for Health announced a plan called “Vision for Change” to restructure the Veterans Health Administration. Essentially, VA’s central office and regional office structure would be replaced with veterans integrated service networks (VISN) supported by VA headquarters and such other infrastructures as management assistance councils. The plan calls for 22 VISNs, each headed by an accountable director and consisting of 5 to 11 medical centers. Each network would cover areas that reflect patient referral patterns and aggregations of patients and facilities to support primary, secondary, and tertiary care. 
The plan is designed to increase the efficiency of VA-provided health care by trimming unnecessary management layers, consolidating redundant medical services, and using available community resources. VA began implementing the plan on October 1, 1995. On August 29, 1995, the Under Secretary requested input from top VA health officials and others on a draft paper containing criteria for use in realigning medical facilities and programs as well as for siting new VA health care facilities. The paper was developed to help VA management identify opportunities for efficiencies. For example, it suggests that medical center directors use community providers if the same kind of services of equal or higher quality are available either at lower cost or at equal cost but in more convenient locations for patients. It also encourages medical center directors to use nearby VA facilities and to merge, integrate, or consolidate duplicative or similar services if doing so would yield significant administrative or staff efficiencies or if projected demand for services is expected to decrease significantly. The nine projects would, for the most part, benefit veterans needing VA inpatient care. The two new medical centers are intended to reduce veterans' travel distances or times to access VA care. The seven renovation projects are intended to improve delivery of veterans' health care at existing medical centers by correcting fire and safety deficiencies, improving patient environment, and increasing efficiency. The renovation projects would not correct all the deficiencies at the seven medical facilities. (See app. II for detailed project information.) The proposed medical centers in Brevard County, Florida, and at Travis Air Force Base in Fairfield, California, are intended to improve veterans' geographic access to VA health care in east central Florida and northern California, respectively. As we reported in August 1995, the Brevard project, which includes a 470-bed hospital, a 120-bed nursing home, and an ambulatory care clinic, would improve access to VA health care facilities for many of the 258,000 veterans living in a six-county target area. The target area currently is served by VA medical centers in Gainesville, Tampa, Bay Pines (psychiatric care only), and West Palm Beach that are, respectively, 175, 125, 155, and 120 miles from the Brevard site. Our analysis of VA documents showed that the Travis project would provide VA with 243 hospital beds and an outpatient clinic and is intended to improve access to VA health care facilities for many of the 447,000 veterans living in a 32-county target area. Veterans in the area currently receive outpatient care from clinics of VA's Northern California Health Care System (NCHCS) in Berkeley, Martinez, Oakland, Redding, and Sacramento; a day treatment facility in Martinez; and some inpatient care from the Travis Air Force Base Hospital, with which VA has negotiated for the use of 55 interim beds in anticipation of the Travis project. They also receive inpatient care from VA medical centers in San Francisco, Palo Alto, Livermore, and Fresno, California; and Reno, Nevada. NCHCS officials said that northern California veterans find these facilities difficult to access due to distance, congested highways, poor public transportation, and such geographic obstacles as the Sierra Nevada mountain range and San Francisco Bay.
Two VA studies showed that inpatient utilization of northern California and northern Nevada VA medical centers has decreased since VA closed its Martinez medical center in 1991 because of earthquake safety concerns. The studies recognized that several factors could have influenced utilization but had no evidence to indicate the extent to which the decline in utilization was caused by the lack of access to VA facilities. NCHCS's acting director believes, however, that the decline is significantly attributable to the lack of access. All seven renovation projects would enhance the delivery of health care for patients at existing VA medical centers in the seven target areas, as shown in table 2. Medical center officials said that all seven projects would correct safety deficiencies and five would correct fire deficiencies. For example, two projects would widen patient room doors that are too narrow for beds, thereby allowing bedridden patients to be easily evacuated in case of fire and transported for treatment and other services without the risk of being dropped when removed from their beds. Most projects also would install sinks in patient rooms, reducing the risk of spreading infection and disease. One project would extend fire stairs from the fourth to the top floor of a five-story hospital, providing an escape route for patients in case of fire. Medical center officials told us that fire or safety deficiencies had been identified by the Joint Commission on Accreditation of Healthcare Organizations (JCAHO)—the organization that assesses medical facilities' capabilities to provide quality care. Officials in three centers said that their centers were not cited because they planned to correct the deficiencies with their proposed projects. Officials at three medical centers said that accreditation might be jeopardized if deficiencies are not corrected. According to one medical center director, losing JCAHO accreditation would make attracting medical staff difficult and could jeopardize the center's affiliation with its neighboring medical school. Officials at another medical center said that they would correct deficiencies with minor construction funds if the project is not funded. Medical center officials also expected all the projects to improve patient environment. For example, all would increase patient privacy, primarily by converting patient rooms now containing as many as nine beds and congregate bath and toilet facilities to single and double rooms with private and semiprivate bathrooms. Five would improve handicapped accessibility through such modifications as installing hand and wheelchair rails and increasing the number of wheelchair-accessible bathrooms. Most would also upgrade heating and air conditioning, improving air quality and increasing patient comfort. Finally, all the projects are expected to increase the medical centers' efficiency. For example, officials at two medical centers said that nursing staff should save time and energy spent escorting patients to remote congregate bath and toilet facilities when such facilities are replaced with private and semiprivate bathrooms. Staff at two facilities should save time spent escorting patients to dining and treatment rooms in remote buildings when these rooms are relocated to the buildings where patients reside. In addition, staff at one medical center would no longer use intensive care beds for patients requiring only routine monitoring once monitoring equipment is installed in patient rooms.
Our analysis of VA documents shows that VA has identified major construction needs in addition to the proposed projects. We reported in August 1995 that, along with the new Brevard medical center, VA plans to spend $14 million to convert a former Naval hospital into a VA nursing home for veterans in east central Florida. VA studies indicate that, in addition to the new Travis medical center, VA plans to build a new 120-bed nursing home and a replacement outpatient clinic in Sacramento, California. Also, the 5-year facility plans for the seven medical centers with proposed renovation projects show that, in addition to the proposed projects, the facilities need about $308 million for other major and minor projects. This includes almost $210 million for 20 major projects and about $98 million for 47 minor projects. The plans identify at least one major construction project for each of six medical centers and at least two minor construction projects for each of the seven centers, as shown in table 3. While it is too early to know the effects of VA's planned realignment efforts, most officials in the seven existing medical centers and NCHCS do not believe the plan would significantly affect the need for and scope of their projects. However, if VA's reorganization plan were to change the medical centers' missions, their physical requirements would change. Moreover, if—as VA now contemplates under its realignment plans—alternatives to the nine projects had been rigorously analyzed as the project proposals were being developed, lower-cost alternatives to construction might have been identified. To the extent that the reorganization plan would change the missions and service populations of the nine VA medical centers, the centers' physical requirements would change. The plan was announced in March and has not been fully implemented, so its effects are unknown. However, officials in most of the seven medical centers with proposed renovation projects and NCHCS believe that it should not significantly affect the need for or scope of their proposed projects. Officials in the medical centers with proposed renovation projects believe that their medical centers will continue to provide the same health care services to veterans in the target areas and will continue to need renovation. NCHCS officials also believe that veterans in the proposed Travis target area continue to need better access to VA inpatient care. Most VA officials said that the nine projects were developed without rigorously analyzing available alternatives, including the types of service delivery alternatives that the proposed criteria suggest be analyzed for realigning existing medical facilities and siting new ones. Had they done so, lower-cost alternatives might have been identified. In August 1995, we reported that building the new Brevard medical center is not the most prudent and economical use of VA's resources. VA did not adequately consider the availability of hundreds of community nursing home beds and unused VA hospital beds or the potential decrease in future demand for VA hospital beds. VA could achieve its service goals for the target area by using existing capacity. For example, it could buy more convenient and less costly care from community nursing homes and use the former Naval hospital in Orlando for more accessible medical and psychiatric services. As with Brevard, building the proposed Travis medical center may not be the most cost-effective option available at this time.
A VA task force study appropriately determined that it was the best option in June 1992, when a replacement for the closed Martinez medical center was being sought. However, circumstances have changed, creating an opportunity for more efficient or effective options. For example, using the Mather Air Force Base hospital in Sacramento to serve veterans could be a viable option. It would provide a 105-bed facility, about 30 miles from the Travis site, that could be more accessible to many veterans in the Travis medical center target area. The VA task force rejected the Mather Air Force Base hospital in Sacramento as a viable option for a joint VA and Department of Defense venture, but two of the factors that led to the rejection of that facility have now changed. First, although the Air Force had planned to use the Mather hospital to serve McClellan Air Force Base beneficiaries, the Department of Defense now plans to close McClellan. Second, the hospital at Mather was rejected because it was too small to meet VA's projected needs for a 243-bed facility. However, some of VA's needs are currently being met with the 55 beds negotiated at the Travis Air Force Base hospital, and some needs could possibly be met with available community hospital space. Moreover, the demand for inpatient care in the target area will likely decline in the future as the veteran population in northern California declines, as it is projected to do throughout the country. The VA task force that selected the Travis site for the proposed VA medical center had ranked other options involving dual inpatient locations higher than the Travis option for veterans' access to health care but had rejected these options, in part, because they were too costly. Now, however, with the planned closing of McClellan and the possible availability of the Mather hospital for VA use, providing some inpatient care at the Travis hospital and some at the Mather hospital (or another northern California site) may provide veterans with better and more cost-effective access to VA health care than a single Travis project can. VA also did not rigorously analyze available alternatives when developing the seven renovation proposals. Had criteria similar to those recently proposed by VA for realigning medical facilities and siting new ones been used, lower-cost alternatives might have been identified. The need for the proposed projects was determined on the basis of the physical needs identified in the medical centers' facility development plans. These plans indicate that some alternatives were considered, but officials at most of the seven medical centers told us that they did not conduct detailed studies or analyses of all available options. Some said, for example, that they did not thoroughly explore the possibility of using community and other VA medical facilities. They believe, however, that using other VA medical centers is infeasible, usually because the other VA centers are too far away or do not provide the needed medical services, and that using community facilities is infeasible, usually because contracting for care is thought to be too expensive.
Because construction schedules for all but two projects show that construction award dates would be July 1996 or later, the dates for starting construction would be delayed only 1 to 3 months; the Reno, Nevada, project would be delayed 11 months, and the Travis project has no single award date because it has several phases. Delaying the projects longer would extend the construction award dates. Moreover, VA headquarters officials expressed concern that, if delayed, the projects may not be selected for VA’s fiscal year 1997 major construction budget because VA may identify other higher priority projects. Most medical center officials believe that delaying the awards of construction contracts would increase costs due to inflation. However, delaying the awards until fiscal year 1997 would have minimal effects on costs because cost increases from inflation would involve time periods of fewer than 3 months for most projects. Similarly, savings expected from increased efficiencies would be lost for only a short time. In addition, VA would defer for a relatively short time the project activation costs, which are estimated at more than $100 million for the Brevard and Travis projects, and the costs associated with providing such new services as air conditioning. The effects on costs would increase if the project award dates slip beyond fiscal year 1997. VA officials told us that veterans would continue receiving health care regardless of how long project funding is delayed. Long-term commitments for any major construction or renovation of predominantly inpatient facilities in today’s rapidly changing health care environment accompany high levels of financial risk. VA’s recent commitment to a major realignment of its health care system magnifies such risk by creating additional uncertainty. For example, our assessment of the proposed Brevard project shows the potential for lower cost alternatives to new construction for meeting veterans’ needs. In addition, we believe that analyzing such alternatives in connection with the other major construction projects in VA’s budget proposal is entirely consistent with VA’s suggested realignment criteria. Delaying funding for these projects until the alternatives can be fully analyzed may result in more prudent and economical use of already scarce federal resources. The Congress may wish to consider delaying funding for all major VA construction projects until VA has completed its criteria for assessing alternatives to such projects and applied the criteria to projects that it proposes for congressional authorization and funding. If it wants to avoid significant delays of construction awards for projects that are ultimately justified under VA’s pending assessment criteria, the Congress may wish to make design funds available in fiscal year 1996 for the proposed projects. We obtained comments on a draft of this report from VA officials, including the Deputy Under Secretary for Health. VA officials disagree with our suggestion that the Congress may wish to consider delaying funding of all major construction projects until VA has completed and applied criteria for assessing alternatives to projects proposed for congressional authorization and funding. VA officials reiterated that the proposed new medical center in Brevard County, Florida, should not be delayed because they believe the facility is needed, as explained in comments on our report, VA Health Care: Need for Brevard Hospital Not Justified (GAO/HEHS-95-192, Aug. 29, 1995). 
They also said that the proposed replacement medical center at Travis Air Force Base should be fully funded in fiscal year 1996. In addition, they do not believe that the remaining seven projects should be delayed because the projects would correct fire, safety, and environmental deficiencies in some of VA's most antiquated facilities. They said that without needed attention, fire and safety code violations at these facilities could conceivably result in catastrophic consequences. VA officials said that the implication in our report that the planned realignment creates uncertainty in construction needs is misleading. They recognize that the VISN concept is new but do not believe that the planned realignment will preclude the need to upgrade the facilities. Officials in the seven medical centers scheduled for renovation and NCHCS do not believe that the realignment will significantly affect the need for and scope of their projects. The VA officials told us that VA managers recently validated the projects' consistency with the needs of a network organization and with anticipated facility missions and workloads. These officials believe that veterans will continue to be served at the facilities. They said that any uncertainty about construction needs is created by the uncertain future direction of health care in general, not by VA's planned realignment. Despite these arguments, we continue to believe that the Congress should consider delaying funding for construction of major projects until VA has had time to implement its planned realignment efforts. This implementation is expected to include completing and applying criteria for assessing all alternatives for serving veterans, such as using community or other VA facilities. VA's planned realignment efforts have merit, and VISN directors need time to determine what changes should be made to improve the effectiveness and efficiency of VA health care delivery. We believe that the planned realignment creates uncertainty because it appears to suggest that medical centers may not operate in the future as they do today. However, our review showed that VA determined the medical centers' construction needs on the basis of the assumption that the centers would continue to operate essentially as they do today. Our concern is that VA may determine, as part of the realignment effort, that services provided by one or more of the facilities could be provided more effectively or efficiently through sharing or contracting with other providers or consolidating with services of other VA medical centers. If the proposed construction projects are under way, VA may continue providing services as usual, even though doing so may be less effective or efficient than other potential service alternatives. Delaying construction funding should provide VA the time needed to assess available alternatives to the proposed renovation projects and to reexamine the Travis project in view of changed circumstances, such as the closure of McClellan Air Force Base in Sacramento. If the assessment shows that the facilities would operate for 25 or more years, the projects would be justified. Our position that the proposed Brevard project is unjustified remains unchanged. VA officials are concerned that delaying project funding could significantly affect construction award dates. They said that design funds for most projects have already been delayed and have not been approved.
Without congressional approval of design funding, no awards of construction document contracts will be made for fiscal year 1996. According to VA officials, this will delay project schedules for at least 1 year. We have revised our "Matter for Congressional Consideration" to clarify that the Congress may wish to make design funds available in fiscal year 1996 for the proposed projects if it wants to avoid significant delays of construction awards for projects that are ultimately justified under VA's pending assessment criteria. Our observation that delaying project funding until fiscal year 1997 should have a negligible effect on construction award dates for projects if design schedules are met is based on the premise that design funds will be available for the projects in fiscal year 1996. VA officials also said that the projects were not intended to correct all the deficiencies at the seven medical centers scheduled for renovation. They said that the size and number of projects in the fiscal year 1996 request were constrained by anticipated budget levels and that VA managers were instructed to limit the size of projects to address only the most pressing patient environment, ambulatory care, and infrastructure needs. Moreover, they said that the six projects involving renovation of inpatient areas purposely affect 50 percent or less of the total inpatient space at most facilities to recognize the downsizing of inpatient care capability. We reported that the projects would not correct all deficiencies to discuss the projects in proper perspective—not to criticize VA for failing to make all corrections at once. Appendix II shows that the proposed projects would affect only a fraction of the inpatient beds in most of the facilities scheduled for renovation. While the renovation projects generally would reduce the number of upgraded inpatient beds, the fact remains that VA's fiscal year 1996 major construction budget focuses on inpatient care. We are sending copies of this report to the Ranking Minority Member of the Subcommittee on Hospitals and Health Care; the Chairmen and Ranking Minority Members of the House and Senate Committees on Veterans' Affairs; the Chairmen and Ranking Minority Members of the House and Senate Subcommittees on VA, HUD, and Independent Agencies, Committees on Appropriations; and the Secretary of Veterans Affairs. Copies also will be made available to others on request. Please call me at (202) 512-7101 if you or your staff have any questions about this report. Other contributors to this report are listed in appendix III. To obtain information about the projects included in VA's fiscal year 1996 budget request, including a description of the projects and expected benefits, we reviewed key VA documents, such as VA's fiscal year 1996 major construction budget request and the facility development plans and 5-year facility plans for the seven medical centers where renovation projects are planned. We also visited the seven medical centers with proposed renovation projects and the NCHCS, where we interviewed VA officials and reviewed such project-specific documents as the architect and engineer plans, schematic drawings, and project space programs. For the remaining project, we used information gathered during recently completed work on another assignment. Further, we discussed the projects with officials in the medical centers and NCHCS.
In addition, we discussed the Travis project; the Reno, Nevada, project; and VA construction procedures with officials in VA's Western Region and headquarters. To assess the relationship between the proposed projects and VA's planned efforts to realign its medical facilities into 22 VISNs and the effect of delaying the projects, we reviewed the proposed plan; selected testimony of the Under Secretary for Health; and the August 29, 1995, draft paper containing criteria for realigning VA facilities and programs. We discussed how the construction budget would be affected by this plan with officials in VA's Western Region and headquarters. We also discussed how individual projects would be affected by VA's planned restructuring with officials in the seven medical centers and NCHCS. We conducted our review between June and October 1995 in accordance with generally accepted government auditing standards. This appendix contains information on the nine proposed projects in the President's fiscal year 1996 VA major construction budget request. For each proposed project, it provides a general description, including characteristics of the existing medical center or target service area, characteristics of the project, and information on additional construction plans for the target area. It also provides the expected veterans' health care benefits and VA costs, the relationship between the proposed project and VA's planned reorganization, and the potential effects of delayed project funding on veterans' health care and VA costs. The planned reorganization, called "Vision for Change," was announced on March 17, 1995, by VA's Under Secretary for Health. Essentially, the Veterans Health Administration's central office and regional office structure would be replaced with VISNs supported by VA headquarters and such other infrastructures as management assistance councils. The plan calls for 22 VISNs, each headed by an accountable director and consisting of 5 to 11 medical centers. Each network would cover areas that reflect patient referral patterns and aggregations of patients and facilities to support primary, secondary, and tertiary care. The plan is designed to increase the efficiency of VA-provided health care by trimming unnecessary management layers, consolidating redundant medical services, and using available community resources. VA began implementing the plan on October 1, 1995. On August 29, 1995, the Under Secretary requested input from top VA health officials and others on a draft paper containing criteria for use in realigning medical facilities and programs, as well as for siting new VA health care facilities. The paper was developed to help VA management identify opportunities for efficiencies. For example, it suggests that medical center directors use community providers if the same kinds of services of equal or higher quality are available either at lower cost or at equal cost but at more convenient locations for patients. It also encourages using nearby VA facilities and merging or consolidating duplicative or similar services if doing so would yield significant administrative or staff efficiencies or if projected demand for services is expected to decrease significantly. The proposed Brevard project would construct a new medical center on 77 acres in Brevard County, Florida. The target service area would be six counties in east central Florida, where 258,000 veterans live. The new medical center would provide primary and secondary medical, surgical, and psychiatric care and nursing home care.
It also would be the psychiatric referral facility for all Florida VA medical centers. The center would have 470 hospital beds (195 medical, 45 surgical, and 230 psychiatric) and 120 nursing home beds. It would not be affiliated with any medical school or have any agreements with the Department of Defense or other medical institutions. The project includes 792,524 gross square feet of hospital and outpatient clinic space and 57,886 gross square feet of nursing home space. The estimated cost is $171.9 million, of which $17.2 million was previously appropriated for design and other costs. The Brevard target area is currently served by VA medical centers in Gainesville, Tampa, Bay Pines (psychiatric care), and West Palm Beach, which are, respectively, 175, 125, 155, and 120 miles from the Brevard site. When the Brevard medical center opens, inpatient workload for these centers would decline, increasing their excess capacity. Veterans' health care: The new Brevard medical center is designed to improve access to VA hospital care for veterans in east central Florida. As a state-of-the-art facility, it would comply with all fire, safety, and other requirements. VA costs: VA estimates that activation costs would be $34.9 million and recurring costs, $88.7 million, primarily for 1,329 staff; resources would be shifted from other medical centers to staff and operate this center. The Brevard project manager in VA headquarters said that it is too early to know the effects of the planned reorganization on the proposed Brevard medical center or east central Florida veterans. VA did not consider all available options when developing the Brevard proposal. In August 1995, we reported that converting the former 153-bed Orlando Naval hospital to a nursing home and building a new hospital and nursing home in Brevard is not the most prudent and economical use of VA resources. VA inadequately considered the availability of hundreds of community nursing home beds and unused VA hospital beds as well as potential decreases in future demand for VA hospital beds. VA could achieve its service goals by using existing capacity. For example, VA could purchase care from community nursing homes to meet veterans' needs more conveniently and at lower costs ($106 versus $207 per patient day) and use the former Naval hospital to improve veterans' access to medical and psychiatric care. VA could also use excess beds in its Gainesville, Tampa, and Bay Pines medical centers if necessary. Considering such alternatives would ensure that VA's planning strategy focuses on the most prudent and economical use of resources. Also, such lower cost alternatives would provide VA the opportunity to meet its service delivery goals in a more timely manner. Veterans' health care: East central Florida veterans would continue to receive care from community and other less convenient VA medical facilities. VA costs: Project design is scheduled for completion in February 1996, construction award in September 1996, and construction completion in December 1999. The project manager said that if the project is delayed, inflation would increase construction costs; no estimates had been made. Also, the construction boom in Florida could have an even greater effect on costs because Disney World and the general housing market place a high demand on construction.
The proposed Travis project, which is a joint venture with the Air Force, would be a major addition and alteration to the David Grant Medical Center at Travis Air Force Base. The target service area is 32 counties in northern California where 447,000 veterans live. The project would provide VA with 243 beds, including 170 new ones and 73 existing ones dedicated by the Air Force for VA use; add new ambulatory care space; and renovate existing radiology, dietetic, and other support space. The new medical center would provide primary and secondary medical, surgical, and psychiatric care. It would be affiliated with the University of California at Davis. The project includes 560,502 gross square feet of new construction and 125,450 gross square feet of renovation. The estimated cost is $211.1 million, of which $22.6 million was previously appropriated for design and other costs. Northern California veterans currently receive inpatient care from VA medical centers in San Francisco, Palo Alto, Livermore, and Fresno, California; and Reno, Nevada, which, according to NCHCS officials, are difficult to access due to distance, congested highways, poor public transportation, and such geographic obstacles as the Sierra Nevada mountain range and the San Francisco Bay. When the Travis medical center opens, inpatient workloads for these VA medical facilities will likely decline. VA plans to request funds for an outpatient clinic to replace a small antiquated clinic and for a new nursing home, both in Sacramento. Veterans' health care: NCHCS officials said that the new Travis medical center would improve access to VA hospital care for northern California veterans. It would be a state-of-the-art medical facility. As a joint venture with the Air Force, the center would provide opportunities for savings, through shared equipment and specialties, and increased opportunities for education, training, and research. VA costs: VA estimates activation costs would be $67.1 million and recurring costs, $72.5 million, primarily for 969 staff. NCHCS officials do not believe that VA's planned reorganization would significantly affect the need for the new Travis medical center. They said that NCHCS already extensively coordinates with the medical centers that would be in the proposed VISN and that the need for a medical center to serve northern California would not change. VA considered a number of options before selecting the Travis site as the best one to provide quality care to the largest number of veterans with the lowest life-cycle costs. In December 1992, we reported that in selecting the replacement site for the closed Martinez medical center, VA should consider the construction cost, the time needed to complete construction, effects on veterans' access to care, potential for affiliation with medical schools, environmental impact, capabilities of the replacement facility, and consistency with the long-range needs of VA and the Department of Defense beneficiaries in the target area. We also noted that VA's basis for closing the Martinez facility on an emergency basis was unclear and that the analysis leading to the decision to locate the replacement facility in Davis, California, was flawed, was biased against renovating the Martinez medical center, and did not adequately consider all available options. On the basis of an analysis by a second task force, VA announced on November 10, 1992, that the replacement facility would be located at Travis Air Force Base.
In addition to analyzing 10 potential siting options, the task force report discussed opportunities for a sharing or joint venture at the David Grant Medical Center, Mather Air Force Base, and Letterman Army Medical Center. The task force rejected the Mather Air Force Base hospital as a viable option because the Air Force was planning to use the hospital to serve McClellan Air Force Base, it was too small (105 beds), and it had seismic and other safety problems. Veterans' health care: Northern California veterans would continue to receive services from community and other less convenient VA medical centers. VA costs: Project design is scheduled for completion in February 1996 and construction completion in June 2000. If the project is delayed, inflation would increase construction costs. The Boston medical center is a nine-building campus on 21 acres that serves the New England states. It is affiliated with Boston University Medical and Dental, Tufts University Medical and Dental, and Harvard University Dental schools, and it has sharing agreements with the New England Baptist Hospital, the Shattuck Hospital, and the New England Organ Bank. It is the tertiary medical and surgical center for VA medical centers in New England and provides psychiatric care. In fiscal year 1994, the average number of operating hospital beds was 215 medical, 117 surgical, and 108 psychiatric; and the average daily census was 151, 93, and 71, respectively. The center admitted 9,156 patients and provided 355,437 outpatient visits, and about 94.3 percent of its patients were category A veterans, including 42.6 percent service-connected, 43 percent nonservice-connected low-income, and 8.7 percent nonservice-connected with special needs (4.6 percent were other veterans, and 1.1 percent were nonveterans). This $28 million, 97,722-gross-square-foot ambulatory care project would add a three-story section to the main hospital to replace the existing operating, recovery, and emergency rooms. It would provide 130 new outpatient examination rooms; new operating, recovery, and emergency rooms; and a 170-space parking deck. The project would not correct all the medical center's deficiencies. Boston's 5-year facility plan also includes $59.4 million for four major projects (a research addition, a hospital seismic renovation, and ward renovations) and $28.9 million for 11 minor construction projects. Veterans' health care: The Boston medical center would not serve any new types of patients or provide any new services. Medical center officials said that the project would correct safety deficiencies, improve patient environment, and increase efficiency. Expanding the emergency room would correct deficiencies cited by JCAHO for insufficient space for patient care and privacy. Colocating the operating and recovery rooms would correct infection control deficiencies cited by JCAHO. Increasing the number of specialty and general-purpose examination rooms would improve staff scheduling and reduce overcrowding and patient inconvenience in accordance with VA's policy to provide veterans an accessible, modern environment; current outpatient space is adequate for about one-half the workload under VA space standards. Relocating the emergency room closer to the ambulance offload area would eliminate the need to transport patients through public corridors, reducing the time for treatment and increasing privacy. Modernizing the operating rooms would provide space to accommodate additional medical specialists and the latest equipment.
Expanding the parking space would reduce crowding and provide weather protection for patients, increasing customer satisfaction. In addition, handicapped accessibility would be improved. VA costs: VA estimates that activation costs would be $14.6 million and recurring costs, $3.1 million, partly for four additional staff. VA plans to offset some costs by consolidating Boston's outpatient clinics. The Boston medical center director did not believe that VA's planned reorganization would significantly affect the medical center because it should continue to be the tertiary center for the proposed VISN, although it would serve one fewer medical center than it currently serves. Medical center officials believe that no feasible alternative exists but conducted no formal studies or analyses. Using other VA medical centers would not be feasible because many do not have the expertise or equipment to provide the kinds of care provided by Boston, such as radiation therapy, intensive chemotherapy, and kidney transplants. Some, like the Brockton and Bedford medical centers, which primarily provide psychiatric outpatient care, cannot provide needed services; some are too far away, such as West Haven medical center, which is about 150 miles away—a 3-hour drive from Boston; and some, such as the West Roxbury medical center, are operating at capacity. Using community facilities would be infeasible because contracting is prohibitively expensive; officials estimate that outpatient care in community facilities would be about $185 per visit versus their cost of about $69, and emergency room care would cost about $1,000 per visit versus their cost of about $166. Renovating the hospital would be infeasible because all hospital floors are being used, it would be too costly to move the existing support columns to make room for larger operating rooms, and there is no overhead space for needed utilities. Renovating other buildings would be infeasible because they are too small, used for research or other specific purposes, or too far from the hospital. Finally, segmenting the project would not be feasible because total costs could increase by up to $6 million; deleting the ambulatory care facility would render the current operating, recovery, and emergency rooms too small for outpatient clinics; and no nearby acreage is available for parking. Veterans' health care: Boston medical center officials said that the center would continue to provide ambulatory care in an increasingly constrained, outmoded physical plant; patient infection risk, scheduling, and privacy problems would continue; operations would continue to be performed in a suite that is not suited for current and future diagnostic and monitoring equipment or procedures; and parking would remain inadequate. VA costs: Project design is scheduled for completion in December 1995, construction contract award in July 1996, parking lot completion in September 1996, and building construction completion in January 1999. According to the chief of engineering services, inflation would increase costs by $1.25 million for each year the project is delayed. The Reno medical center is a 16-building campus on 14 acres that serves 23 counties in northern Nevada and northeast California. It is affiliated with the University of Nevada School of Medicine and has sharing agreements with the Nevada Army and Air National Guard and Sierra Army Depot. It provides primary and secondary medical and surgical care, psychiatric care, and nursing home care.
During fiscal year 1994, the average number of operating beds was 58 medical, 22 surgical, 32 psychiatric, and 60 nursing home beds; and the average daily census was 40, 18, 17, and 54, respectively. The center admitted 3,796 inpatients and provided 122,044 outpatient visits, and about 96 percent of its patients were category A veterans, including 35.3 percent service-connected, 51 percent nonservice-connected low-income, and 9.7 percent nonservice-connected with special needs (0.4 percent were other veterans, and 3.5 percent were nonveterans). This $27.4 million ($7.3 million was previously appropriated for design), 108,639-gross-square-foot patient environment project would add a five-story medical, surgical, and psychiatric nursing unit to the main hospital to replace existing nursing units. It would replace four-bed rooms and congregate bath and toilet facilities with single and double rooms with private, wheelchair-accessible bathrooms; upgrade HVAC and other utility systems; install medical gases (oxygen and suction) and nurses' call systems in patient rooms; expand ambulatory care capabilities; relocate the loading dock, trash compactor, generator and research buildings, and bulk oxygen storage tanks; and demolish and replace existing engineering Quonset huts. The project would decrease the number of beds from 112 to 110 and, according to VA headquarters officials, could be scoped down if demand for inpatient care decreases. It would not affect nursing home beds. The project would not correct all the medical center's deficiencies. Reno's 5-year facility plan also includes $35.0 million for five major construction projects to build and expand the ambulatory care facilities, expand nursing home care, and replace HVAC in two buildings and $6.4 million for three minor construction projects. Veterans' health care: The Reno medical center would not serve any new types of patients or provide any new services. Reno medical center officials said that the project would correct fire and safety deficiencies and improve patient environment. Installing a sprinkler system and adding in-wall medical gases and suction would correct deficiencies under JCAHO life safety standards and meet VA requirements. Adding isolation rooms designed for patients with such highly infectious diseases as tuberculosis and acquired immunodeficiency syndrome and installing sinks in every patient room would help decrease the spread of infection and disease. Replacing existing four-bed rooms and congregate bath and toilet facilities with single and double rooms with private bathrooms would not only comply with VA privacy goals and JCAHO patient rights standards but also improve staff efficiency and eliminate the need to close bathrooms when in use by the opposite sex. Widening doors and hallways would comply with JCAHO environment-of-care requirements. Upgrading air conditioning would increase patient comfort. VA costs: VA estimates that activation costs would be $5.6 million and recurring costs, $10.1 million; no staff changes are planned. Medical center officials believe that operating costs would increase due to the addition of air conditioning, but maintenance costs would decrease because of more efficient equipment and design; no cost estimates have been made. Reno officials believe that the planned reorganization would have no significant effect on the medical center or the proposed project. Reno's relationship with other VA medical centers in the proposed VISN would remain essentially as it is now.
For example, Reno would continue to send patients to San Francisco for cardiology and to Palo Alto for psychiatric services. Reno officials also believe that no feasible alternative exists but conducted limited cost studies when developing the proposed project. Using other VA medical centers would not be feasible because other facilities are too far away (the closest is over 200 miles away) and are too difficult to access, especially in the winter for patients who must cross the Sierra Nevada mountain range. Using community facilities would be infeasible because Reno's medical school affiliate does not have its own medical facility; the affiliation would be threatened because no opportunity would exist for resident training; the continuity of care would be disrupted because patients would be treated by physicians who do not follow them in both inpatient and outpatient care; and contracting for community care is believed to be too expensive—officials estimated that the annual cost of contracting for all inpatient care, excluding physician fees, would range between $34 million and $71 million. Acquiring an existing facility would not be feasible because ambulatory care would be provided at the existing medical center and inpatient care would be provided at the acquired facility. Transporting patients, staff, and equipment between the two facilities would increase operational costs, inconvenience patients, and increase contract hospital costs. Renovating the facility would not be feasible because doing so would not eliminate narrow doors and hallways or correct certain other deficiencies, and patients would have to be put into costly community facilities during the renovation. Finally, segmenting the project would be infeasible because building only two or three of the five floors would not allow Reno to meet all JCAHO standards and would likely increase costs due to inflation; no estimates were made. Moreover, no guarantee exists that funding would be available to complete the project. Veterans' health care: The Reno medical center would continue to provide inefficient care in facilities that do not meet industry standards. In addition, medical center management and VA Western Region officials believe that a funding delay could result in losing JCAHO accreditation after the upcoming October 1995 accreditation review. Medical center management believes that losing accreditation would result in losing affiliation with the University of Nevada, causing university doctors, nursing staff, and other professionals to refuse to practice in the nonaccredited facility; research opportunities and funding also could be lost. VA costs: Design is scheduled for completion in November 1995, construction contract award in January 1996, and construction completion in January 1999, although the director believes that construction would be completed in late summer 1998. No cost estimates for a funding delay have been made. The Marion, Indiana, medical center is an 88-building campus on 151 acres that serves north central Indiana and northwestern Ohio. It is affiliated with Indiana University and four other state universities for education and training experience. It provides primary and secondary medical and surgical care, nursing home care, and tertiary psychiatric care for other VA medical centers in Indiana. For fiscal year 1994, the average number of operating beds was 124 medical, 320 psychiatric, and 69 nursing home; and the average daily census was 97, 285, and 65, respectively.
The center admitted 2,037 inpatients and provided 54,701 outpatient visits, and about 93 percent of its patients were category A veterans, including 33.8 percent service-connected, 48.2 percent nonservice-connected low-income, and 11.2 percent nonservice-connected with special needs (3.1 percent were other veterans, and 3.6 percent were nonveterans). This $17.3 million, 69,259-gross-square-foot patient environment project would construct a new two-story psychiatric nursing care building to replace three existing buildings, which would remain vacant. The project would replace rooms with up to four beds and congregate bath and toilet facilities with single and double rooms with private bathrooms (12 single rooms and 16 double rooms would be wheelchair-accessible); locate nursing stations on the same floor with patient rooms; and add dining facilities, elevators, and central heat and air conditioning to the building. The project would decrease acute psychiatric beds from 141 to 100. The project would not affect other buildings on the campus. The project would not correct all the medical center's deficiencies. Marion's 5-year facility plan also includes a $9 million major construction project and $3.5 million for three minor projects. Moreover, Marion received $45.8 million in fiscal year 1992 for a new 240-bed geropsychiatric facility. Veterans' health care: The Marion, Indiana, medical center would not serve any new types of patients or provide any new services. Medical center officials said that the project would construct a modern building that would correct fire and safety deficiencies, improve patient environment, and improve efficiency. The buildings' attic floors currently do not meet fire code. Replacing existing four-bed rooms with single and double rooms with private baths would meet VA privacy goals. Increasing the number of handicapped-accessible rooms and installing elevators would meet VA accessibility criteria. Installing central heating and air conditioning would increase patient and staff comfort. Locating dining facilities and other support services in the patient building would save staff time transporting patients and traveling between buildings, and locating nursing stations on patient floors would improve patient monitoring and supervision. Providing all acute psychiatric care in one building would further reduce staff travel time, and strategically locating nursing stations would allow more efficient patient monitoring. VA costs: VA estimates that activation costs would be $3.2 million and recurring costs, almost $800,000 annually, primarily for 12 additional staff. Marion medical center officials believe that VA's planned reorganization would not significantly affect the medical center. They believe that Marion would be the psychiatric referral facility for the seven other VA medical centers that would be in the proposed VISN. Further, workload may increase, not only as a result of the plan but also because Indiana closed a large state mental health facility this year; Indiana state officials have already tried to place veterans in the Marion facility. Marion officials also believe that no feasible alternative exists to the new center.
Using other VA medical centers would not be feasible because the Fort Wayne medical center does not provide psychiatric care; the Indianapolis medical center, with only 20 acute psychiatric beds, has limited capacity; and psychiatric facilities in VA medical centers in Chillicothe, Cleveland, and Dayton, Ohio, are more than 4 hours away, and Indiana law prohibits referring patients with court-ordered treatment across state lines. Using community facilities would not be feasible because northern Indiana has no comparable community inpatient psychiatric facilities. Officials rejected renovating the existing buildings because doing so would be too expensive, but they had made no cost estimates. In addition, renovation would not correct patient privacy problems and would only partly improve inefficient operations—staff would continue to spend time transporting patients across the campus for treatment, meals, and other activities—and installation of elevators would reduce space available for patient rooms. Finally, segmenting the project is not practical because an entire new building must be built at once. Veterans' health care: The Marion, Indiana, medical center would continue to provide inefficient care in facilities that do not meet industry standards. Medical center officials noted, however, that some deficiencies would be corrected by installing elevators and central heat and air conditioning with minor construction funds in fiscal year 1997. In addition, Marion officials are concerned that JCAHO accreditation could be lost if the project is not funded. VA costs: Project design is scheduled for completion in August 1995, construction award in September 1996, and construction completion in November 1998. If the project is delayed, inflation would increase construction costs; no estimates have been made. The Salisbury medical center is a 27-building campus on 155 acres that serves 17 counties in southern North Carolina. It is affiliated with eight institutions and has agreements with Bowman Gray School of Medicine for ophthalmology services and with Rowan Memorial Hospital for treatment of patients when the VA system has no space or when transferring patients to another VA facility is too risky. It provides primary and secondary medical and surgical care and nursing home care and is the psychiatric referral center for all VA medical centers in North Carolina. During fiscal year 1994, the average number of operating beds was 330 medical, 24 surgical, 235 psychiatric, and 93 nursing home beds; and the average daily census was 320, 22, 181, and 89, respectively. The center admitted 3,457 inpatients and provided 93,196 outpatient visits, and about 95.3 percent of its patients were category A veterans, including 49.3 percent service-connected, 35.8 percent nonservice-connected low-income, and 10.2 percent nonservice-connected with special needs (4.5 percent were other veterans, and 0.3 percent were nonveterans). The proposed $17.2 million, 106,871-gross-square-foot patient environment project would renovate medical and surgical nursing units in one building. It would expand all three floors over the entrance; convert rooms with up to four beds and shared or congregate toilet and bath facilities to single rooms with private, handicapped-accessible bathrooms; upgrade air circulation, electrical, and plumbing systems; and expand the fire stairs at the end of the corridors from the fourth floor to the fifth floor. It would decrease the number of beds in the renovated area from 174 to 162.
The project would not affect other buildings on campus. The project would not correct all the medical center's deficiencies. Salisbury's 5-year facility plan also includes $51.8 million for five major construction projects and $11.8 million for six minor projects. In addition, Salisbury received fiscal year 1987 funds for a geropsychiatric center and fiscal year 1993 funds for a new nursing home. Veterans' health care: The Salisbury medical center would not serve any new types of patients or provide any new services. Medical center officials said that the project would correct fire and safety deficiencies, improve patient environment, and increase efficiency. Extending the fire stairs up to the fifth floor to eliminate dead-end corridors would comply with National Fire Protection Association and National Building Code standards. Upgrading plumbing and electrical systems would comply with Underwriters Laboratories, National Electrical Code, and National Fire Protection Association standards. Overhauling the fresh air exchange and replacing the fan coil system with an all-air system to eliminate potential risks associated with recirculating water-cooled air and to improve indoor air quality would meet American Society of Heating, Refrigerating and Air-Conditioning Engineers standards. Converting to single rooms with private, handicapped-accessible bathrooms would increase privacy, improve handicapped accessibility, decrease the risk of infectious disease, and eliminate the need for staff to carry patient waste to inconvenient congregate facilities. Increasing patient room space would make room for furniture and medical equipment so that mechanical lifts can be properly operated, reducing the risk of injury to patients and staff. Relocating nurses' stations would provide a better line of sight and improve the monitoring of patients. Increasing storage space would allow halls and offices to be used as intended. VA costs: VA estimates that activation costs would be $2.8 million and recurring costs would be $3.2 million annually, primarily for 52 added staff. Salisbury medical center officials said that it is too soon to know the effect of the planned reorganization, but they believe that it would have little effect on the medical center's operations. They do not think that the medical center's mission or the need for the project would change significantly; that is, the statewide VA network would remain intact, with the four VA medical centers in North Carolina continuing to function as in the past. Medical center officials also believe that no feasible alternative exists but conducted no cost or feasibility studies. They believe that using other VA medical centers would not be feasible because the centers are more than 100 miles away. Leasing space, establishing sharing agreements, and contracting for community care would be infeasible because of the lack of available facilities or high cost. New construction would not be feasible because it would be too expensive. The project could be segmented, but doing so would not be practical because patient floors would be disrupted for long periods of time and costs would be higher. Veterans' health care: The Salisbury medical center would continue to provide inefficient care in facilities that do not meet industry standards. In addition, Salisbury officials said that JCAHO accreditation could be jeopardized, although Salisbury has not received any citations in the past. They also said that veterans may choose not to seek care from Salisbury.
VA costs: Project design is scheduled for completion in August 1995, construction contract award in September 1996, and construction completion in December 1999. If the project is delayed, the chief engineer said, deficiencies would be corrected with a series of smaller projects that would take longer and be less efficient and more costly; no estimates have been made. The Perry Point medical center is a 208-building campus on 478 acres that serves Maryland, the District of Columbia, and parts of Delaware, Pennsylvania, Virginia, and West Virginia. It is affiliated with the University of Maryland and Johns Hopkins University medical schools, has sharing agreements with the Department of Defense to provide cardiology services and with Harford Memorial Hospital to provide specialized diagnostic testing, and provides training programs with over 20 colleges and universities. It provides primary and secondary medical care, long-term care, and tertiary psychiatric care. In fiscal year 1994, the average number of operating beds was 248 medical and 340 psychiatric, and the average daily census was 167 and 246, respectively. The center admitted 3,056 inpatients and provided 92,646 outpatient visits, and about 92.2 percent of its patients were category A veterans, including 36 percent service-connected, 46.2 percent nonservice-connected low-income, and 10.1 percent nonservice-connected with special needs (7.5 percent were other veterans, and 0.3 percent were nonveterans). This $15.1 million, 73,028-gross-square-foot patient environment project would renovate psychiatric nursing units in two buildings. It would convert rooms with up to six beds and congregate bath and toilet facilities to single and double rooms with private and semiprivate handicapped-accessible bathrooms; relocate nursing stations; upgrade HVAC systems; add therapeutic support space to both buildings; remodel one cafeteria and relocate another; and correct basement flooding problems. The number of beds in the two buildings would decrease from 160 to 108. The project would not correct all the medical center's deficiencies. Perry Point's 5-year facility plan also includes $30 million for a major construction project to build a new nursing unit building and $22.2 million for nine minor construction projects for clinical improvements, patient environment improvements, and fire and safety deficiency corrections. Veterans' health care: The Perry Point medical center would not serve any new types of patients or provide any new services. Perry Point officials said that the project would improve the patient environment and increase efficiency. JCAHO had identified deficiencies but had not cited Perry Point for violations because the deficiencies were to be corrected with the project. Relocating nursing stations and adding therapy space would improve patient observation and supervision. Replacing rooms with up to six beds and congregate bath and toilet facilities with single and double rooms with handicapped-accessible private bathrooms would correct privacy deficiencies and improve patient accessibility. Upgrading elevators and locating treatment space and cafeterias in the buildings would save staff time transporting patients. Locating supply rooms more conveniently should save nurses' time. In addition, the director and chief of staff believe that the project would make Perry Point more competitive with community providers. VA costs: VA estimates that activation costs would be $2.0 million and recurring costs, $0.5 million.
Medical center officials estimate that upgrading HVAC would save about $2,000 a year in operations costs. Perry Point's director believes that the planned reorganization would have no significant impact on the medical center. Perry Point's mission would not change because it is the only VA medical center in the proposed VISN that would provide long-term psychiatric care. Under the realignment, however, three of the medical centers in the network—Perry Point, Baltimore, and Fort Howard—will be managed by one director. Perry Point officials believe that they have no feasible options. Using other VA medical centers would be infeasible because they are too far away. The closest facility, Coatesville, does not have the capacity to handle the number of patients cared for by Perry Point. Using community facilities would not be feasible because the affiliated facilities do not provide the tertiary care that Perry Point provides and others are prohibitively expensive. Renovation was selected over new construction because the existing buildings are structurally sound and management thought that this option would provide a better chance of obtaining other needed construction at the center. Finally, segmenting the buildings is not feasible because all the buildings need renovation. Veterans' health care: The Perry Point medical center would continue to provide veterans with inefficient care in facilities that do not meet industry standards. In addition, officials said that the medical center would continue to be less attractive than community facilities in competing for patients. VA costs: Project design was completed in September 1995; construction contract award is scheduled for August 1996, and construction completion for February 1999. If the project is delayed, Perry Point's chief engineer said, inflation would increase costs; no estimates had been made. Moreover, increased competition in the local construction industry could further raise costs. The Marion, Illinois, medical center is a 14-building campus on 76 acres that serves southern Illinois, southwest Indiana, and western Kentucky. It is affiliated with Southern Illinois University School of Medicine and colleges in Missouri, Kentucky, Indiana, and Illinois and has sharing agreements with Naval Reserve Fleet Hospital 500 and the Army Reserve 21st General Hospital. It provides primary and secondary medical and surgical care and nursing home care. During fiscal year 1994, the average number of operating beds was 123 medical, 26 surgical, and 60 nursing home beds; and the average daily census was 81, 17, and 60, respectively. The center admitted 4,784 inpatients and provided 58,007 outpatient visits, and about 94.4 percent of its patients were category A veterans, including 21.2 percent service-connected, 62.6 percent nonservice-connected low-income, and 10.6 percent nonservice-connected with special needs (5.0 percent were other veterans, and 0.6 percent were nonveterans). This $11.5 million, 49,157-gross-square-foot patient environment project would renovate medical and surgical nursing units on two floors and part of a third in a four-story hospital building.
It would convert rooms with up to nine beds and congregate bath and toilet facilities to single and double rooms with private, handicapped-accessible bathrooms; convert a first-floor hospital wing to patient rooms; move the existing outpatient clinic; modernize the intensive care unit; replace the electrical, heating, air conditioning, and plumbing systems; and modify the interior structure for seismic protection. Medical center officials said that the number of beds would not change. The project would not correct all the medical center's deficiencies. Marion's 5-year facility plan includes no additional major construction projects but includes $4.2 million for two minor projects. In addition, a new $15.6 million outpatient clinic is under construction. Veterans' health care: The Marion, Illinois, medical center would not serve any new types of patients or provide any new services. Medical center officials said that the project would correct fire and safety deficiencies, improve patient environment, and increase efficiency. They said that JCAHO had not cited the medical center for fire and life and safety violations because the project would correct the violations but noted that failure to complete the project in a timely manner would result in citations. Upgrading air conditioning would not only reduce the risk of airborne infection and improve patient comfort but also correct National Fire Protection Association code violations by reducing the threat of smoke inhalation from a fire. Upgrading electrical and medical gas systems would also correct code violations. Converting patient rooms to single and double rooms with private handicapped-accessible baths would meet VA space and handicapped accessibility criteria, Uniform Federal Accessibility Standards, and VA privacy goals. Removing asbestos from the building and making seismic improvements also would increase patients' safety. Expanding the nursing station space would reduce instances of transcription and medication errors and eliminate the crowding of administration and medical professionals. Increasing room space would eliminate the need to move beds when doors are opened or closed, patients are moved in or out of the room, or bedside treatment is given to patients. Adding waiting rooms for relatives and other visitors would increase customer satisfaction. VA costs: VA estimates that activation costs would be $3.1 million and recurring costs would be $0.3 million; no staff changes are planned. The senior engineer estimates that the project would save $146,000 in annual utility and maintenance and repair costs. The Marion, Illinois, medical center director believes that the proposed project complements VA's planned reorganization and that the reorganization would have no significant effect on the medical center. This is because the medical center would continue to provide basic health care in the new target area; support the Secretary's mandate to "put patients first"; and meet the VISN objective of ensuring patient satisfaction, access, quality, and efficiency. The director also believes that no feasible alternative exists. Using other VA medical centers would not be feasible because the nearest VA hospital is 120 miles away and continuity of care would be disrupted. Renovating the hospital rather than constructing a new one is considered cost-effective, but no studies have been done. Segmenting is not feasible because the utility systems need total replacement and the project would involve the entire hospital.
When developing the project proposal, medical center officials determined that several options were infeasible. Using community facilities would be infeasible because renting bed space would increase costs by about $5.8 million per year, and contracting for inpatient care would destroy the continuity of patient care and increase costs by about $6.6 million per year. Also, reducing the number of beds in existing rooms would fail to meet the Secretary's priority of comparable facilities, perpetuate deficiencies, and increase maintenance and repair costs. Veterans' health care: The Marion, Illinois, medical center would continue to provide veterans with inefficient care in facilities that do not meet industry standards. VA costs: Project design is scheduled for completion in January 1996, construction contract award in December 1996, and construction completion by August 1999. The senior engineer estimates that if the project is delayed for 3 years, inflation would increase construction costs by $1.8 million. In addition, a likely utility system failure would require increased repairs and interim upgrades costing $3.3 million. The Lebanon medical center is a 31-building campus on 213 acres that serves south central Pennsylvania. It is affiliated with the Pennsylvania State University College of Medicine and 45 other colleges and universities and has several sharing agreements with the Department of Defense. It provides primary and secondary medical and surgical care and nursing home care. In fiscal year 1994, the average number of operating beds was 256 medical, 20 surgical, 193 psychiatric, and 177 nursing home beds; and the average daily census was 187, 10, 169, and 166, respectively. The center admitted 3,421 patients and provided 78,040 outpatient visits, and about 89 percent of its patients were category A veterans, including 41.1 percent service-connected, 38.7 percent nonservice-connected low-income, and 9.5 percent nonservice-connected with special needs (9.8 percent were other veterans, and 0.9 percent were nonveterans). This $9 million, 50,425-gross-square-foot patient environment project would renovate medical and surgical nursing units on three floors of one building. The project would replace rooms with up to four beds and congregate bath and toilet facilities with single and double rooms with private and semiprivate bathrooms; relocate and expand nursing stations and other support space; upgrade HVAC, electrical, medical gas, and other building systems; improve patient amenities; and establish a combined psychiatric and acute medical care unit. The number of beds in the renovated area would decrease from 128 to 110. The project would not affect the rest of the renovated building or any other buildings on the campus. The project would not correct all the medical center's deficiencies. Lebanon's 5-year facility plan also includes $24.3 million for four major construction projects to develop a rehabilitation center for the blind, consolidate rehabilitation outpatient clinic and administrative services, renovate a nursing home facility, and expand ambulatory care facilities, and $21.2 million for 13 minor projects. Veterans' health care: The Lebanon medical center would not serve any new types of patients or provide any new services. Medical center officials said that the project would correct fire and safety deficiencies, improve patient environment, and increase efficiency.
Widening the doorways, which are now too narrow for gurneys, would bring the medical center into compliance with all fire code requirements. Increasing the number of handicapped-accessible patient rooms and bathrooms by installing hand and wheelchair rails and other modifications would meet JCAHO and Americans with Disabilities Act (ADA) space standards. Installing sinks in patient rooms would eliminate the need for nurses, doctors, and patients to use remote congregate bathrooms to wash hands and dispose of patient waste and would address JCAHO's requirements for adequate infection control. Converting most rooms to single and double rooms with private and semiprivate bathrooms would move toward VA's privacy goals (the goals would not be totally met because Lebanon obtained a waiver allowing several double rooms to share bathrooms, since exterior wall construction would preclude building private bathrooms in some areas). Upgrading ventilation would improve indoor air quality. Upgrading heating and air conditioning should make patients more comfortable. Telephones and televisions would be installed in every room, improving patient comfort and satisfaction. In addition, efficiency would increase because nurses would spend less time on such routine tasks as disposing of human waste, bathrooms would not be closed to patients of the opposite sex, and intensive care units would not be used for routine patient monitoring because regular rooms are not equipped with monitoring devices. Finally, Lebanon should be able to compete better with private providers. VA costs: VA estimates that activation costs would be $1.8 million and recurring costs would be $0.4 million; no staff changes are planned. The executive assistant to Lebanon's director anticipates savings from reduced operating and maintenance costs for the renovated area; no estimates were made. The Lebanon medical center director said that it is too early to know how the reorganization would affect the project and medical center but believes that it would have little effect because the medical center would continue to serve veterans and continue to need renovation. The medical center director also believes no feasible alternative exists, but no studies have been conducted. Using other VA medical centers would not be feasible because they do not provide acute care or are too far from Lebanon; the closest is about 75 miles away. Using community facilities would be infeasible because private hospitals do not want to serve veterans who do not have insurance or the income to pay for care because such veterans are viewed as high-cost risks—being generally older, sicker, and poorer and often having alcohol abuse and other social problems. Further, transferring medical and surgical functions to the Pennsylvania State University College of Medicine would be too expensive because a new building would have to be constructed at the University and too inconvenient because Lebanon nursing home patients would have to be transported 17 miles to the University if they needed medical or surgical services. Constructing a new building would be infeasible because new construction would be more expensive; no cost analysis has been done. Finally, segmenting the project would be infeasible because plumbing and some other renovations are interrelated and require refurbishing all three floors. Veterans' health care: Lebanon medical center officials said that the center would continue to provide veterans with inefficient care in facilities that do not meet industry standards.
VA costs: Project design was completed in August 1995, construction award is scheduled for August 1996, and construction completion for February 1999. According to the executive assistant, inflation would increase the cost of construction about 5 percent for each year the project is delayed. Moreover, the executive assistant believes that using minor construction funds to renovate the nursing units would lengthen the completion time and increase costs.

Paul R. Reynolds, Assistant Director, (202) 512-7109
Byron S. Galloway, Assignment Manager
John A. Borrelli, Evaluator-in-Charge
Linda S. Bade
Ralph J. Dagostino
Sylvia Diaz Jones
Vincent J. Forte
John R. Kirstein
Thomas P. Monahan
Nancy T. Toolan
Paul R. Reynolds, Assistant Director, (202) 512-7109
Byron S. Galloway, Assignment Manager
John A. Borrelli, Evaluator-in-Charge
Linda S. Bade
Ralph J. Dagostino
Sylvia Diaz Jones
Vincent J. Forte
John R. Kirstein
Thomas P. Monahan
Nancy T. Toolan

Pursuant to a congressional staff request, GAO provided information on nine proposed Department of Veterans Affairs (VA) construction projects, focusing on: (1) the projects' benefits to veterans; (2) VA efforts to realign all of its facilities into new service networks; and (3) the potential effects of funding delays on VA contract award dates and costs. GAO found that: (1) the nine proposed construction projects would primarily enhance VA inpatient care capacity within designated target areas; (2) the two new medical centers would improve veterans' access to quality care and attract new users; (3) the seven renovation projects at existing medical centers would benefit users by correcting fire and safety deficiencies and improving the patient environment; (4) the medical centers undergoing renovation need an additional $308 million to correct all of their deficiencies; (5) VA has not considered all available alternatives to the construction projects, partially because planned realignment criteria have not been finalized; (6) the construction of new and renovated facilities will likely limit future VA realignment decisions, since the facilities have an expected useful life of 25 years or more; and (7) delaying funding until fiscal year (FY) 1997 is likely to have a minimal effect on VA contract award dates and costs, but longer delays could significantly increase costs and award date slippage.
Since the 1960s, the United States has operated two separate polar-orbiting meteorological satellite systems. These systems are known as the Polar-orbiting Operational Environmental Satellites (POES), managed by the National Oceanic and Atmospheric Administration's (NOAA) National Environmental Satellite, Data, and Information Service (NESDIS), and the Defense Meteorological Satellite Program (DMSP), managed by the Department of Defense (DOD). These satellites obtain environmental data that are processed to provide graphical weather images and specialized weather products, and that are the predominant input to numerical weather prediction models—all used by weather forecasters, the military, and the public. Polar satellites also provide data used to monitor environmental phenomena, such as ozone depletion and drought conditions, as well as data sets that are used by researchers for a variety of studies, such as climate monitoring. Unlike geostationary satellites, which maintain a fixed position above the earth, polar-orbiting satellites constantly circle the earth in an almost north-south orbit, providing global coverage of conditions that affect the weather and climate. Each satellite makes about 14 orbits a day; as the earth rotates beneath it, each satellite views the entire earth's surface twice a day. Today, there are two operational POES satellites and two operational DMSP satellites, positioned so that they can observe the earth in early morning, morning, and afternoon polar orbits. Together, they ensure that for any region of the earth, the data are generally no more than 6 hours old. Figure 1 illustrates the current operational polar satellite configuration.
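As a rough check on the orbital geometry just described, 14 orbits a day implies one orbit roughly every 103 minutes; the arithmetic below is ours, not GAO's, and merely restates the figures above.

```python
# Back-of-the-envelope orbital arithmetic from the figures in this testimony.
minutes_per_day = 24 * 60
orbits_per_day = 14                     # each polar satellite's approximate rate
orbit_period = minutes_per_day / orbits_per_day
print(f"Approximate orbital period: {orbit_period:.0f} minutes")  # about 103
```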
Besides the four operational satellites, there are five older satellites in orbit that still collect some data and are available to provide limited backup to the operational satellites should they degrade or fail. In the future, both NOAA and DOD plan to continue to launch additional POES and DMSP satellites every few years, with final launches scheduled for 2008 and 2009, respectively. Each of the polar satellites carries a suite of sensors designed to detect environmental data either reflected or emitted from the earth, the atmosphere, and space. The satellites store these data and then transmit them to NOAA and Air Force ground stations when the satellites pass overhead. The ground stations then relay the data via communications satellites to the appropriate meteorological centers for processing. Under a shared processing agreement among the four processing centers—NESDIS, the Air Force Weather Agency, the Navy's Fleet Numerical Meteorology and Oceanography Center, and the Naval Oceanographic Office—different centers are responsible for producing and distributing different environmental data sets, specialized weather and oceanographic products, and weather prediction model outputs via a shared network. Each of the four processing centers is also responsible for distributing the data to its respective users. For the DOD centers, the users include regional meteorology and oceanography centers as well as meteorology and oceanography staff on military bases. NESDIS forwards the data to the National Weather Service for distribution and use by forecasters. The processing centers also use the Internet to distribute data to the general public. NESDIS is responsible for the long-term archiving of data and derived products from POES and DMSP. In addition to the infrastructure supporting satellite data processing noted above, properly equipped field terminals that are within a direct line of sight of the satellites can receive real-time data directly from the polar-orbiting satellites. There are an estimated 150 such field terminals operated by the U.S. government, many by DOD. Field terminals can be taken into areas with little or no data communications infrastructure—such as on a battlefield or ship—and enable the receipt of weather data directly from the polar-orbiting satellites. These terminals have their own software and processing capability to decode and display a subset of the satellite data to the user. Figure 2 depicts a generic data relay pattern from the polar-orbiting satellites to the data processing centers and field terminals. Polar satellites gather a broad range of data that are transformed into a variety of products for many different uses. Satellite sensors observe different bands of radiation wavelengths, called channels, which are used for remotely determining information about the earth's atmosphere, land surface, oceans, and the space environment. When first received, satellite data are considered raw data. To make them usable, the processing centers format the data so that they are time-sequenced and include earth location and calibration information. After formatting, these data are called raw data records. The centers further process these raw data records into channel-specific data sets, called sensor data records and temperature data records. These data records are then used to derive weather products called environmental data records (EDRs). EDRs range from atmospheric products detailing cloud coverage, temperature, humidity, and ozone distribution; to land surface products showing snow cover, vegetation, and land use; to ocean products depicting sea surface temperatures, sea ice, and wave height; to characterizations of the space environment. Combinations of these data records (raw, sensor, temperature, and environmental data records) are also used to derive more sophisticated products, including outputs from numerical weather models and assessments of climate trends. Figure 3 is a simplified depiction of the various stages of data processing.
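To make the staged data flow above concrete, here is a minimal sketch in Python of the progression from raw data records to channel-specific sensor data records to a derived EDR. It is purely illustrative: the stage names come from the testimony, but every type, function, and retrieval formula below is hypothetical and does not represent the actual NPOESS processing software.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class RawDataRecord:
    """Formatted raw data: time-sequenced, with earth location and calibration."""
    timestamp: float
    lat: float
    lon: float
    counts: Dict[str, float]  # calibrated instrument values, keyed by channel

@dataclass
class SensorDataRecord:
    """Channel-specific data derived from raw data records."""
    channel: str
    values: List[float]

@dataclass
class EnvironmentalDataRecord:
    """A derived weather product, such as sea surface temperature."""
    product: str
    values: List[float]

def to_sensor_records(rdrs: List[RawDataRecord]) -> Dict[str, SensorDataRecord]:
    """Regroup raw data records into channel-specific sensor data records."""
    by_channel: Dict[str, List[float]] = {}
    for rdr in rdrs:
        for channel, value in rdr.counts.items():
            by_channel.setdefault(channel, []).append(value)
    return {ch: SensorDataRecord(ch, vals) for ch, vals in by_channel.items()}

def derive_edr(sdrs: Dict[str, SensorDataRecord], product: str,
               channel: str) -> EnvironmentalDataRecord:
    """Apply a toy retrieval formula to one channel to produce a product."""
    retrieved = [0.01 * v + 271.0 for v in sdrs[channel].values]  # fake physics
    return EnvironmentalDataRecord(product, retrieved)

if __name__ == "__main__":
    rdrs = [RawDataRecord(0.0, 38.8, -77.0, {"ir_window": 2650.0}),
            RawDataRecord(60.0, 39.4, -77.6, {"ir_window": 2712.0})]
    print(derive_edr(to_sensor_records(rdrs), "sea_surface_temperature", "ir_window"))
```

In the real systems each stage is far more involved (calibration, geolocation, and science algorithms), but the overall shape, raw records in and derived products out, is the same.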
EDRs can be either images or quantitative data products. Image EDRs provide graphical depictions of the weather and are used to observe meteorological and oceanographic phenomena, to track operationally significant events (such as tropical storms, volcanic ash, and icebergs), and to provide quality assurance for weather prediction models. The following figures present some polar-orbiting satellite images. Figure 4 is an image from a DMSP satellite showing an infrared picture taken over the west Atlantic Ocean. Figure 5 is a POES image of Hurricane Floyd, which struck the southern Atlantic coastline in 1999. Figure 6 is a polar-satellite image used to detect volcanic ash clouds, in particular the ash cloud resulting from the eruption of Mount Etna in 2001. Figure 7 shows the location of icebergs near Antarctica in February 2002. Quantitative EDRs are specialized weather products that can be used to assess the environment and climate or to derive other products. These EDRs can also be depicted graphically. Figures 8 and 9 are graphic depictions of quantitative data on sea surface temperature and ozone measurements, respectively. An example of a product that was derived from EDRs is provided in figure 10. This product shows how long a person could survive in the ocean—information used in military as well as search and rescue operations—and was based on sea surface temperature EDRs from polar-orbiting satellites. Another use of quantitative satellite data is in numerical weather prediction models. Based predominantly on observations from polar-orbiting satellites and supplemented by data from other sources such as geostationary satellites, radar, weather balloons, and surface observing systems, numerical weather prediction models are used to help forecast atmospheric, land, and ocean conditions hours, days, weeks, and months into the future. These models require quantitative satellite data to update their analysis of the weather and to produce new forecasts. Table 1 contains examples of models run by the processing centers. Figure 11 depicts the output of one common model. All this information—satellite data, imagery, derived products, and model output—is used in mapping and monitoring changes in weather, climate, the ocean, and the environment. These data and products are provided to weather forecasters for use in issuing weather forecasts and warnings to the public and to support our nation's aviation, agriculture, and maritime communities. Also, weather data and products are used by climatologists and meteorologists to monitor the environment. Within the military, these data and products allow military planners and tactical users to focus on anticipating and exploiting atmospheric and space environmental conditions. For example, Air Force Weather Agency officials told us that accurate wind and temperature forecasts are critical to any decision to launch an aircraft that will need mid-flight refueling. In addition to these operational uses of satellite data, there is also a substantial need for polar satellite data for research. According to experts in climate research, the research community requires long-term, consistent sets of satellite data collected sequentially, usually at fixed intervals of time, in order to study many critical climate processes. Some examples of research topics include long-term trends in temperature, precipitation, and snow cover. Given the expectation that converging the POES and DMSP programs would reduce duplication and result in sizable cost savings, a May 1994 presidential decision directive required NOAA and DOD to converge the two satellite programs into a single satellite program capable of satisfying both civilian and military requirements. The converged program is called the National Polar-orbiting Operational Environmental Satellite System (NPOESS), and it is considered critical to the United States' ability to maintain the continuity of data required for weather forecasting and global climate monitoring. To manage this program, DOD, NOAA, and the National Aeronautics and Space Administration (NASA) have formed a tri-agency integrated program office, located within NOAA. Within the program office, each agency has the lead on certain activities. NOAA has overall responsibility for the converged system, as well as satellite operations; DOD has the lead on the acquisition; and NASA has primary responsibility for facilitating the development and incorporation of new technologies into the converged system. NOAA and DOD share the costs of funding NPOESS, while NASA funds specific technology projects and studies. NPOESS is a major system acquisition estimated to cost $6.5 billion over the 24-year period from the inception of the program in 1995 through 2018.
The program is to provide satellite development, satellite launch and operation, and integrated data processing. These deliverables are grouped into four main categories: (1) the launch segment, which includes the launch vehicle and supporting equipment; (2) the space segment, which includes the satellites and sensors; (3) the interface data processing segment, which includes the data processing system to be located at the four processing centers; and (4) the command, control, and communications segment, which includes the equipment and services needed to support satellite operations. NPOESS will be a launch-on-demand system, and satellites must be available to back up the planned launches of the final POES and DMSP satellites. The first NPOESS satellite—designated C1—is scheduled for delivery in 2008 and is to be available to back up the planned launch of the final POES satellite in 2008. If C1 is not needed to back up the final POES, it will be launched in April 2009. The second NPOESS satellite is to be available to back up the planned launch of the final DMSP satellite in late 2009, or if not needed as a backup, it is to be launched in 2011. Subsequent launches are expected to occur approximately every 2 years through 2018. Program acquisition plans call for the procurement and launch of six NPOESS satellites over the life of the program and the integration of 13 instruments, including 11 environmental sensors and 2 subsystems. Together, the sensors are to receive and transmit data on atmospheric, cloud cover, environmental, climate, oceanographic, and solar-geophysical observations. The subsystems are to support nonenvironmental search and rescue efforts, as well as environmental data collection activities. According to the integrated program office, 8 of NPOESS's 13 instruments involve new technology development, whereas 5 others are based on existing technologies. The planned instruments and the state of technology on each are listed in table 2. Unlike the current polar satellite program, in which the four centers use different approaches to process raw data into the environmental data records that they are responsible for, NPOESS' integrated data processing system, to be located at the four centers, is expected to provide a standard system to produce these data sets and products. The four processing centers will continue to use these data sets to produce other derived products, as well as for input to their numerical prediction models. NPOESS is planned to produce 55 environmental data records (EDRs), including atmospheric vertical temperature profile, sea surface temperature, cloud base height, ocean wave characteristics, and ozone profile. Some of these EDRs are comparable to existing products, whereas others are new. The user community designated six of these data products—supported by four sensors—as key EDRs, and noted that failure to provide them would cause the system to be reevaluated or the program to be terminated. The NPOESS acquisition program consists of three key phases: the concept and technology development phase, which lasted from roughly 1995 to early 1997; the program definition and risk reduction phase, which began in early 1997 and is ongoing now; and the engineering and manufacturing development and production phase, which is expected to begin next month and continue through the life of the program. The concept and technology development phase began with the decision to converge the POES and DMSP satellites and included early planning for the NPOESS acquisition.
This phase included the successful convergence of the command and control of existing DMSP and POES satellites at NOAA's satellite operations center. The program definition and risk reduction phase involves both system-level and sensor-level initiatives. At the system level, the program office awarded contracts to two competing prime contractors—Lockheed Martin and TRW—to prepare for NPOESS system performance responsibility. These contractors are developing unique approaches to meeting requirements, designing system architectures, and developing initiatives to reduce sensor development and integration risks. These contractors will compete for the development and production contract. At the sensor level, the program office awarded contracts to develop five sensors. These sensors are in varying stages of development. This phase will end when the development and production contract is awarded. At that point, the winning contractor will assume responsibility for managing continued sensor development. The final phase, engineering and manufacturing development and production, is expected to begin next month when the development and production contract is awarded. The program office issued a request for proposals for the contract in February 2002 and is currently evaluating proposals, with an expectation of awarding the contract by the end of August 2002. The winning contractor will assume system performance responsibility for the overall program. This responsibility includes all aspects of design, development, integration, assembly, test and evaluation, operations, and on-orbit support. In May 1997, the integrated program office assessed the technical, schedule, and cost risks of key elements of the NPOESS program, including (1) the launch segment, (2) the space segment, (3) the interface data processing segment, (4) the command, control, and communications segment, and (5) the overall system integration. As a result of this assessment, the program office determined that three elements had high-risk components: the interface data processing segment, the space segment, and the overall system integration segment. Specifically, the interface data processing segment and overall system integration were assessed as high risk in all three areas (technical, cost, and schedule), whereas the space segment was assessed to be high risk in the technical and cost areas and moderate risk in the schedule area. The launch segment and the command, control, and communications segment were determined to present low or moderate risks. The program office expects to reduce its high-risk components to low and moderate risks by the time the development and production contract is awarded and to have all risk levels reduced to low before the first launch. Table 3 displays the results of the 1997 risk assessment as well as the program office's projections for those risks by August 2002 and by first launch. In order to meet its goals of reducing program risks, the program office developed and implemented an integrated risk reduction program that includes nine initiatives. While individual initiatives may address one or more identified risks, the program office anticipated that the combination of these nine projects would address the risk to overall system integration.
The nine projects are as follows: Deferred development: To reduce program risk, the program office deferred development of 21 EDR requirements either because the technology needed to implement the requirements did not exist or because the requirement was too costly. For example, the requirement for measuring ocean salinity was deferred until the technology needed to take these measurements has been demonstrated in space. If feasible, the program office plans to implement these requirements later as program enhancements. Early sensor development: Because environmental sensors have historically taken 8 years to develop, development of six of the eight sensors with more advanced technologies was initiated early. In the late 1990s, the program office awarded contracts for the development, analysis, simulation, and prototype fabrication of five of these sensors. In addition, NASA awarded a contract for the early development of one other sensor. Responsibility for delivering these sensors will be transferred from the program office and NASA to the winning development and production contractor. According to program office officials, these sensors should be delivered at least 2 years before the earliest expected NPOESS launch because of these early development efforts. Building on existing sensor technologies: In order to minimize risks, the program office used existing sensor technologies as a starting point from which to build new sensors and also plans to use some existing sensors on NPOESS. For example, the new cross-track infrared sounder sensor grew from technology used on the POES high-resolution infrared sounder and on the atmospheric infrared sounder carried on NASA’s Earth Observing System/Aqua satellite. Also, NPOESS’ data collection system is based on the data collection system already flying on another satellite and, according to program officials, will likely be available largely “off the shelf.” Program office officials reported that building on existing sensors should enable them to obtain half of the NPOESS sensors and almost half of the required 55 EDRs while reducing the risk of integrating new technology into the program. Ground demonstrations: To reduce the risk to the data processing segment, the program office had both of the program definition and risk reduction contractors conduct four ground-based demonstrations of hardware and software components of the data processing system. Because of work done during the program definition and risk reduction contract phase, the program office expects the interface data processing segment to be relatively mature before contract award. Internal government studies: To reduce the risks in integrating the NPOESS space and interface data processing segments, over the past 5 years, the integrated program office has overseen risk reduction studies performed by over 30 major scientific organizations, including government laboratories, major universities, and institutes. These studies include observing system simulation experiments and data assimilation studies, which involve simulating a future sensor and then identifying ways to incorporate the new data into products and models. For example, the studies were used to assess the impact of advanced sounders similar to those on NPOESS and the impact of NPOESS-like data on forecasts and end user products. 
Aircraft flights: Since 1997, the integrated program office has used aircraft flights to demonstrate satellite sensors and to deliver early data to its users so that they can begin to work with the data. For example, in 2001, the NPOESS airborne sounder testbed project began using NASA aircraft to provide an environment in which instruments could be tested under conditions that simulate space. Operational algorithm teams: The integrated program office established five operational algorithm teams to serve as scientific advisory groups. The teams, made up of representatives from government and federally funded research and development centers, worked with the program office for 5 years to oversee the development and refinement of various algorithms that NPOESS will use. They will continue to work with the development and production contractor to refine the data processing algorithms. WindSat/Coriolis demonstration: WindSat/Coriolis is a demonstration satellite, planned for launch in 2003, to test critical new ocean surface wind-observing science and technology that will be used in the NPOESS conical microwave imager/sounder sensor. This demonstration project will also help validate the technology needed to support various EDRs. NPOESS preparatory project: This is a planned demonstration satellite to be launched in early 2006, 2 to 3 years before the first NPOESS satellite launch. It is scheduled to host three critical NPOESS sensors (the visible/infrared imager radiometer suite, the cross-track infrared sounder, and the advanced technology microwave sounder), and it will provide the program office and processing centers an early opportunity to work with the sensors, ground control, and data processing systems. This satellite is expected to demonstrate about half of the NPOESS EDRs and about 80 percent of its data processing load. NPOESS is expected to produce a massive increase in the volume of data sent to the four processing centers, which presents considerable data management challenges. Whereas current polar satellites produce approximately 10 gigabytes of data per day, NPOESS is expected to provide 10 times that amount. When combined with increased data from other sources—other satellites, radar, and ground sensors—this increase in satellite data presents immense challenges to the centers’ infrastructures for processing the data and to their scientific capability to use these additional data effectively in weather products and models. The four processing centers and the integrated program office are well aware of these data management challenges and are planning to address them. Specifically, each of the four centers is planning to build its capacity to handle increased data volumes, and both the centers and the program office are working to improve their ability to assimilate new satellite data in their products. Because the NPOESS launch is several years in the future, agencies have time to build up their respective infrastructures and models so that they can handle increased data volumes. However, more can be done to coordinate and further define these efforts. The expected increase in satellite data from NPOESS presents a considerable challenge to the processing centers’ infrastructures for obtaining, processing, distributing, and storing satellite data. All four of the central processing centers reported that their current infrastructures would require changes in order to support expected NPOESS data streams. 
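Before turning to the specific shortfalls the centers reported, the scale of the increase is worth making explicit. The sketch below uses the 10-gigabyte figure cited above; the literal multiplication and the annualized total are our illustration, not GAO estimates.

```python
# Back-of-the-envelope data volume arithmetic using figures in this testimony.
# Assumption: the "10 times" multiplier applies literally to the 10 GB/day baseline.
current_daily_gb = 10     # approximate daily volume from today's polar satellites
npoess_multiplier = 10    # NPOESS is expected to provide about 10 times that amount

npoess_daily_gb = current_daily_gb * npoess_multiplier
annual_tb = npoess_daily_gb * 365 / 1000  # rough yearly volume, in terabytes

print(f"Expected NPOESS volume: about {npoess_daily_gb} GB per day")
print(f"Implied annual volume: roughly {annual_tb:.1f} TB per year")
```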
In fact, two centers reported that their current infrastructures could not support any of the NPOESS EDRs that they expect to use; another center reported that its infrastructure could not support 82 percent of the EDRs it expects to use; and the fourth center reported that its infrastructure could not support 27 percent of the EDRs that it will use. As for specific shortcomings, officials at the processing centers reported that they need to increase the computational power of the supercomputers that will process the data records, upgrade the communication systems used to transmit the data, and/or increase the storage capacity of the systems used to archive the data. For example, National Weather Service officials told us that current supercomputers could not process the vast amount of satellite data NPOESS will generate within the timeframes required to produce forecasts, because even today they are encountering computer capacity constraints. Specifically, the target usage rate for effectively processing modeling data is 50 percent of computing capacity. Officials told us that the average current usage rate is 70 percent of capacity, and usage often peaks well above this rate. As another example of an infrastructure challenge, officials at the Navy's Fleet Numerical Meteorology and Oceanography Center reported that even with recent upgrades to their local data storage capacity, their current infrastructure would likely be unable to support NPOESS' increased data volumes. To handle these increased data volumes, the four processing centers have begun high-level planning to transform their respective satellite data processing infrastructures. Understandably, the centers have not yet begun detailed planning for operational and technology change in the 2008 to 2009 timeframe because there are too many unknowns for them to do so reliably. For example, the architectural characteristics of the NPOESS system will not be known until sometime after the development and production contract is awarded later this year. Also, as center officials noted, technology changes so quickly that it is difficult to predict technology options 6 to 7 years from now. Although the centers are not yet building their infrastructures specifically to support NPOESS, officials told us that they are currently working to upgrade their infrastructures to support current and future data streams. For example, NOAA plans to increase the processing capacity of its supercomputers to handle the increased volume of satellite data expected over the next several years. In addition, the Air Force Weather Agency is in the process of upgrading its information technology infrastructure to increase the capacity of its computer and communications systems. The processing centers recognize the infrastructure challenges they face, and each is planning or initiating upgrades to improve its data management capacity to meet immediate challenges. Once the NPOESS development and production contract is awarded and the system design is determined, it is imperative that the four processing centers adjust and further define their future architectures to address this design and identify the steps they need to take to reach that future goal. All of the centers have expressed their intentions to do so. The increased data volumes from NPOESS pose a challenge to those seeking to use these data in operational weather models and products.
These models and products are heavily dependent on satellite data, but experts in the weather modeling community acknowledge that satellite data are not always used effectively because the science needed to understand and use the data is sometimes immature. For example, forecasters do not yet know how to use microwave data from areas covered in ice or under heavy precipitation in their weather prediction models. Experts reported that it often takes years of study and scientific advances to effectively assimilate new satellite data into weather models and to derive new weather products. While there is some debate as to how long it takes to develop the science to put new data in models, in 2000, the National Research Council reported that it generally takes 2 to 5 years of simulations and analyses before a satellite launch for data from new sensors to be effectively incorporated into weather models. The Council noted that if this work does not occur, there is a gap of several years during which data are collected but not used efficiently in models. Defense and civilian modeling officials reiterated the value of advance assimilation studies by citing an example in which such studies, performed before a new sensor was launched, allowed modelers to use the data only 10 months after launch. The processing centers acknowledge that much needs to be done before they are able to incorporate NPOESS data into operational products. Officials at the processing centers reported that they should be able to use some EDRs after only minor changes to their data processing algorithms and models, because these products are expected to be comparable to current products. Other EDRs, however, involve new data and will require major scientific advances in order to be used. That is, the centers will not be able to use these data until they conduct new scientific investigations and determine how best to use the data in their derived products and models. In fact, the three centers that are the heaviest planned users of NPOESS EDRs reported that about 45 percent of the EDRs they plan to use would require major advances in science. For example, NESDIS stated that it would take major science changes to be able to utilize all six of the key EDRs, including atmospheric vertical temperature profile, soil moisture, and sea surface winds. Table 4 lists the number of EDRs each of the processing centers plans to use and each center's views of how many of those EDRs require major science changes. Appendix II identifies the EDRs that the centers reported as requiring major scientific advancements. Effective and efficient use of satellite data in weather products, warnings, and forecasts is critical to maximizing our national investment in new satellites. A committee representing the four processing centers noted that expedited incorporation of new satellite data into weather models is a key metric for measuring NPOESS' success. Given that understanding, the processing centers and the integrated program office have various efforts under way and planned to address challenges in effectively using new NPOESS data. Key initiatives include the following: Joint Center for Satellite Data Assimilation: In July 2001, NOAA and NASA formed a joint organization to accelerate the rate at which satellite data are put into operational use.
While the center is currently focused on assimilating data from existing satellites, joint center scientists plan to undertake projects to accelerate the assimilation of future satellite data, including NPOESS data, into weather prediction models. The joint center received $750,000 in its fiscal year 2002 budget and requested $3.4 million for fiscal year 2003. In a November 2001 letter to the processing centers, the integrated program office offered to help fund the joint center's efforts to assimilate NPOESS data if the DOD processing centers were to join the joint center. The processing centers have discussed this option, but DOD has not yet made a final decision. Processing centers' assimilation projects: Two of the three military processing centers, the Air Force Weather Agency and the Navy Fleet Numerical Meteorology and Oceanography Center, have developed programs to improve assimilation of high-resolution satellite data into their models. They have also developed a program designed to improve their models so that they will be able to use data from the NPOESS preparatory project when those data become available. Other government-sponsored studies: As noted in its risk reduction efforts, the integrated program office has funded studies—both simulations and data assimilation studies—to prepare for the NPOESS data. Since fiscal year 1995, the program office has reportedly spent more than $3 million on satellite data assimilation experiments and projects to develop techniques for processing satellite data. For example, the program office funded NOAA to develop methods to begin processing and assimilating sounding data from the advanced infrared sounder on a NASA satellite. This effort was expected to pave the way for processing and assimilating data from two sensors that will fly on the NPOESS preparatory project in early 2006 and on NPOESS in the 2008 to 2009 timeframe. Between now and the first NPOESS satellite launch, the four processing centers and the integrated program office have time to meet the challenges in effectively using NPOESS data, but more can be done to coordinate and define these efforts. The four centers' views on their ability to use NPOESS EDRs in their models and products highlighted that the centers are not always consistent about whether a given NPOESS data product requires major scientific advancements. Specifically, the centers' views differ on over 30 EDRs. For example, in the case of one key EDR, atmospheric vertical temperature profile, one center states that it will require only minor software changes to use these data; another center states that it will require a major advancement in science to use the data; and a third states that it will not require a science change but instead will require an upgrade to its supporting infrastructure. Appendix II lists the processing centers' views of which EDRs require major scientific advancements in order to be used. While there may be valid reasons for some of these differences—such as the centers' differing uses for these EDRs or their varying customers' needs—the centers have not yet compared their differing views or identified opportunities for learning from other centers' expertise. Agency officials generally agreed that such coordination would be valuable and stated their intentions to coordinate.
In addition to coordinating on EDRs determined to pose scientific challenges, it will be important for the centers to identify what needs to be done to meet these major science challenges and to define their plans for doing so. However, the centers have not yet determined what actions are needed to effectively incorporate NPOESS EDRs in their respective models and derived products. Further, they have not yet established plans for addressing the specific EDRs that require major scientific advancements. Agency officials agreed that such planning is necessary and stated that they will likely accelerate these efforts after the development and production contract is awarded. Clearly, there are opportunities for the processing centers to coordinate their particular concerns, learn from other centers' approaches, and define their plans for addressing challenges in using EDRs. Given the years it takes to effectively incorporate new satellite data into operational products, it is critical that such coordination and detailed planning occur so that NPOESS data can be effectively used. In summary, today's polar-orbiting weather satellite program is essential to a variety of civilian and military operations, ranging from weather warnings and forecasts to specialized weather products. NPOESS is expected to merge today's two separate satellite systems into a single state-of-the-art weather and environmental monitoring satellite system to support all users. This new satellite system is expected to provide vast streams of data, far more than are currently handled by the four central processing centers. To prepare for these increased data volumes, the four data processing centers must address key data management challenges, including building up their respective infrastructures and working to be able to efficiently incorporate new data in their derived weather products and models. Because the NPOESS launch date is still several years in the future, the four processing centers and the integrated program office have time to continue to develop, define, and implement their plans to address key data management challenges. Each of the processing centers is planning activities to build its capacity to handle increased volumes of data, but more can be done to coordinate and define these plans, including sharing information on what is needed in order for the centers to use particular weather products and developing a plan to address these scientific issues. Unless more is done to coordinate and define these plans, the centers risk delays in using NPOESS data in operational weather products and forecasts. This concludes my statement. I would be pleased to respond to any questions that you or other members of the Subcommittee may have at this time. If you have any questions regarding this testimony, please contact Linda Koontz at (202) 512-6240 or by e-mail at koontzl@gao.gov. Individuals making key contributions to this testimony include Ronald Famous, Richard Hung, Tammi Nguyen, Colleen Phillips, Angela Watson, and Glenda Wright. The objectives of our review were to (1) provide an overview of our nation's polar-orbiting weather satellite program, (2) identify plans for the NPOESS program, and (3) identify key challenges in managing future NPOESS data volumes and the four processing centers' plans to address them.
To provide an overview of the nation's polar-orbiting weather satellite system, we reviewed NOAA and DOD documents and Web sites that describe the purpose and origin of the polar satellite program and the current POES and DMSP satellites' supporting infrastructures. We assessed the polar satellite infrastructure to understand the relationships among the satellites, ground control stations, and satellite data processing centers. We also reviewed documents and interviewed officials at the integrated program office and four processing centers to identify the processes for transforming raw satellite data into derived weather products and weather prediction models. To identify plans for the NPOESS program, we obtained and reviewed documents that describe the program's origin and purpose, and interviewed integrated program office officials to determine the program's background, status, and plans. We assessed the NPOESS acquisition strategy and program risk reduction efforts to understand how the program office plans to manage the acquisition and mitigate the risks to successful NPOESS implementation. We reviewed descriptions of each of the NPOESS sensors and assessed NPOESS program requirement documents to determine the types of products that NPOESS will produce and how these products will be used. To assess NPOESS data management challenges, we reviewed documents from the program office and the four processing centers and discussed challenges with DOD and NOAA officials. We assessed descriptions of each center's current and planned polar satellite infrastructure to identify plans for infrastructure growth. We also identified each processing center's views on which NPOESS products will require infrastructure changes or scientific advancements in order to be used. We analyzed this information to determine whether the centers face challenges in their ability to process NPOESS data and in their scientific capability to assimilate NPOESS data into their weather prediction models. We reviewed documents that describe NOAA, DOD, and integrated program office efforts to address the challenges that we identified, and we evaluated current and planned efforts to address those challenges. We interviewed program office and processing center officials to discuss these documents and their plans to address NPOESS data management challenges. We obtained comments from NOAA and DOD officials on the facts contained in this statement. These officials generally agreed with the facts as presented and provided some technical corrections, which we have incorporated. We performed our work at the NPOESS Integrated Program Office, located at NOAA headquarters in Silver Spring, Maryland; the NESDIS Central Satellite Data Processing Center in Suitland, Maryland; the NCEP Environmental Modeling Center in Camp Springs, Maryland; the Air Force Weather Agency at Offutt Air Force Base in Omaha, Nebraska; the Fleet Numerical Meteorology and Oceanography Center in Monterey, California; and the Naval Oceanographic Office at Stennis Space Center in Bay St. Louis, Mississippi. Our work was performed between October 2001 and July 2002 in accordance with generally accepted government auditing standards.
This testimony discusses the planned National Polar-orbiting Operational Environmental Satellite System (NPOESS). Today's polar-orbiting environmental satellite program is a complex infrastructure encompassing two satellite systems, supporting ground stations, and four central data processing centers that provide general weather information and specialized environmental products to a variety of users, including weather forecasters, the military, and the public. NPOESS will merge the two satellite systems into a single state-of-the-art environmental monitoring satellite system, at significant cost savings, and is expected to provide far greater volumes of satellite data than the current systems. To handle this increased volume of satellite data, the four processing centers will need to build up their respective infrastructures, and they will need to efficiently incorporate the new data into their weather products and models.
The federal real property environment has many stakeholders and involves a vast and diverse portfolio of assets that are used for a wide variety of missions. Real property is generally defined as facilities; land; and anything constructed on, growing on, or attached to land. According to its fiscal year 2003 financial statements, the federal government currently owns billions of dollars in real property assets. The Department of Defense (DOD), the U.S. Postal Service (USPS), the General Services Administration (GSA), and the Department of Veterans Affairs (VA) hold the majority of the owned facility space. Federal real property managers operate in a complex and dynamic environment. Numerous laws and regulations govern the acquisition, management, and disposal of federal real property. The Federal Property and Administrative Services Act of 1949, as amended (Property Act), and the Public Buildings Act of 1959, as amended, are the laws that generally apply to real property, and GSA is responsible for the acts' implementation. Agencies are subject to these acts unless they are specifically exempted from them, and some agencies may also have their own statutory authority related to real property. Agencies must also comply with numerous other laws related to real property. Despite significant changes in the size and mission needs of the federal government in recent years, the federal portfolio of real property assets in many ways still largely reflects the business model and technological environment of the 1950s and faces serious security challenges. In the last decade alone, the federal government has reduced its workforce by several hundred thousand personnel, and several federal agencies have had major mission changes. With these personnel reductions and mission changes, the need for existing space, including general-purpose office space, has declined overall, while the need for different kinds of space has grown. At the same time, technological advances have changed workplace needs, and many of the older buildings are not configured to accommodate new technologies. The advent of electronic government is starting to change how the public interacts with the federal government. These changes will have significant implications for the type and location of property needed in the 21st century. Furthermore, changes in the overall domestic security environment have presented an additional range of challenges to real property management that must be addressed. One reason the government has many unneeded assets is that some of the major real property-holding agencies have undergone significant mission shifts that have affected their real property needs. For example, after the Cold War, DOD's force structure was reduced by 36 percent. Despite several rounds of base closures, DOD projects that it still has considerably more property than it needs. The National Defense Authorization Act for Fiscal Year 2002 gave DOD the authority for another round of base realignments and military installation closures in 2005. In addition, various factors may significantly reduce the need for real property held by USPS. These factors include new technologies, additional delivery options, and the opportunity for greater use of partnerships and retail co-location arrangements. A July 2003 Presidential Commission report on USPS stated, among other things, that USPS had vacant and underutilized facilities that had little, if any, value to the modern-day delivery of the nation's mail.
In April 2005, we reported that USPS faces future financial challenges due to its declining First-Class Mail volume and has excess capacity in its current infrastructure that impedes efficiency gains. USPS has stated that one way to increase efficiency is to realign its processing and distribution infrastructure. In the mid-1990s, VA began shifting its role from being a traditional hospital-based provider of medical services to an integrated delivery system that emphasizes a full continuum of care, with a significant shift from inpatient to outpatient services. Subsequently, VA has struggled to reduce its large inventory of buildings, many of which are underutilized or vacant. The magnitude of the problem with underutilized or excess federal property puts the government at significant risk of wasting taxpayers' money and missing opportunities. First, underutilized or excess property is costly to maintain. DOD estimates that it is spending $3 billion to $4 billion each year maintaining facilities that are not needed. It is likely that other agencies that continue to hold excess or underutilized property are also incurring significant costs for staff time spent managing the properties and on maintenance, utilities, security, and other building needs. Second, in addition to day-to-day operational costs, holding these properties has opportunity costs for the government, because these buildings and land could be put to more cost-beneficial uses, exchanged for other needed property, or sold to generate revenue for the government. Finally, continuing to hold property that is unneeded does not present a positive image of the federal government in local communities. Instead, it presents an image of waste and inefficiency that erodes taxpayers' confidence in government. It also can have a negative impact on local economies if the property occupies a valuable location and is not used for other purposes, sold, redeveloped, or used in a public-private partnership. Restoration, repair, and maintenance backlogs in federal facilities are significant and reflect the federal government's ineffective stewardship over its valuable and historic portfolio of real property assets. The state of deterioration is alarming because of the magnitude of the repair backlog; current estimates show that tens of billions of dollars will be needed to restore these assets and make them fully functional. This problem has accelerated in recent years because much of the federal portfolio was constructed over 50 years ago, and these assets are reaching the end of their useful lives. As with the problems related to underutilized or excess property, the challenges of addressing facility deterioration are also prevalent at major real property-holding agencies. In recent discussions, a GSA official said that its $5.7 billion backlog, which we reported in 2003, has grown to between $6 billion and $7 billion. In recognition of the importance of addressing deferred maintenance, federal accounting standards require agencies to report deferred maintenance as supplementary information in their financial statements. As of September 30, 2004, the government's consolidated financial statements showed a deferred maintenance cost range of $13.4 billion to $25.3 billion for the asset category General Property, Plant, and Equipment, which includes federal real property. Over the last decade, DOD reports that it has been faced with the major challenge of adequately maintaining its facilities to meet its mission requirements.
In February 2003, we reported that although the amount of money the active forces have spent on facility maintenance had increased recently, DOD and service officials said that these amounts had not been sufficient to halt the deterioration of facilities. The funding shortfall is aggravated by DOD's acknowledged retention of facilities in excess of its needs. Our work over the years has shown that the deterioration problem leads to increased operational costs, has worrisome health and safety implications, and can compromise agency missions. In addition, we have reported that the ultimate cost of completing delayed repairs and alterations may escalate because of inflation and increases in the severity of the problems caused by the delays. As discussed above, the overall cost could also be reduced by government realignment. That is, to the extent that unneeded property is also in need of repair, disposing of such property could reduce the repair backlog. Another negative effect, which is not readily apparent but nonetheless significant, is the effect that deteriorating facilities have on employee recruitment, retention, and productivity. This human capital element is troublesome because the government is often at a disadvantage in its ability to compete in the job market in terms of the salaries agencies are able to offer. Poor physical work environments exacerbate this problem and can have a negative impact on potential employees' decisions to take federal positions. Furthermore, research has shown that quality work environments make employees more productive and improve morale. Finally, as with excess or underutilized property, deteriorated property presents a negative image of the federal government to the public. This is particularly true when many of the assets the public uses and visits the most—such as those at national parks and museums—are not well maintained or are in generally poor condition. As we reported in October 2003, in addition to the difficulties with excess and deteriorated property, the federal government faces other long-standing real property-related problems. For example, there is a lack of reliable and useful real property data that are needed for strategic decision-making. In April 2002, we reported that the government's only central source of descriptive data on the makeup of the real property inventory, GSA's worldwide inventory database and related real property reports, contained data that were unreliable and of limited usefulness. GSA agreed with our findings and has revamped this database and produced a new report on the federal inventory; we have not evaluated GSA's revamped database and related report. In addition to the problems with the worldwide inventory, in February 2005, we reported that, as in the 7 previous fiscal years, certain material weaknesses in internal control and in selected accounting and financial reporting practices resulted in conditions that continued to prevent us from being able to provide an opinion as to whether the consolidated financial statements of the U.S. government were fairly stated in conformity with U.S. generally accepted accounting principles. We have reported that because the government lacked complete and reliable information to support asset holdings—including real property—it could not satisfactorily determine that all assets were included in the financial statements, verify that certain reported assets actually existed, or substantiate the amounts at which they were valued.
In addition to problems with unreliable real property data, the government continues to rely on costly leasing for much of its space needs. As a general rule, building ownership options through construction or purchase are the least expensive ways to meet agencies' long-term and recurring requirements for space. Lease-purchase arrangements—under which payments are spread over time and ownership of the asset is eventually transferred to the government—are generally less costly than ordinary operating leases for meeting long-term space needs. However, over the last decade, we have reported that GSA—as the central leasing agent for most agencies—relies heavily on operating leases to meet new long-term needs because it lacks funds to pursue ownership. Operating leases have become an attractive option in part because they generally look cheaper in any given year, even though they are generally more costly over time. Budget scorekeeping rules allow these costly operating leases to look cheaper in the short term and have encouraged an overreliance on them for satisfying long-term space needs. Finding a solution to this problem has been difficult; however, change is needed because the current practice of relying on costly leasing to meet long-term space needs results in excessive costs to taxpayers and does not reflect a sensible or economically rational approach to capital asset management.
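A simplified present-value comparison shows why an operating lease can look cheaper in any single budget year yet cost more over time. All dollar figures below are hypothetical and chosen only to show the mechanics; they are not drawn from GSA or GAO analyses.

```python
# Hypothetical 30-year comparison of owning versus leasing the same space.
# Illustrative numbers only; not actual GSA or GAO figures.
YEARS = 30
DISCOUNT_RATE = 0.05

construction_cost = 100.0  # one-time, up-front cost of ownership ($ millions)
annual_ownership_om = 2.0  # owner's operations and maintenance ($ millions/year)
annual_lease = 9.0         # full-service operating lease payment ($ millions/year)

def present_value(payment: float, rate: float, years: int) -> float:
    """Present value of a level annual payment stream."""
    return sum(payment / (1.0 + rate) ** t for t in range(1, years + 1))

pv_own = construction_cost + present_value(annual_ownership_om, DISCOUNT_RATE, YEARS)
pv_lease = present_value(annual_lease, DISCOUNT_RATE, YEARS)

# In year 1 the lease looks far cheaper; over 30 years, ownership costs less.
print(f"Year-1 outlay: own {construction_cost + annual_ownership_om:.0f} vs "
      f"lease {annual_lease:.0f} ($ millions)")
print(f"30-year present value: own {pv_own:.0f} vs lease {pv_lease:.0f} ($ millions)")
```

Under these assumed numbers, ownership requires about $102 million in the first year against $9 million for the lease, yet over 30 years it costs roughly $131 million in present-value terms versus about $138 million for leasing, which mirrors the scorekeeping problem described above.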
According to these officials, all of the senior real property officers are in place, and the Council has been working to identify common data elements and performance measures to be captured by agencies and ultimately reported to a governmentwide database. In addition, OMB officials reported that agencies are working on their asset management plans. Plans for DOD, VA, Energy, and GSA have been completed and approved by OMB. The Council has also developed guiding principles for real property asset management. These guiding principles state that real property asset management must, among other things, support agency missions and strategic goals, use public and commercial benchmarks and best practices, employ life-cycle cost-benefit analysis, promote full and appropriate utilization, and dispose of unneeded assets. In addition to these reform efforts, Public Law 108-447 gave GSA the authority to retain the net proceeds from the disposal of federal property for fiscal year 2005 and to use such proceeds for GSA's real property capital needs. Also, Public Law 108-422 established a capital asset fund and gave VA the authority to retain the proceeds from the disposal of its real property for certain capital asset needs, such as demolition, environmental cleanup, repairs, and maintenance, to the extent specified in appropriations acts. Moreover, agencies such as DOD and VA have made progress in addressing long-standing federal real property problems, and governmentwide efforts in the facility protection area are progressing. For example, VA has established a process called Capital Asset Realignment for Enhanced Services (CARES) to address its aging and obsolete portfolio of health care facilities. In March 2005, we reported that through CARES, VA identified 136 locations for evaluation of alternative ways to align inpatient services—99 facilities had potential duplication of services with another nearby facility or low acute patient workload. VA made decisions to realign inpatient health care services at 30 of these locations. For example, it will close all inpatient services at 5 facilities. VA's decisions on inpatient alignment and plans for further study of its capital asset needs are tangible steps toward improving management of its capital assets and enhancing health care. Accomplishing its goals, however, will depend on VA's success in completing its evaluations and implementing its CARES decisions to ensure that resources now spent on unneeded capital assets are redirected to health care. In DOD's support infrastructure management area, which we identified as high-risk in 1997, DOD has made progress and expects to continue making improvements. In May 2005, we testified that DOD implemented the recommendations from the previous BRAC rounds within the 6-year period mandated by law. As a result, DOD estimated that it reduced its domestic infrastructure by about 20 percent, as measured by the cost to replace the property; about 90 percent of unneeded BRAC property is now available for reuse. Substantial net savings of approximately $29 billion have been realized over time. DOD's expectations for the 2005 BRAC round include further eliminating unneeded infrastructure and achieving savings. It also expects to use BRAC to further transformation and related efforts, such as restationing troops from overseas and furthering joint basing among the military services.
The results of the 2005 BRAC round will be known later this year, once the legislatively mandated Defense Base Closure and Realignment Commission completes its work and its recommendations are considered by the President and the Congress. In light of the need to invest in facility protection since September 11, 2001, funding available for repair and restoration and for preparing excess property for disposal may be further constrained. The Interagency Security Committee (ISC), which is chaired by the Department of Homeland Security (DHS), is tasked with coordinating federal agencies' facility protection efforts, developing standards, and overseeing implementation. In November 2004, we reported that ISC had made progress in coordinating the government's facility protection efforts by, for example, developing security standards for leased space and design criteria for security in new construction projects. Despite this progress, we found that its actions to ensure compliance with security standards and oversee implementation have been limited. Nonetheless, the ISC serves as a forum for addressing security issues, which can have an impact on agencies' efforts to improve real property management. The inclusion of real property asset management on the President's Management Agenda, the executive order, and agencies' actions are clearly positive steps in an area that had been neglected for many years. However, despite the increased focus on real property issues in recent years, the underlying conditions—such as excess and deteriorating properties and costly leasing—continue to exist, and more needs to be done to address the obstacles that led to our high-risk designation. For example, the problems have been exacerbated by competing stakeholder interests in real property decisions, various legal and budget-related disincentives to businesslike outcomes, and the need for better capital planning among real property-holding agencies. Competing Stakeholder Interests - In addition to Congress, OMB, and the real property-holding agencies themselves, several other stakeholders have an interest in how the federal government carries out its real property acquisition, management, and disposal practices. These include foreign and local governments; business interests in the communities where the assets are located; private sector construction and leasing firms; historic preservation organizations; various advocacy groups; and the public in general, which often views the facilities as the physical face of the federal government in local communities. As a result of competing stakeholder interests, decisions about real property often do not reflect the most cost-effective or efficient alternative that is in the interest of the agency or the government as a whole but instead reflect other priorities. Legal and Budgetary Disincentives - The complex legal and budgetary environment in which real property managers operate has a significant impact on real property decisionmaking and often does not lead to economically rational and businesslike outcomes. For example, we have reported that public-private partnerships might be a viable option for redeveloping obsolete federal property when they provide the best economic value for the government, compared with other options, such as federal financing through appropriations or sale of the property. Resource limitations, in general, often prevent agencies from addressing real property needs from a strategic portfolio perspective.
When available funds for capital investment are limited, Congress should weigh the need for new, modern facilities against the need for renovation, maintenance, and disposal of existing facilities, the latter of which often gets deferred. In the disposal area, a range of laws intended to address other objectives—such as laws related to historic preservation and environmental remediation—makes it challenging for agencies to dispose of unneeded property. Need for Improved Capital Planning - Over the years, we have reported that prudent capital planning can help agencies make the most of limited resources and that failure to make timely and effective capital acquisitions can result in increased long-term costs. GAO, Congress, and OMB have identified the need to improve federal decisionmaking regarding capital investment. Our Executive Guide, OMB's Capital Programming Guide, and OMB's revisions to Circular A-11 have attempted to provide guidance to agencies for making capital investment decisions. However, agencies are not required to use the guidance. Furthermore, agencies have not always developed overall goals and strategies for implementing capital investment decisions, nor has the federal government generally planned or budgeted for capital assets over the long term. As you know, GSA is required by law to charge agencies for renting space in federal office buildings, courthouses, and other assets GSA owns. The rental receipts are deposited into the Federal Buildings Fund (FBF), a revolving fund used to finance GSA real property services, including space acquisition and asset management for federal facilities that are under GSA's control. Over the years, there have been various efforts to restrict agencies' rent payments to GSA or to exempt agencies from paying rent for some or all of their space. Such restrictions, however, can have a negative impact on the government's ability to reinvest in its portfolio. Currently, the federal judiciary is seeking such an exemption. This is an important issue because granting the exemption would set a precedent with significant governmentwide implications. More specifically, GSA has historically been unable to generate sufficient revenue through FBF and has thus struggled to meet the repair and alteration requirements identified in its inventory of owned buildings. We reported in 2003 that the estimated backlog of repairs had reached $5.7 billion and that the consequences included poor health and safety conditions, higher operating costs, restricted capacity for modern information technology, and continued structural deterioration. Restrictions imposed on the rent GSA could charge federal agencies compounded the agency's inability to address its backlog in the past. Consequently, we recommended in 1989 that Congress remove all rent restrictions and not mandate any further restrictions, and most rent restrictions have since been lifted. The GSA Administrator has the authority to grant rent exemptions, and all of the current exemptions are limited to single buildings or were granted for a limited duration. Together, these current exemptions represent about $170 million, roughly a third of the $483 million permanent exemption the judiciary is requesting from GSA. The judiciary has requested the exemption, equal to about half of its annual rent payment, because of budget problems that it believes its growing rent payments have caused. GSA data show that one reason the judiciary's rent is increasing is that the space it occupies is also increasing.
We are currently studying the potential impact of such an exemption on FBF; however, our past work shows that rent exemptions were a principal reason why FBF accumulated insufficient money for capital investment. The magnitude of real property-related problems and the complexity of the underlying factors that cause them to persist put the federal government at significant risk in this area. Real property problems related to unneeded property and the need for realignment, deteriorating conditions, unreliable data, costly space, and security concerns have multibillion-dollar cost implications and can seriously jeopardize mission accomplishment. Because of the breadth and complexity of the issues involved, the long-standing nature of the problems, and the intense debate about potential solutions that will likely ensue, current structures and processes may not be adequate to address the problems. In addition, a governmentwide perspective on the extent of excess or underutilized space, deferred maintenance, and the costs of real property would improve transparency. That is, all stakeholders would understand the scope of the problem, and the government as a whole could better manage its real property. Given this, we concluded in our high-risk report and in our January 2005 update, and still believe, that a comprehensive and integrated transformation strategy for federal real property is needed. Such a strategy could build upon the executive order by providing decisionmakers with a road map for addressing the underlying obstacles, assessing progress governmentwide, and enhancing accountability for related actions. Based on input from agencies, the private sector, and other interested groups, the strategy could comprehensively address these long-standing problems with specific proposals on how best to realign the federal infrastructure and dispose of unneeded property, taking into account mission requirements, changes in technology, security needs, costs, and how the government conducts business in the 21st century; address the significant repair and restoration needs of the federal portfolio; ensure that reliable governmentwide and agency-specific real property data—both financial and program related—are available for informed decisionmaking; resolve the problem of heavy reliance on costly leasing; and consider the impact that the threat of terrorism will have on real property needs and challenges, including how to balance public access with safety.
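On the leasing point above, a simple present-value comparison helps show why operating leases can look cheaper in any single budget year yet cost more over the life of a long-term space need. The sketch below is illustrative only; the construction cost, lease rate, operating costs, and discount rate are hypothetical assumptions, not figures from our work.

```python
# Illustrative lease-versus-ownership comparison for a 30-year space need.
# All dollar figures (in millions) and rates are hypothetical assumptions.

def present_value(cash_flows, rate):
    """Discount a list of annual cash flows (year 0 first) to present value."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

YEARS = 30
RATE = 0.05               # assumed government discount rate
CONSTRUCTION = 100.0      # assumed up-front cost to build and own
OWNER_OPS = 2.0           # assumed annual operations and maintenance if owned
LEASE = 9.0               # assumed annual operating lease payment

own = [CONSTRUCTION] + [OWNER_OPS] * YEARS   # build in year 0, then pay O&M
lease = [0.0] + [LEASE] * YEARS              # pay rent every year instead

print(f"Ownership, present value: ${present_value(own, RATE):.0f}M")   # ~131
print(f"Leasing, present value:   ${present_value(lease, RATE):.0f}M") # ~138
# Year 1 budget outlay: $9M to lease versus $100M to build, which is why
# scorekeeping that focuses on single-year costs favors the lease even
# though the lease stream costs more in present-value terms.
```

Under these assumptions, ownership is cheaper over the full horizon even though leasing dominates any single early year, which mirrors the scorekeeping incentive described above.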
To be effective in addressing these problems, it would be important for the strategy to focus on minimizing the negative effects associated with competing stakeholder interests in real property decisionmaking; providing agencies with appropriate tools and incentives that will facilitate businesslike decisions—for example, consideration should be given to what financing options should be available; whether agencies should keep some of the disposal proceeds to recoup the costs of preparing properties for disposal; what process would permit comparisons between rehabilitation/renovation and replacement and among construction, purchase, lease-purchase, and operating lease; and how public-private partnerships should be evaluated; addressing federal human capital issues related to real property by recognizing that real property conditions affect the federal government's ability to attract and retain high-performing individuals and the productivity and morale of employees; improving real property capital planning in the federal government by helping agencies to better integrate agency mission considerations into the capital decision-making process, make businesslike decisions when evaluating and selecting capital assets, evaluate and select capital assets by using an investment approach, evaluate results on an ongoing basis, and develop long-term capital plans; and ensuring credible, rational, long-term budget planning for facility sustainment, modernization, or recapitalization. The transformation strategy should also reflect the lessons learned and leading practices of organizations in the public and private sectors that have attempted to reform their real property practices. Over the past decade, leading organizations in both the public and private sectors have been recognizing the impact that real property decisions have on their overall success. For example, we at GAO are currently leasing space to the U.S. Army Corps of Engineers to better utilize our space, generate revenue, and reduce the Corps' need to lease space from the private sector. The revenue we receive provides us with an incentive to efficiently manage our space. Better managing real property assets in the current environment calls for a significant departure from the traditional way of doing business. Solutions should not only correct the long-standing problems we have identified but also be responsive to and supportive of agencies' changing missions, security concerns, and technological needs in the 21st century. If actions resulting from the transformation strategy comprehensively address the problems and are effectively implemented, agencies will be better positioned to recover asset values, reduce operating costs, improve facility conditions, enhance safety and security, recruit and retain employees, and achieve mission effectiveness. In addition to developing a transformation strategy, it is critical that all the key stakeholders in government—Congress, OMB, and real property-holding agencies—continue to work diligently on the efforts planned and already under way that are intended to promote better real property capital decisionmaking, such as enacting reform legislation, assessing infrastructure and human capital needs, and examining viable funding options.
Congress and the administration could continue to work together to develop and enact additional reform legislation to give real property-holding agencies the tools they need to achieve better outcomes, foster a more businesslike real property environment, and provide for greater accountability for real property stewardship. These tools could include, where appropriate, the ability to retain a portion of the proceeds from disposal and the use of public-private partnerships in cases where they represent the best economic value to the government. Congress and the administration could also elevate the importance of real property in policy debates and recognize the impact that real property decisions have on agencies' missions. Regarding this Committee's draft legislation known as the "Federal Real Property Disposal Pilot Program and Management Improvement Act of 2005," we believe that the objectives of the legislation and several of its provisions have strong conceptual merit. For example, it would establish a pilot program for the expedited disposal of excess, surplus, or underutilized real property assets and would enact many of the requirements of Executive Order 13327 into law. In particular, pursuing the pilot program, as outlined in Title I, would allow for assessing lessons learned and help determine the merits of the program and whether it should continue. Furthermore, making the requirements of the executive order law, as outlined in Title II, would serve to elevate their importance and show that Congress and the administration are unified in pursuing real property reform. We would respectfully suggest that the Committee give consideration to including a requirement that a transformation strategy for federal real property be developed, as we have recommended. Solving the problems in this area will undeniably require a reconsideration of funding priorities at a time when budget constraints will be pervasive. Without effective incentives and tools; top management accountability, leadership, and commitment; adequate funding; full transparency with regard to the government's real property activities; and an effective system to measure results, long-standing real property problems will continue and likely worsen. However, the overall risk to the government and taxpayers could be substantially reduced if an effective transformation strategy is developed and successfully implemented, reforms are made, and property-holding agencies effectively implement current and planned initiatives. Since our high-risk report was issued, OMB has informed us that it is taking steps to address the federal government's problems in the real property area. Specifically, it has established a new Federal Real Property Council to address these long-standing issues. To assist OMB with its efforts, we have agreed to meet regularly to discuss progress and have provided OMB with specific suggestions on the types of actions and results that could be helpful in justifying the removal of real property from the high-risk list. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. For further information on this testimony, please contact Mark Goldstein on (202) 512-2834 or at goldsteinm@gao.gov. Key contributions to this testimony were made by Christine Bonham, Daniel Hoy, Anne Izod, Susan Michal-Smith, and David Sausville. This is a work of the U.S.
government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In January 2003, GAO designated federal real property as a high-risk area due to long-standing problems with excess and underutilized property, deteriorating facilities, unreliable real property data, and costly space challenges. Federal agencies were also facing many challenges protecting their facilities due to the threat of terrorism. This testimony discusses the problems with federal real property, particularly those relating to excess and deteriorating property, and what needs to be done to address them. The federal real property portfolio is vast and diverse--over 30 agencies control hundreds of thousands of real property assets worldwide, including facilities and land worth hundreds of billions of dollars. Unfortunately, many of these assets are no longer effectively aligned with, or responsive to, agencies' changing missions. Further, many assets are in an alarming state of deterioration; agencies have estimated restoration and repair needs to be in the tens of billions of dollars. Compounding these problems are the lack of reliable governmentwide data for strategic asset management, a heavy reliance on costly leasing, instead of ownership, to meet new needs, and the cost and challenge of protecting these assets against terrorism. In February 2004, the President added the Federal Asset Management Initiative to the President's Management Agenda and signed Executive Order 13327. The order requires senior real property officers at specified executive branch departments and agencies to, among other things, prioritize actions needed to improve the operational and financial management of the agency's real property inventory. A new Federal Real Property Council at OMB has developed guiding principles for real property asset management and is also developing performance measures, a real property inventory database, and an agency asset management planning process. In addition to these reform efforts, some agencies such as the Departments of Defense (DOD) and Veterans Affairs (VA) have made progress in addressing long-standing federal real property problems. For example, DOD is preparing for a round of base realignment and closures in 2005. Also, in May 2004, VA announced a wide range of asset realignment decisions. These and other efforts are positive steps, but it is too early to judge whether the administration's focus on this area will have a lasting impact. The underlying conditions and related obstacles that led to GAO's high-risk designation continue to exist. Remaining obstacles include competing stakeholder interests in real property decisions, various legal and budget-related disincentives to optimal, businesslike real property decisions, and the need for better capital planning among agencies.
The Army National Guard has approximately $38 billion worth of equipment assigned to its 54 separate state and territorial military commands. The equipment is used mostly during peacetime to train units so that they are prepared to reinforce or replace active force components during wartime. Equipment predominantly used for units' 2-week annual training is located at Mobilization and Training Equipment Sites (MATES). There are 24 MATES located throughout the United States, and almost half of them hold equipment that belongs to more than one unit. During fiscal years 1995 and 1996, the Guard spent over $756 million to maintain its equipment. However, this amount has not been enough to fund required scheduled maintenance and repairs on equipment that has deteriorated. As a result, the Guard had a maintenance backlog of 2.3 million labor hours as of September 1996. To help reduce this backlog, the Guard developed the Controlled Humidity Preservation Program. The goal of the program is to preserve up to 25 percent of the Guard's combat-ready ground equipment, including tanks, Bradley Fighting Vehicles, self-propelled howitzers, and recovery vehicles, in a controlled humidity environment for up to 5 years. The program will eliminate the need to perform scheduled and unscheduled maintenance on the preserved equipment, which will permit the Guard to concentrate its limited maintenance resources on the remaining equipment and gradually reduce the maintenance backlog. The Guard has selected 890 equipment items for preservation under the Controlled Humidity Preservation Program. Equipment preserved under the program will meet all technical and mission capability requirements and will be available when needed for mobilization or training rotation purposes. The Guard is testing program techniques at several locations throughout the United States and is focusing on equipment that has high maintenance costs and humidity-sensitive electronic components, such as the M1A1 tank. Preliminary test results have been positive. Even though final results are not anticipated until January 1998, the Guard is moving forward with implementation. Currently, 17 states have taken actions to implement controlled humidity techniques, and 16 more states plan to do so before the end of fiscal year 1997. Appendix I contains a more detailed discussion of the Controlled Humidity Preservation Program. Guard units generally do not share their equipment and use equipment from another unit only when they do not have sufficient quantities of their own to meet training needs. Our analysis of equipment usage at the Fort Stewart, Georgia, and Camp Shelby, Mississippi, MATES confirmed that the five units that train at these locations share very little equipment. However, it would be feasible for these units, as well as other units that use the same training site, to pool and share equipment. More than enough equipment is already located at these MATES to create a pool of equipment to meet unit training needs. The equipment not needed for the pool could be put in preserved storage. Further, the unit equipment located at the Fort Stewart and Camp Shelby MATES is predominantly used during the units' 2-week annual training period. Because units train at different times during the summer, this equipment could be made available to other units for use during their 2-week training period or put in preserved storage. In fact, more equipment than the Guard's 25-percent goal can be preserved.
National Guard Regulation 750-2 (Oct. 1, 1996) requires that units draw and train with their own equipment, if possible, during their annual 2-week training period. Units generally do not share their equipment with another unit and use equipment from another unit only when they do not have sufficient quantities of their own to meet training needs. Concerns about equipment sharing have been expressed because not all units have the same types of equipment and because personnel believe they need to train with their own equipment. Guard units train about 39 days each year, but the equipment located at MATES is used mostly during the 2-week annual training period, which is normally conducted during the summer months. For the remaining 50 weeks, the equipment is used little and generally sits outside, exposed to the elements. The Guard requires units to place 50 percent of selected equipment items, such as M1A1 Abrams tanks and Bradley Fighting Vehicles, at MATES, since such equipment is generally needed and used only when units conduct their 2-week annual training. Our analysis of equipment usage at the Fort Stewart and Camp Shelby MATES showed that the five units that train at these locations share very little equipment. For example, in 1996 the three brigades from Fort Stewart withdrew equipment 31 times, but in only 6 instances (19 percent) did any of the equipment belong to another brigade. At the Camp Shelby MATES, officials stated that units use only their own equipment during their annual 2-week training period and do not share equipment with other units. In addition, the tanks stored at the Fort Stewart, Camp Shelby, and Fort Hood, Texas, MATES are used very little. We analyzed the engine hour usage data for 246 M1A1 and M1IP tanks and found that engines were running an average of about 52 hours, or 6-1/2 days, per year (assuming an 8-hour training day). Establishing equipment pools and requiring units that train at the same site to share the minimum quantities of equipment needed for training would enable the Guard to preserve more equipment in controlled environments. Although concerns about equipment sharing have been expressed, Guard officials at the Army National Guard Bureau, state, MATES, and unit levels believe that units can share equipment and use the controlled humidity concept for preserving equipment. Generally, the officials agreed that units do not have to use their own equipment in training, that equipment can be shared to a greater extent, and that a paradigm change is necessary. According to officials, equipment ownership and sharing are leadership issues that can be managed. Further, according to the Guard's modernization plans, high-priority units will generally have the same equipment by the end of 1999. Several factors have a significant impact on determining the size of equipment pools. These factors include a unit's assigned personnel, the number of unit personnel who actually attend annual training with their unit, the quantities of equipment items drawn from a MATES compared with the quantities actually needed to accomplish training tasks, and annual training scheduling intervals. The number of individual equipment items required in a pool will vary by the type and size of the units training at a particular location. MATES officials stated that the size of the equipment pool would also need to reflect their ability to repair nonmission-capable (NMC) equipment after units complete training.
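To illustrate how these factors might interact, here is a minimal sketch of sizing a shared pool for a single equipment type. The 90-percent assigned and 70-percent attendance rates mirror the assumptions used in our analysis (described later in this report); the per-unit authorization, draw factor, and NMC rate are hypothetical placeholders, not figures from our work.

```python
# Illustrative sizing of a shared MATES equipment pool for one equipment
# type. Authorization, draw factor, and NMC rate are hypothetical.
import math

AUTHORIZED_PER_UNIT = 58    # hypothetical tank authorization per unit
ASSIGNED_RATE = 0.90        # share of authorized personnel assigned
ATTENDANCE_RATE = 0.70      # share of assigned personnel at annual training
DRAW_FACTOR = 0.5           # officials' view: about half of what is drawn suffices
NMC_RATE = 0.30             # hypothetical share returned nonmission capable
REPAIR_INTERVAL = True      # a 2-week gap lets the MATES fix most NMC items

crews = math.ceil(AUTHORIZED_PER_UNIT * ASSIGNED_RATE * ATTENDANCE_RATE)
needed = math.ceil(crews * DRAW_FACTOR)

# With no gap between unit rotations, the pool must carry spares to cover
# equipment that comes back nonmission capable.
spares = 0 if REPAIR_INTERVAL else math.ceil(needed * NMC_RATE)
pool = needed + spares

print(f"Crews attending: {crews}; shared pool size: {pool}")
# Everything a unit owns beyond the shared pool is a candidate for
# controlled humidity preservation.
```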
Equipment turnaround time and the amount of time a MATES has to fix NMC equipment between unit training periods are key to determining the equipment pool size and the quantity of equipment that can be placed in long-term preservation. With the exception of major problems, such as a blown engine on an M1A1 tank or equipment awaiting parts, most equipment items can be fixed and returned to the pool within 2 weeks, the officials said. Scheduling annual training with the greatest interval between unit training periods would allow MATES personnel more time to repair equipment for reissuance and thus allow greater equipment quantities to be preserved. Appendix II contains information on how we determined the size of the pools used in our analysis. Several MATES officials stated that unit commanders draw more equipment than they need for annual training. More equipment could be preserved if unit commanders would draw only the equipment quantities needed to accomplish training tasks. For example, officials at one MATES said unit commanders generally draw one M1A1 tank and one Bradley Fighting Vehicle for each of the tank and Bradley crews that show up for an annual training event. The officials said that the commanders wanted each crew to experience some driving time and therefore had extra tanks and Bradleys available so that training would not be delayed or interrupted because of maintenance. MATES officials understood this rationale but pointed out that a unit generally has only two training ranges available at any one time and that only two crews can train on a range. Therefore, a typical M1A1 tank or Bradley unit trying to qualify in gunnery operations can train only four crews at the same time. The officials believe that these units can achieve their training tasks with about one-half of the tanks and Bradley vehicles drawn from MATES and still have enough extra equipment in case of maintenance losses. A Bradley Fighting Vehicle battalion commander stated that his battalion could achieve training goals with about one-half the Bradleys drawn for annual training. The commander also stated that other commanders could achieve their training goals with the same amount of equipment but that this method of operating would require a change in the way training is currently done. MATES officials had other suggestions to reduce the amount of equipment needed to accomplish training goals and increase preservation of equipment. These suggestions include (1) minimizing home station assets, (2) improving maintenance operations in units by making maintenance a priority, (3) splitting annual training by having half of the brigade rotate in and out of annual training, and (4) scheduling training over a longer period of time to better utilize equipment availability at MATES. The Guard's training equipment is costly to maintain. In fact, the Guard spent over $756 million during fiscal years 1995 and 1996 to maintain equipment, but this amount was insufficient to perform all required maintenance. Our analysis of the nine equipment items showed that the Guard could avoid up to $10.3 million annually in maintenance costs if it preserved 25 percent of this equipment in a controlled humidity environment. Our analysis also showed that the Guard could avoid an additional $4.4 million to $9.7 million each year in maintenance costs if it required the three units that train at the Fort Stewart MATES and the two units that train at the Camp Shelby MATES to pool and share equipment.
The portion of each unit's training equipment that is not pooled could then be preserved. The cost avoidance we identified is the minimum that the Guard can achieve because many equipment items other than the ones used in our analysis could be pooled and shared. Also, our analysis included only eight Guard units, and additional maintenance costs could be avoided if other state and territorial Guard military commands pooled and shared training equipment. In fact, in May 1997, the U.S. Army Cost and Economic Analysis Center endorsed the Guard's Controlled Humidity Preservation Economic Analysis and stated that similar benefits were likely in the Army Reserves, the active component, and the other services. According to the economic analysis of the Controlled Humidity Preservation Program, the required scheduled maintenance for the 890 ground equipment items in the program would cost the Guard about $1.1 billion annually. Much of this required maintenance, however, is not funded, which has forced trade-off decisions. During fiscal years 1995 and 1996, the Guard spent over $756 million to maintain equipment. This amount was focused on maintaining priority equipment items rather than performing other required maintenance. Scheduled periodic maintenance accounts for much of the annual maintenance expense. For example, annual scheduled maintenance for one M1A1 Abrams tank costs $61,555 and takes 995 hours to complete. For the Guard's 472 M1A1 tanks, these figures translate to an annual expense of over $29 million and about 470,000 labor hours. The annual scheduled maintenance cost for the Guard's tracked vehicles alone is $363 million. The Guard anticipates that annual scheduled maintenance costs of $277 million could be deferred by placing 25 percent of the 890 equipment items in long-term preservation. However, more maintenance costs can be deferred than the Guard anticipates because additional equipment can be preserved. For example, if the Guard preserved 25 percent of the equipment used in our analysis, it could avoid up to $10.3 million annually in maintenance costs. If the Guard also established equipment pools and required units training at the same site to share this equipment, it could avoid $4.4 million to $9.7 million more each year. As a result, the Guard could preserve more equipment in controlled environments and avoid spending up to $20 million annually. More details on how we estimated the potential cost avoidance from pooling and sharing equipment are in appendix II. The additional cost avoidance would occur if the 48th and 218th Infantry Brigades and the 278th Armored Cavalry Regiment were to share the equipment they have located at the Fort Stewart MATES and the 155th Armor and 31st Armored Brigades were to share the equipment at the Camp Shelby MATES. The additional cost avoidance is attainable because the five units that conduct annual training at the Fort Stewart and Camp Shelby MATES train at different times during the summer. Therefore, a portion of each unit's training equipment could be pooled and designated as common use equipment, and the remaining equipment could be preserved in a controlled humidity environment. Units reporting for training would draw the necessary equipment to complete their 2-week training cycle from the pool of common use equipment. The equipment would then be returned to the pool and made ready for the next unit. Equipment could be rotated in and out of the pool to equalize use so that the equipment in the pool is not subjected to overuse.
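The arithmetic behind these fleet-level figures is simple to reproduce; the sketch below uses only the per-tank cost, per-tank hours, fleet size, and avoidance estimates cited above.

```python
# Scheduled-maintenance arithmetic using the figures cited above.
PER_TANK_COST = 61_555      # annual scheduled maintenance per M1A1 ($)
PER_TANK_HOURS = 995        # annual scheduled maintenance hours per M1A1
M1A1_FLEET = 472            # M1A1 tanks in the Guard inventory

fleet_cost = PER_TANK_COST * M1A1_FLEET     # $29,053,960, "over $29 million"
fleet_hours = PER_TANK_HOURS * M1A1_FLEET   # 469,640, "about 470,000 hours"

# Avoidance for the nine items in our analysis: $10.3M from preserving
# 25 percent, plus $4.4M to $9.7M from pooling and sharing.
low = 10.3 + 4.4    # $14.7 million per year
high = 10.3 + 9.7   # $20.0 million per year, the "up to $20 million" figure

print(f"M1A1 fleet: ${fleet_cost:,} and {fleet_hours:,} hours per year")
print(f"Combined avoidance range: ${low:.1f}M to ${high:.1f}M per year")
```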
Table 1 shows the incremental maintenance cost avoidance if the three units at the Fort Stewart MATES and the two units at the Camp Shelby MATES were to share the nine equipment items for training purposes and place their remaining equipment in long-term preservation. More details concerning NMC rates and training intervals are in appendix II. Additional maintenance costs could be avoided if unit commanders used only the minimum quantities of equipment needed for annual training. According to MATES officials, unit commanders draw whatever equipment quantities they deem necessary to accomplish annual training because they are not responsible for the maintenance costs of this equipment. If unit commanders used the minimum equipment required, the potential size of an equipment pool could be smaller, enabling more equipment to be preserved. Changing the annual training sites of some units to allow multiple units with like or comparable equipment to train at the same site would facilitate greater equipment sharing. If sharing were optimized, maximum maintenance cost avoidance could be achieved. Various scenarios exist to achieve optimum equipment sharing and training goals. Additional travel time, the cost of transporting equipment to another training site, and the impact of equipment density reductions on maintenance personnel requirements are concerns associated with changing annual training sites. We developed three scenarios to demonstrate how equipment sharing could result in the avoidance of greater maintenance costs. The scenarios we present may not reflect the optimum combinations of units and annual training sites to achieve the greatest benefits to the Guard. However, all three scenarios reflect greater potential benefits to the Guard than those that are presently being achieved or anticipated through the implementation of the Guard's Controlled Humidity Preservation Program. According to our analysis of nine equipment items and eight Guard units, the Guard could reduce scheduled annual maintenance costs by an additional $23.1 million to $39.2 million annually if as few as three units changed their annual training locations and shared equipment. These figures are $5.3 million to $18 million more than the Guard's current program could achieve. Our scenarios for changing annual training sites are detailed in figure 1. We recognize that the scenarios presented would require units to travel farther to train and therefore incur more transportation costs. Also, there would be one-time equipment relocation costs in each scenario. However, the annual maintenance cost avoidance to be achieved through sharing and preserving equipment is greater than these additional costs. For example, under scenario 2, transportation to annual training would cost approximately $4.2 million and equipment relocation would cost $888,000, for a total of $5.1 million. The minimum cost avoidance the Guard could achieve by pooling and sharing equipment under this scenario would be $25.6 million, as shown in table 3. Even though maintenance personnel requirements are based on the quantities of equipment located at MATES, changes in equipment quantities would be offset from one annual training site to another. We recognize the economic impact such changes would have, but the maintenance cost avoidance to be realized would be greater for the Guard as a whole.
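Netting these scenario 2 figures gives a rough sense of the payoff. The sketch below uses only the dollar amounts cited in this section and reads the transportation figure as a recurring annual cost and the relocation figure as one-time, consistent with the discussion above.

```python
# Net effect of scenario 2, using the figures cited above. Transportation
# is treated as recurring each year; relocation is a one-time cost.
ANNUAL_TRANSPORT = 4_200_000       # transportation to annual training
ONE_TIME_RELOCATION = 888_000      # one-time cost to move equipment
MIN_ANNUAL_AVOIDANCE = 25_600_000  # minimum annual maintenance avoidance

first_year = MIN_ANNUAL_AVOIDANCE - ANNUAL_TRANSPORT - ONE_TIME_RELOCATION
later_years = MIN_ANNUAL_AVOIDANCE - ANNUAL_TRANSPORT

print(f"First-year net avoidance: ${first_year/1e6:.1f}M")   # ~$20.5M
print(f"Recurring net avoidance:  ${later_years/1e6:.1f}M")  # ~$21.4M
```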
The first scenario involves seven Guard units and maximizes equipment sharing among the 48th and 218th Infantry Brigades and the 278th Armored Cavalry Regiment, which train at Fort Stewart, and the 155th Armor Brigade and 31st Armored Brigade, which train at Camp Shelby. The annual training site of the 256th Infantry Brigade is changed from Fort Polk to Fort Hood to maximize equipment sharing with the 49th Armored Division, which is located there. The 256th Infantry Brigade stores much of its equipment at the Fort Polk MATES, and its units accomplish their weekend training at Fort Polk. Fort Hood is the infantry brigade's mobilization training site, and the 49th Armored Division provides the opposing forces for the brigade's annual training. This scenario allows the Guard to preserve up to an additional 488 pieces of equipment over its current goal. Even though an estimated one-time cost of about $269,000 would be incurred to move equipment, an additional $5.3 million to $11.3 million in costs would be avoided annually, as shown in table 2. According to III Corps officials at Fort Hood, from a training and logistical support standpoint, Fort Hood can accommodate an additional brigade for annual training. Also, according to Fort Hood MATES officials, facilities are adequate to accommodate and maintain the equipment of another brigade-size unit. Louisiana State Area Command and 256th Infantry Brigade officials were not in favor of having the brigade change annual training sites. The concerns expressed by these officials primarily focused on the additional transportation and equipment movement costs and the potential loss of training time associated with changing the brigade's annual training site to Fort Hood. Brigade officials were also concerned about their units' inability to conduct weekend training, especially gunnery, at Fort Polk if 50 percent of their tanks and Bradleys were moved to Fort Hood and the Fort Polk MATES were to lose maintenance personnel. However, the officials recognized the benefits of pooling and sharing equipment. The second scenario involves seven Guard units and changes the annual training site of the 30th Infantry Brigade from Fort Bragg to Fort Stewart and the 278th Armored Cavalry Regiment's training site from Fort Stewart to Fort Hood. These changes allow for optimum equipment sharing among the 48th, 218th, and 30th Infantry Brigades at Fort Stewart; the 49th Armored Division and the 278th Armored Cavalry Regiment at Fort Hood; and the 155th Armor and 31st Armored Brigades at Camp Shelby. This scenario allows the Guard to preserve up to an additional 572 pieces of equipment over its current goal. Even though the Guard would incur a one-time transportation cost estimated at $888,000 to relocate equipment, the changes enhance sharing and preservation of equipment and achieve an annual maintenance cost avoidance ranging from $7.4 million to $15 million more than currently anticipated, as shown in table 3. In addition, this scenario provides the 278th Armored Cavalry Regiment with larger range facilities for its tanks, and the 30th Infantry Brigade would join two other infantry brigades at Fort Stewart that train with the same equipment. Officials from the North Carolina State Area Command, the 30th Infantry Brigade, and the Fort Stewart MATES indicated that changing the infantry brigade's annual training site from Fort Bragg to Fort Stewart would be feasible. The infantry brigade has previously trained at Fort Stewart and is scheduled to conduct annual training there in 1998.
Although both locations have similar maneuver areas, Fort Stewart has better gunnery ranges than Fort Bragg. According to the officials, Fort Bragg does not have the gunnery ranges to qualify tank and Bradley crews to the required proficiency level (gunnery table VIII). Fort Stewart MATES officials stated that it would be easier to support three infantry brigades than the current two infantry brigades and one armored cavalry regiment because the three brigades have the same types and quantities of equipment. Concerns were expressed over the increased annual training travel costs to Fort Stewart and the initial costs to move 50 percent of certain equipment from Fort Bragg to Fort Stewart. The Commander of the 30th Infantry Brigade said that all of the brigade's equipment was needed at Fort Bragg for weekend training requirements. The Commander thought that, without the equipment, the unit would not be able to train to standards and, as a result, unit readiness would suffer. Further, the Commander believed that retention would also suffer because personnel like to use the equipment currently available. In addition, the Fort Bragg MATES General Foreman was concerned about losing maintenance personnel if the equipment were moved to Fort Stewart because less equipment would be at Fort Bragg. The official suggested, as an alternative, preserving 50 percent of the equipment at Fort Bragg, which would save the movement costs and provide equipment needed for weekend training. The 30th Infantry Brigade could then use the equipment already located at Fort Stewart for annual training needs. Officials from Fort Hood, the 278th Armored Cavalry Regiment, and the Tennessee State Area Command stated that changing the regiment's annual training site to Fort Hood would be feasible. III Corps officials at Fort Hood stated that, from a training and logistics support standpoint, Fort Hood could accommodate the regiment for annual training. The Commander of the 278th Armored Cavalry Regiment pointed out that Fort Hood has excellent training ranges and MATES facilities. Officials raised concerns about the additional travel time to Fort Hood; however, the 278th Armored Cavalry Regiment has trained at Fort Hood in the past and would be amenable to training there in the future. The Commander also recognized that the regiment would have to move a portion of its equipment to Fort Hood to receive priority for range use. The third scenario involves eight Guard units and changes the annual training sites of three units. The 278th Armored Cavalry Regiment would train with the 49th Armored Division at Fort Hood, and the 256th Infantry Brigade would train with the 155th Armor and 31st Armored Brigades at Camp Shelby. As in scenario 2, the 30th Infantry Brigade would train at Fort Stewart with the 48th and 218th Infantry Brigades. The one-time transportation cost to relocate equipment under this scenario is estimated at $1,134,000. However, this scenario is the most beneficial in avoiding maintenance costs. The three annual training site changes would enhance sharing and preservation of equipment and result in an annual maintenance cost avoidance ranging from $11 million to $18 million more than currently anticipated, as shown in table 4. This scenario also shows the added benefits of having as many as three units training and sharing equipment at the same annual training site.
Officials from the 256th and 30th Infantry Brigades were concerned about the reduction in maintenance personnel that would be required at their respective MATES because of the changes in training locations. About 50 percent of each brigade’s tracked equipment would be moved to the new training locations. An official from the Guard’s Personnel Directorate confirmed that the amount of equipment determines maintenance personnel requirements and authorizations. However, the official also said that a loss in equipment at a MATES would not necessarily result in a loss of assigned personnel. The Guard develops personnel requirements to accomplish all of the work that Guard members in a particular state are required to do and prioritizes authorizations against those requirements. These requirements and authorizations, along with funds to support the authorizations, are allotted to the state. However, according to one Personnel Directorate official, the Army National Guard Bureau does not provide the adjutants general sufficient funds or authorizations to meet all the requirements, and as a result, they have flexibility within certain limits to use the authorizations and funds for those activities that are most needed to accomplish the state’s mission. Because maintenance personnel requirements at MATES are based on the amount of equipment, the Fort Polk and Fort Bragg MATES would lose personnel authorizations, but the adjutants general would ultimately decide whether the MATES would actually lose people. A Personnel Directorate official said that the Guard would probably offer affected personnel any unfilled positions elsewhere in those states or that it would allow attrition to occur to preclude personnel from losing their jobs. The requirements and authorizations would not be lost because the Guard redistributes requirements and authorizations every year. The authorizations lost by one state are gained by another. The Guard’s personnel system is expected to adjust to the movement of equipment with minimal confusion and turbulence. According to the personnel official, the Guard already makes such adjustments when a force structure change occurs. The Army National Guard’s Controlled Humidity Preservation Program can result in a more effective maintenance workforce, and the Guard should be commended for its work thus far. However, the Guard could avoid even greater maintenance costs and achieve greater workforce efficiencies if it developed a strategy to pool and share more equipment than the current 25-percent goal and changed the training sites of some units. The cost avoidance amounts presented in this report are substantial; however, they reflect the minimum amounts the Guard can avoid because many more equipment items can be pooled and shared and many other state and territorial Guard commands can pool and share equipment. 
To optimize the avoidance of annual equipment maintenance costs and achieve the resulting benefits of a more effective maintenance workforce and increased equipment availability for mobilization, we recommend that the Secretary of Defense direct the Director of the Army National Guard Bureau to develop and implement a strategy, along with the modernization of Guard units, to provide controlled humidity facilities at the training sites that will achieve the greatest cost avoidance benefit; incorporate the concept of equipment sharing as the way of doing business in the Guard; and change the annual training locations of Guard units where feasible to achieve maximum cost avoidance benefits through greater equipment sharing while still achieving training objectives. In written comments on a draft of this report, the Department of Defense concurred with our recommendation. The Department said that, based on the results of the Army National Guard's study (due in January 1998) and on our recommendations, the Army National Guard will develop and present its strategy and an implementation plan. To determine the feasibility of sharing equipment and changing the annual training sites for some units, we interviewed cognizant officials and obtained and analyzed documents from the Army National Guard in Washington, D.C.; U.S. Army Forces Command, Fort McPherson, Georgia; and state area commands in Georgia, Louisiana, Mississippi, North Carolina, South Carolina, Texas, and Tennessee. The units included in our review were the 48th Infantry Brigade, Georgia; 218th Infantry Brigade, South Carolina; 278th Armored Cavalry Regiment, Tennessee; 155th Armor Brigade, Mississippi; 256th Infantry Brigade, Louisiana; 30th Infantry Brigade, North Carolina; 49th Armored Division, Texas; and 31st Armored Brigade, Alabama. To determine the extent of equipment sharing and the likelihood that an additional unit could train at a MATES, we visited the MATES at Fort Stewart, Georgia; Fort Hood, Texas; and Camp Shelby, Mississippi. We chose these MATES because they store and maintain equipment and host annual training for multiple units. Camp Shelby is also a test site for the Controlled Humidity Preservation Program, and we observed equipment stored under controlled humidity conditions and discussed the status of the program with MATES officials. To show the impact of NMC equipment turned in to a MATES after annual training, we used NMC rates of 30 and 15 percent and assumed both no interval and a 2-week interval between annual training periods. In determining the quantities of equipment available for annual training, our analysis assumed that 90 percent of unit authorized personnel were assigned and that 70 percent of assigned personnel actually attended annual training with their units. We did not determine whether commanders could actually accomplish annual training tasks with less equipment than they requested. To determine the maintenance cost avoidance achieved through equipment sharing and preservation, we analyzed nine equipment items that have high annual scheduled maintenance costs. These items are the Abrams Combat Tank, Bradley Infantry Fighting Vehicle, Bradley Cavalry Fighting Vehicle, Self-Propelled Howitzer, Recovery Vehicle, Armored Vehicle Launch Bridge, Armored Fire Support Personnel Carrier, Armored Personnel Carrier, and Command Post Carrier.
We accepted the types and quantities of equipment that are authorized for the units included in our review as being needed to carry out the units' missions. Further, we did not set up our three scenarios in a way that would adversely affect the units' annual training objectives. To determine the cost to move equipment from one annual training site to another for those units in our analysis that could change annual training sites, we visited the Military Traffic Management Command, Arlington, Virginia, and obtained the transportation costs to move the equipment. We did not analyze the impacts that changing annual training sites would have on morale, the added travel time and transportation cost to another training site, or the actual maintenance personnel impacts associated with changing the amount of equipment at affected MATES. We conducted our review from May 1996 to September 1997 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Chairmen of the Senate and House Committees on Armed Services and the Senate Committee on Appropriations, the Secretary of the Army, and the Director of the Office of Management and Budget. Copies will also be made available to other interested parties on request. As you know, 31 U.S.C. 720 requires the head of a federal agency to submit a written statement on actions taken on our recommendations to the Senate Committee on Governmental Affairs and the House Committee on Government Operations not later than 60 days after the date of the report. A written statement must also be submitted to the Senate and House Committees on Appropriations with an agency's first request for appropriations made more than 60 days after the date of the report. Please contact me at (202) 512-5140 if you or your staff have any questions concerning this report. Major contributors to this report were Reginald L. Furr, Jr.; Dudley C. Roache, Jr.; Bradley D. Simpson; and Karen S. Blum. In fiscal year 1994, the Army National Guard began the Controlled Humidity Preservation (CHP) Program. The purpose of the CHP Program is to avoid the annual scheduled maintenance cost of 25 percent of 890 selected equipment items and to reduce the maintenance backlog throughout the Guard. Solutions under study include placing a portion of the Guard's vehicle fleet in either enclosed long-term preservation or dehumidified operational preservation. These techniques are projected to lengthen the service life of vehicle components. The program's objectives are to (1) reduce the number of labor hours used to maintain equipment, (2) reduce the quantity of repair parts used, (3) decrease the quantity of consumables used for periodic servicing of equipment, and (4) decrease Guard-wide operating tempo (OPTEMPO) costs. The program is not intended to eliminate the maintenance backlog. The concept of dehumidified preservation is not new; the use of this technique dates to the 1930s. According to a Logistics Management Institute study, dehumidified preservation of operational weapon systems has been used effectively abroad as a maintenance technology but has not been broadly implemented in the Department of Defense (DOD). Relative humidity is an expression of the moisture content of the air as a percentage of what the air can hold when saturated. The main problems caused by humidity are corrosion, mold, moisture regain, and condensation. Most materials absorb moisture in proportion to the relative humidity of the surrounding air.
Therefore, the greater the moisture in the air, the greater the absorption rate of materials. Moisture has particularly hazardous effects on military electronic, optical, communication, and fire control equipment. Moisture in optics and fire control components clouds the vision of the crew and damages electronics. Communication and computer systems are especially sensitive to moisture, and machine surfaces, such as the main gun recoil system, are susceptible to corrosion. Corrosion generally remains a significant problem unless relative humidity is reduced to less than 45 percent. The Logistics Management Institute study stated that moisture degradation of DOD weapon systems and equipment represents an important cost issue. Current costs are estimated to range between $3 billion and $12 billion annually. There are also numerous nonfinancial impacts, the most important being the reduced readiness and sustainability of DOD weapon systems and equipment. An approach to mitigating moisture damage is to control the relative humidity in the air. By extracting moisture from the air, the relative humidity can be reduced to a level at which damaging moisture cannot form. The CHP Program consists of three parts. The first part, long-term preservation, is the process of storing selected equipment in an enclosure and maintaining the internal environment's relative humidity at the optimal range of 30 to 40 percent. If the relative humidity is controlled, the optimal humidity range can be reached, and corrosion will cease. With the use of this process, the Guard can defer all scheduled maintenance for up to 5 years. This process has been extensively evaluated and is now widely applied by many nations as a maintenance technology for operational weapon systems. The Logistics Management Institute study stated that weapon systems and components can be dehumidified by using a mechanical dehumidifier to process moisture-laden air into properly dehumidified air with a desired level of relative humidity. This processed air is recirculated into and around the equipment or system being preserved. A highly efficient data acquisition and control system provides continuous monitoring and control of the long-term preservation program to evaluate and maintain the environment stabilization system, characterize the relative severity of the site environment, and confirm site compatibility with seasonal atmospheric changes. The system is designed to ensure that a stable, corrosion-free, low-humidity environment is sustained within each enclosure. The second part of the CHP Program, modified long-term preservation, is similar to long-term preservation except that equipment may be taken out of the CHP environment and used as required. Maintenance may be deferred while equipment is within the CHP environment, but maintenance requirements accrue for the period during which the equipment is removed. The third part of the program is operational preservation. Equipment is attached to central dehumidifiers and can be parked outside or within an enclosure. Dehumidified air is provided to the internal spaces of the equipment. No maintenance is deferred by this method; however, the dehumidification process reduces moisture-induced corrosion to the point that faults in electronic, optic, and fire control systems are substantially reduced. The equipment remains available for frequent training events but is connected to a dehumidification system during the intervening periods to dry the engine and crew compartments.
Equipment items put into long-term preservation are preserved at Technical Manual-10/-20 standards, thereby enhancing the combat readiness of Guard forces. The goal of the Guard is to put 25 percent of selected equipment into long-term preservation over a 5-year fielding schedule. The equipment will be placed in preservation for a minimum of 3 years and a maximum of 5 years. After this period, the equipment is to be put back into operation and replaced with similar equipment. In addition, the Guard anticipates that operational preservation will reduce faults in selected equipment by 30 percent, resulting in a significant reduction in unscheduled maintenance. The Guard is testing the various preservation treatments to validate the Guard's CHP concept and evaluate the physical benefits of different alternatives that control humidity on equipment that is sensitive to moisture. The test will measure the average maintenance labor hours and repair parts cost for selected equipment located at six different sites under six different treatment conditions. Three of the conditions (long-term, modified long-term, and operational preservation) use preservation treatments, and the other three do not. The test period is planned to last 1 year, and test results are expected in early 1998. According to the test plan, Camp Ripley, Minnesota, and Camp Shelby, Mississippi, are testing sites for long-term preservation. Modified long-term preservation is being tested at these sites and at the Western Kentucky Training Site and the Unit Training and Equipment Site in Oregon. Camp Ripley, Camp Shelby, Western Kentucky, and Fort Stewart, Georgia, are test sites for operational preservation. CALIBRE Systems, Incorporated, under contract to the Guard, performed an economic analysis of the test to validate the benefits of the CHP Program. The analysis compared CHP strategies to identify the strategy that provides the greatest overall benefit to the Guard. CALIBRE examined the following alternatives: (1) status quo, (2) long-term preservation, (3) operational preservation, and (4) a combination of long-term and operational preservation. Status quo, which is storing equipment in an ambient environment with minimal corrosion protection, is the method currently used by Mobilization and Training Equipment Sites (MATES). Long-term preservation encloses equipment inside a regulated humidity environment with relative humidity between 30 and 40 percent. Operational preservation achieves the same results as long-term preservation but on a more limited scale. When not in use, equipment is externally attached to a central dehumidification system but remains parked without external protection from the environment. The combination of long-term and operational preservation places a defined quantity of equipment in both environments. Operational preservation reduces corrosion-induced faults while the vehicles remain available for training; vehicles not required on a frequent or recurring basis for training are placed in long-term preservation. The analysis assumed that the usage of equipment not placed into long-term preservation would increase by no more than 10 percent. Guard CHP Program officials told us that this estimate was based on their visits to states and discussions with Guard personnel about training activities. The officials found that an average of about 65 percent of personnel attended annual training. Therefore, the 10-percent figure is likely an overestimate because much of the equipment is not currently being used for training.
Officials agreed that, if 25 percent of the equipment were placed in CHP, usage of the remaining equipment would not increase, and the 10-percent estimate would be adequate even if 40 percent of the equipment were placed in CHP. In fact, studies show that increased equipment usage actually decreased the need for repairs because the equipment was used and did not sit idle. The analysis concluded that all three preservation alternatives would provide benefits to offset implementation costs. The benefit-to-investment ratios for alternatives 2, 3, and 4 are 9.0, 7.6, and 8.9, respectively. All three alternatives have a break-even point of 1 year. The analysis recommended that the Guard implement alternative 4, the combination of long-term and operational preservation. The alternative of long-term preservation by itself provided a slightly larger benefit-to-investment ratio; however, that alternative would not provide the Guard with the greater flexibility of placing equipment into either long-term or operational preservation. Many states believe that the CHP Program will be beneficial in terms of avoided maintenance costs and increased equipment availability and readiness. Therefore, states are moving forward to implement the program, even though testing has not been completed. Kentucky and New York are 2 of 17 states with long-term preservation or operational preservation systems. Kentucky has 180,000 square feet of CHP space and about 84 tanks in operational preservation. New York has 96,000 square feet for long-term preservation and 120 vehicles in operational preservation. In fiscal year 1997, 16 more states will add CHP systems. Officials in each of the states we visited recognize the benefits of storing equipment using the CHP concept. They believe that CHP will avoid maintenance costs and improve equipment availability and readiness. Officials from Georgia and Tennessee stated that about one-third of the equipment at the Fort Stewart MATES could be put into CHP; Fort Stewart MATES officials agreed because the equipment is not needed for training. South Carolina officials also said that equipment not needed for training could be stored in CHP. Officials from North Carolina and Texas noted that with decreasing OPTEMPO funds, the Guard will be using equipment less, and CHP is a good technique for storing equipment not used for training. Officials at the Camp Shelby MATES, which is one of the CHP test sites, stated that they have seen a 30- to 40-percent reduction in electronic components needing repair because preservation has prevented corrosion on them. According to our analysis, Army National Guard units can preserve more than 25 percent of their equipment in controlled humidity environments if units at the same annual training site pool and share equipment. Further, changing the location where some units are annually trained could maximize the amount of equipment that can be preserved. For our analysis, we identified units (1) at the same training location that could pool and share equipment and (2) that could change their annual training sites to maximize the amount of equipment that could be stored in controlled environments. The units in our analysis included 6 of the 15 Separate Brigades, the 49th Armored Division, and the 31st Armored Brigade. The units and their current MATES are shown in table II.1.

Table II.1: Units in our analysis and their current MATES
48th Infantry Brigade (Mechanized), Georgia: Fort Stewart, Ga.
218th Infantry Brigade (Mechanized), South Carolina: Fort Stewart, Ga.
278th Armored Cavalry Regiment, Tennessee: Fort Stewart, Ga.
30th Infantry Brigade (Mechanized), North Carolina: Fort Bragg, N.C.
155th Armor Brigade, Mississippi: Camp Shelby, Miss.
256th Infantry Brigade (Mechanized), Louisiana: Fort Polk, La.
49th Armored Division, Texas: Fort Hood, Tex.
31st Armored Brigade, Alabama: Camp Shelby, Miss.

The nine tracked equipment items selected for our analysis are shown in table II.2. These items have high annual costs for scheduled maintenance. Except for the Armored Vehicle Launch Bridge, Guard units are required to put 50 percent of these items at a MATES to facilitate mobilization and use by units training at the MATES location. For these nine tracked equipment items, we determined the types and quantities of authorized equipment that the eight units in our analysis are scheduled to have on hand in fiscal year 1999 or funded through 2008. By 2008, all of the eight units are to have similar equipment, which will facilitate sharing among the units. Several of these units currently have these equipment items on hand, and pooling and sharing can begin after CHP facilities are in place. We developed four scenarios that offer the opportunity to maximize equipment sharing and preservation. These four scenarios involve eight units that train annually at Fort Stewart, Georgia; Fort Hood, Texas; or Camp Shelby, Mississippi. As shown in table II.3, the current scenario does not require any of the units to change annual training sites, but scenarios 1 through 3 require that up to three of the units change training sites. On the basis of the units' authorized equipment, we determined the amount of equipment for each of the nine items that would be needed if the units training at the same location pooled and shared their equipment. We assumed that the entire unit (i.e., brigade or division) went to annual training during a 2-week period; 90 percent of a unit's authorized personnel would be assigned; 70 percent of assigned personnel would actually attend annual training with the unit; and the unit would need an additional quantity of 5 percent to allow for equipment replacement in case some equipment broke down during the 2-week annual training period. We used the 90-percent figure for the amount of authorized personnel assigned based on discussions with Guard officials, statistics on assigned strength, and the fact that Guard units normally do not have 100 percent of their authorized strength. The 70-percent figure for annual training attendance is based on actual attendance statistics, a RAND study, and discussions with Guard officials. The 5-percent additional quantity is based on discussions with Guard maintenance officials. Because some unit-shared equipment would be turned in to MATES in a non-mission-capable (NMC) status at the end of the 2-week annual training period, we determined the amount of extra equipment that would be needed to have sufficient quantities of equipment on hand for the next unit to use for training. Because of the Guard's lack of historical information on the quantity of NMC equipment that is turned in to MATES, we asked MATES officials to provide us with an estimate. The average estimate for several equipment items from two MATES ranged from 12 to 16 percent. The average estimate from four MATES ranged from 21 to 36 percent. On the basis of these estimates, we chose to use rates of 15 and 30 percent. We also considered the capability of MATES personnel to repair this equipment in time for the next unit to use it for annual training. Lacking information on the capability of MATES to repair equipment, we assumed for each of the nine items used in our analysis that the MATES could repair no more than 10 of the items in a 2-week period.
We did not consider the impact of MATES maintenance personnel having to spend time issuing and receiving equipment from units that were training at the MATES rather than spending this time repairing equipment. In addition, we analyzed the effect on the quantity of equipment undergoing maintenance when units train consecutively and when there is a 2-week period between training periods. For each scenario, we calculated the quantities of the nine equipment items that would be placed in long-term preservation at each of the MATES in our analysis based on (1) the quantities currently located there and (2) 50 percent of the units' authorized equipment, which is required by Guard regulation to be located at MATES. We used the greater of these two quantities in our analysis as the quantity located at the MATES. The total quantity of equipment needed for training and the additional quantity needed to compensate for equipment undergoing maintenance determine the pool size needed at each of the three annual training sites. The difference in quantities between the equipment that is located at the MATES and the amount needed for the pool becomes available for CHP long-term preservation. Of that equipment, we allocated 25 percent to meet the Guard's 25-percent goal. The remaining quantity represents additional equipment that can be put into preservation based on sharing equipment and changing annual training sites. Analysis of each of the four scenarios shows that the Guard can place more than 25 percent of its equipment in long-term preservation by sharing unit equipment at annual training sites and changing some units' training sites. For each scenario, we determined the total quantity of the nine equipment items that can be placed into long-term preservation at the three training sites and the resulting maintenance cost avoidance. The quantities and cost avoidance are divided to show the results of the cost avoidance of the Guard's 25-percent goal and the additional cost avoidance resulting from increased sharing among units. The results for each scenario are based on units turning in (1) 30 percent of the nine equipment items in an NMC condition with no break between units coming to annual training, (2) 15 percent in an NMC condition with no break between units, (3) 30 percent in an NMC condition with a 2-week break between units, and (4) 15 percent in an NMC condition with a 2-week break between units. For the current scenario, we analyzed the five units that train annually at Fort Stewart and Camp Shelby. This scenario does not require any of the units to change their annual training site; therefore, the additional equipment and cost avoidance over the Guard's 25-percent goal that could be put into long-term preservation would result from greater sharing among the units. Tables II.4 through II.7 show the quantity of equipment that could be placed in long-term preservation using different assumptions and the resulting benefits. The total cost avoidance ranges from $14.6 million to $20 million, of which $4.4 million to $9.7 million is based on the benefits of having units pool and share equipment at annual training.
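To make the arithmetic concrete, the following is a minimal sketch, in Python, of the pool-and-preservation calculation described above. The 90-percent assignment rate, 70-percent attendance rate, 5-percent spare allowance, NMC turn-in rates, 10-item repair capacity, and 50-percent MATES stockage rule come from this appendix; the unit quantities and the simplified one-period maintenance float are hypothetical illustrations, not the report's actual model.

```python
# Minimal sketch of the equipment pool-and-share arithmetic described
# above. Rates and rules are the report's stated assumptions; the unit
# quantities below and the one-period maintenance float are hypothetical.
import math

ASSIGNED_RATE = 0.90    # 90 percent of authorized personnel assigned
ATTENDANCE_RATE = 0.70  # 70 percent of assigned personnel attend training
SPARE_RATE = 0.05       # 5 percent extra for breakdowns during training
REPAIR_CAPACITY = 10    # items a MATES can repair in a 2-week period

def training_need(authorized: int) -> int:
    """Items one unit needs for its 2-week annual training period."""
    attending = authorized * ASSIGNED_RATE * ATTENDANCE_RATE
    return math.ceil(attending * (1 + SPARE_RATE))

def site_pool(authorizations: list[int], nmc_rate: float) -> int:
    """Pool at one site: units train one at a time, so the pool covers
    the largest single unit's need plus a float for NMC turn-ins beyond
    what the MATES can repair before the next unit arrives."""
    need = max(training_need(a) for a in authorizations)
    backlog = max(0, math.ceil(need * nmc_rate) - REPAIR_CAPACITY)
    return need + backlog

def chp_available(on_hand: list[int], authorizations: list[int],
                  nmc_rate: float) -> int:
    """Items freed for controlled humidity long-term preservation."""
    # The MATES holds the greater of the equipment currently there and
    # the 50 percent of each unit's authorization required by regulation.
    stock = sum(max(h, math.ceil(0.50 * a))
                for h, a in zip(on_hand, authorizations))
    return max(0, stock - site_pool(authorizations, nmc_rate))

# Hypothetical site: three brigades, each authorized 44 of one item and
# each keeping 22 at the MATES, with a 30-percent NMC turn-in rate.
print(site_pool([44, 44, 44], 0.30))                     # prints 30
print(chp_available([22, 22, 22], [44, 44, 44], 0.30))   # prints 36
```

Under these hypothetical numbers, pooling frees 36 of the 66 items at the site for preservation, a share well above 25 percent; the report's four scenarios quantify this same pattern with the units' actual authorization data.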
For scenario 1, we analyzed seven units training at Fort Stewart, Fort Hood, and Camp Shelby. The 256th Infantry Brigade changes its annual training site to Fort Hood and shares equipment with the 49th Armored Division. Therefore, the additional equipment that could be put into long-term preservation would be a result of more units sharing equipment because of a change in training sites. Tables II.8 through II.11 show the equipment that could be placed in long-term preservation under different assumptions and the resulting benefits. The total cost avoidance ranges from $23.1 million to $30.9 million, of which $5.3 million to $11.3 million is based on the benefits of having units pool and share equipment at annual training and changing the 256th Infantry Brigade's annual training site to Fort Hood. For scenario 2, we analyzed seven units training at Fort Stewart, Fort Hood, and Camp Shelby. The 30th Infantry Brigade changes its training site to Fort Stewart and shares equipment with the 48th and the 218th Infantry Brigades. Thus, three infantry brigades will have the same types and quantities of equipment at Fort Stewart. The 278th Armored Cavalry Regiment also changes its training site from Fort Stewart to Fort Hood and shares equipment with the 49th Armored Division. Therefore, the additional equipment that could be put into long-term preservation would be a result of similar units sharing equipment because of a change in training sites. Tables II.12 through II.15 show the equipment that could be placed in long-term preservation under different assumptions and the resulting benefits. The total cost avoidance ranges from $25.6 million to $33.8 million, of which $7.4 million to $15 million is based on the benefits of having units pool and share equipment at annual training and changing the 30th Infantry Brigade's and 278th Armored Cavalry Regiment's annual training sites to Fort Stewart and Fort Hood, respectively. For scenario 3, we analyzed eight units training at Fort Stewart, Fort Hood, and Camp Shelby. The 30th Infantry Brigade changes its training site to Fort Stewart and shares equipment with the 48th and the 218th Infantry Brigades. Thus, as in scenario 2, three infantry brigades would have the same types and quantities of equipment at Fort Stewart. The 278th Armored Cavalry Regiment changes its training site from Fort Stewart to Fort Hood and shares equipment with the 49th Armored Division. The 256th Infantry Brigade also changes its annual training site to Camp Shelby and shares equipment with the 155th Armor Brigade and the 31st Armored Brigade. Therefore, the additional equipment that could be put into long-term preservation would be a result of similar units sharing equipment because of a change in training sites. Tables II.16 through II.19 show the equipment that could be placed in long-term preservation under different assumptions and the resulting benefits. The total cost avoidance ranges from $30.6 million to $39.2 million, of which $11 million to $18 million is based on the benefits of having units pool and share equipment at annual training and changing annual training sites for the 30th Infantry Brigade, 278th Armored Cavalry Regiment, and 256th Infantry Brigade to Fort Stewart, Fort Hood, and Camp Shelby, respectively.
GAO determined: (1) the feasibility of Army National Guard units that annually train at the same site pooling and sharing equipment; (2) the maintenance costs that the Guard would avoid by pooling and sharing equipment; and (3) ways the Guard can maximize equipment sharing at annual training sites. GAO found that: (1) according to GAO's analysis of nine equipment items with high annual scheduled maintenance costs and eight Guard units, it is feasible for units that annually train at the same site to pool and share equipment; (2) for the eight units GAO reviewed, more than enough equipment is already located at Mobilization and Training Equipment Sites to create a pool of equipment for unit training needs; (3) the equipment not needed for the pool could be preserved in a controlled humidity environment; (4) more equipment than the Guard's 25-percent goal can be preserved; (5) other than during the 2-week annual training period, the unit equipment located at some training sites is used little; (6) because units train at different times during the summer, this equipment could be made available to other units for use during their 2-week training period or put in preserved storage; (7) GAO's analysis showed that the Guard could avoid up to $10.3 million annually in maintenance costs if it preserved 25 percent of these items in a controlled humidity environment; (8) the Guard could avoid up to $20 million annually in maintenance costs if three units at one training site and two units at another training site pooled and shared their equipment and preserved their unused equipment; (9) the cost avoidance GAO identified is the minimum that the Guard can achieve because many equipment items other than the ones used in the GAO analysis could be pooled and shared; (10) additional maintenance costs could be avoided if other state and territorial Guard military commands pooled and shared training equipment; (11) changing the annual training site of as few as three units will maximize equipment sharing, cause more equipment to be available for preservation, and allow the Guard to more efficiently use scarce maintenance resources; (12) under this scenario, Guard units could place as much as 49 percent of their equipment in preserved storage and reduce maintenance costs by $38.1 million in the first year and $39.2 million each year thereafter, which is $18 million more than the $21.2 million cost avoidance using the Guard's 25-percent goal; and (13) although the Guard would incur additional facility costs to preserve more than 25 percent of its equipment, the benefits of avoiding annual maintenance costs for this equipment would more than offset the facility costs.
Since the September 11, 2001, attacks, the federal government, state and local governments, and a range of independent research organizations have agreed on the need for a coordinated intergovernmental approach for allocating the nation's resources to address the threat of terrorism and improve our security. The National Strategy for Homeland Security, released in 2002 following the proposal for DHS, emphasized a shared national responsibility for security involving close cooperation among all levels of government and acknowledged the complexity of developing a coordinated approach within our federal system of government and among a broad range of organizations and institutions involved in homeland security. The national strategy highlighted the challenge of developing complementary systems that avoid unintended duplication and increase collaboration and coordination so that public and private resources are better aligned for homeland security. The national strategy established a framework for this approach by identifying critical mission areas with intergovernmental initiatives in each area. For example, the strategy identified such initiatives as modifying federal grant requirements and consolidating funding sources to state and local governments. The strategy further recognized the importance of assessing the capability of state and local governments, developing plans, and establishing standards and performance measures to achieve national preparedness goals. In addition, many aspects of DHS' success depend on its maintaining and enhancing working relationships within the intergovernmental system as it relies on state and local governments to accomplish its mission. In our view, intergovernmental and interjurisdictional coordination in managing federal first-responder grants is as important in the NCR as it is anywhere in the nation. As noted in our May 2004 report and June 2004 testimony, the creation of DHS was an initial step toward reorganizing the federal government to respond to some of the intergovernmental challenges identified in the National Strategy for Homeland Security. ONCRC was created by the Homeland Security Act. According to NCR emergency management officials we contacted during the time of our previous reviews, ONCRC could play a potentially important role in assisting them in implementing a coordinated, well-planned effort to use federal resources to improve the region's preparedness. As we have stated in the past, meeting the office's statutory mandate would fulfill that role. The Homeland Security Act established ONCRC within DHS to oversee and coordinate federal programs for, and relationships with, federal, state, local, and regional authorities in the NCR. The ONCRC's responsibilities are primarily ones of coordination, assessment, and advocacy.
With regard to coordination, the ONCRC was mandated to: coordinate the activities of DHS relating to the NCR, including cooperation with DHS' Office for State and Local Government Coordination; coordinate with federal agencies in the NCR on terrorism preparedness to ensure adequate planning, information sharing, training, and execution of the federal role in domestic preparedness activities; coordinate with federal, state, local, and regional agencies and the private sector in the NCR on terrorism preparedness to ensure adequate planning, information sharing, training, and execution of domestic preparedness activities among these agencies and entities; and serve as a liaison between the federal government and state, local, and regional authorities, and private sector entities in the NCR to facilitate access to federal grants and other programs. ONCRC also has responsibilities related to resource and needs assessments and advocating for needed resources in the NCR, including: assessing and advocating for resources needed by state, local, and regional authorities in the NCR to implement efforts to secure the homeland; and submitting an annual report to Congress that (1) identifies resources required to fully implement homeland security efforts in the NCR, (2) assesses progress in implementing homeland security efforts in the NCR, and (3) includes recommendations to Congress regarding additional resources needed to fully implement homeland security efforts in the NCR. (According to the ONCRC, the first annual report is now with the Office of Management and Budget for review.) We recognize that ONCRC's missions and tasks are not easy. The overall job of promoting domestic preparedness in a large area with a huge federal presence is daunting. The NCR is a complex multijurisdictional area comprising the District of Columbia and surrounding county and city jurisdictions in Maryland and Virginia. Coordination within this region presents the challenge of working with numerous jurisdictions that vary in size, political organization, and experience in managing large emergencies. As we noted in our May 2004 report on the management of funds in the NCR, effectively managing first responder grant funds requires the ability to measure progress and provide accountability for the use of the funds. To do this, it is necessary to take the following steps (a schematic sketch of steps 2 through 8 follows this passage):
1. Develop and implement strategies for the use of the funds that identify key goals and priorities;
2. Establish performance baselines;
3. Develop and implement performance goals and data quality standards;
4. Collect reliable data;
5. Analyze those data;
6. Assess the results of that analysis;
7. Take action based on those results; and
8. Monitor the effectiveness of actions taken to achieve the designated performance goals.
This strategic approach to homeland security includes identifying threats and managing risks, aligning resources to address them, and assessing progress in preparing for those threats and risks. At the same time, it is important to recognize that the equipment, skills, and training required to prepare for and respond to identified terrorist threats and risks may be applicable to non-terrorist risks as well. For example, the equipment, skills, and training required to respond effectively to a discharge of lethal chlorine gas from a rail car are much the same whether the cause of the discharge is an accidental derailment or a terrorist act.
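As a purely illustrative sketch of steps 2 through 8, the comparison at the core of such a measurement cycle can be expressed compactly: collected data are compared against baselines and goals, and lagging goals are flagged for action. No such system existed in the NCR at the time of this testimony; every name and figure below is hypothetical.

```python
# Hypothetical sketch of the measure-and-monitor cycle (steps 2-8 above).
# All names, goals, and figures are invented for illustration; this is
# not a system that existed in the NCR.
from dataclasses import dataclass

@dataclass
class PerformanceGoal:
    name: str        # what is being measured
    baseline: float  # measured starting point (step 2)
    target: float    # designated performance goal (step 3)
    current: float   # latest reliable measurement (steps 4 and 5)

    def gap(self) -> float:
        """Remaining distance to the target (step 6)."""
        return max(0.0, self.target - self.current)

    def progress(self) -> float:
        """Share of the baseline-to-target distance closed so far."""
        span = self.target - self.baseline
        return (self.current - self.baseline) / span if span else 1.0

def review(goals: list[PerformanceGoal]) -> list[str]:
    """Flag goals needing action, least progress first (steps 7 and 8)."""
    lagging = [g for g in goals if g.gap() > 0]
    return [g.name for g in sorted(lagging, key=lambda g: g.progress())]

# Hypothetical regionwide preparedness goals.
goals = [
    PerformanceGoal("responders trained on regional plan", 0.20, 0.90, 0.45),
    PerformanceGoal("jurisdictions on shared radio system", 0.30, 1.00, 0.85),
]
print(review(goals))  # lists the lagging goal first for priority action
```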
As we reported in May 2004, in fiscal years 2002 and 2003, the Departments of Homeland Security, Justice, and Health and Human Services awarded about $340 million through 16 first-responder grants to NCR jurisdictions to enhance regional emergency preparedness. Of these funds, $60.5 million were from the UASI grant, designated for regionwide needs. The remaining funds, about $279.5 million, were available to local jurisdictions for a wide variety of needs, such as equipment and training, and local jurisdictions determined how these funds were to be spent. Local jurisdictions used or planned to use money from those grants to buy equipment and to implement training and exercises for the area's first responders, as well as improve planning for responding to a terrorist event. We have not reviewed how funds were spent since the issuance of our May 2004 report; however, spending could not be based on a coordinated plan for enhancing regional first responder capacities and preparedness because such a plan does not yet exist, although one is being prepared. In May 2004, we reported that ONCRC and the NCR faced three interrelated challenges in managing federal funds in a way that maximizes the increase in first responder capacities and preparedness while minimizing inefficiency and unnecessary duplication of expenditures. These were the lack of (1) preparedness standards; (2) a coordinated regionwide plan for establishing first responder performance goals, needs, and priorities, and assessing the benefits of expenditures; and (3) a readily available, reliable source of data on the federal grant funds available to first responders in the NCR and their use. Without the standards, a regionwide plan, and data on available funds and spending, it will be extremely difficult to determine whether NCR first responders have the ability to respond to threats and emergencies with well-planned, well-coordinated, and effective efforts that involve a variety of first responder disciplines from NCR jurisdictions. Moreover, without such data, it is not clear how the ONCRC can fulfill its statutory mandate to assess and advocate for resources needed by state, local, and regional authorities in the NCR to implement efforts to secure the homeland. During our review, we could identify no reliable data on preparedness gaps in the NCR, on which of those gaps were most important, or on the status of efforts to close those gaps. The baseline data needed to assess those gaps had not been fully developed or made available on an NCR-wide basis. We also noted that at the time our May 2004 report was released, DHS and ONCRC appeared to have had a limited role in assessing and analyzing first responder needs in the NCR and developing a coordinated effort to address those needs through the use of federal grant funds. ONCRC has focused principally on developing a plan for using the UASI funds—funds that were intended principally for addressing regionwide needs. In its comments on a draft of our May 2004 report, DHS said that a governance structure approved in February 2004 would accomplish essential regionwide coordination. We agree that this structure has the potential to accomplish essential regionwide coordination, but it is not clear how it can do so effectively without comprehensive data on funds available for enhancing first responder skills and capabilities in the NCR, their use, and their effect on meeting identified performance goals.
To help ensure that emergency preparedness grants and associated funds are managed in a way that maximizes their effectiveness, our May 2004 report included three recommendations to the Secretary of the Department of Homeland Security. As discussed in more detail below, some progress has been made in implementing these recommendations, but none has yet been fully implemented. Recommendation 1: Work with the NCR jurisdictions to develop a coordinated strategic plan to establish goals and priorities for enhancing first responder capacities that can be used to guide the use of federal emergency preparedness funds. Actions taken: According to an ONCRC official, a final draft for review has been circulated to key stakeholders. According to the Director, ONCRC, the plan will feature measurable goals, objectives, and performance measures. Recommendation 2: Monitor the strategic plan's implementation to ensure that funds are used in a way that promotes effective expenditures that are not unnecessarily duplicative. Actions taken: Monitoring implementation of the strategic plan cannot be accomplished absent a plan. Importantly, to monitor the plan's implementation, data will be needed on funds available and spending from all first responder grants available to jurisdictions in the NCR, such as the State Homeland Security Grant Program. The NCR, through the D.C. Office of Homeland Security, has a system for tracking the use of UASI funds in the NCR and other homeland security grant funds available to D.C., such as the State Homeland Security Grants. However, at this time, it does not have an automated, uniform system to track non-UASI grant funds available and used by other NCR jurisdictions. Information on the projects funded in NCR jurisdictions by funds other than UASI is obtained through the monthly meetings and weekly conference calls of the Senior Policy Group and full-day quarterly meetings of jurisdictions in the Mid-Atlantic area, sponsored by the Office of Domestic Preparedness (ODP). These meetings provide contacts for obtaining information, as needed, on grant allocations and expenditures in jurisdictions both within and outside the NCR in the mid-Atlantic region. The ONCRC recognizes the need to develop a more systematic means of capturing all homeland security grant funds available and used throughout the NCR. Recommendation 3: Identify and address gaps in emergency preparedness and evaluate the effectiveness of expenditures in meeting those needs by adopting standards and preparedness guidelines based on likely scenarios for the NCR and conducting assessments based on them. Actions taken: To date, no systematic gap analysis has been completed for the region as a whole. The NCR plans to use the Emergency Management Accreditation Program (EMAP) as a means of conducting a gap analysis and assessing NCR jurisdictions against EMAP's standards for emergency preparedness—an effort expected to be completed by March 2006. How this effort would be integrated with DHS' capabilities-based planning and assessments for first responders has not yet been determined, pending the issuance of DHS' final version of the National Preparedness Goal in October 2005.
At the national level, DHS' efforts to develop policies, guidance, and standards that can be used to assess and develop first responder skills and capabilities have included three policy initiatives: (1) a national response plan (what needs to be done to manage a major emergency event); (2) a command and management process—the National Incident Management System—to be used during any emergency event nationwide (how to do what needs to be done); and (3) a national preparedness goal (how well it should be done). Since our May 2004 report, DHS, as part of developing the national preparedness goal, developed 15 scenarios (12 terrorist events, a flu pandemic, a hurricane, and an earthquake) of "national significance" that would require coordinated federal, state, and local response efforts; the critical tasks associated with these scenarios; and the capabilities—in terms of planning, training, equipment, and exercises—that first responders would need to develop and maintain to effectively prepare for and respond to major emergency events. The 300 critical tasks and 36 capabilities were intended as benchmarks first responders could use to assess their relative level of preparedness and capacity to prevent, mitigate, respond to, and recover from major emergency events, including terrorist attacks. Because no single jurisdiction or agency would be expected to perform every task, possession of a target capability could involve enhancing and maintaining local resources, ensuring access to regional and federal resources, or some combination of the two. The January 25, 2005, proposal for the EMAP assessment program does suggest one way in which the NCR may include the DHS scenarios, critical tasks, and capabilities in the EMAP assessment project. The proposal states: "Should the NCR or local jurisdictions within the region desire to conduct (a) hazard identification, risk assessment, and impact analysis activities, and/or (b) capabilities assessment against catastrophic scenarios using federally provided technical assistance during the period of this project, EMAP representatives will coordinate with local and regional personnel to ensure that assessment activities and products are complementary." Comprehensive, coordinated emergency planning and preparedness are especially important in the National Capital Region. As we noted in the recent past, the ongoing security risk to the NCR requires a comprehensive, coordinated, and carefully planned approach to the expenditure of federal first responder grants. This requires a regionwide strategic plan, performance goals, an assessment of preparedness gaps to guide priority setting, and continuing assessments of the progress made in closing identified gaps. The NCR has completed a draft strategic plan and has established a process for assessing existing preparedness gaps. But it still needs to develop a means of routinely obtaining reliable data on all funds available for enhancing emergency preparedness in the NCR and their uses. It is important to know how all first responder funds are being spent in the NCR for setting priorities and assessing the results of funds spent. The NCR has selected the EMAP emergency preparedness standards as its performance standards for the region, but it will be necessary to integrate the EMAP standards with the set of 36 performance capabilities for first responders that DHS has developed as part of its National Preparedness Goal.
The NCR, in common with jurisdictions across the nation, faces the challenge of implementing DHS requirements for its three key policy initiatives—the National Incident Management System, the National Response Plan, and the National Preparedness Goal. Successfully accomplishing all of these things will require a sound strategic plan; effective coordination; perseverance; and reliable data on available funds, their use, and the results achieved. As we noted in our September 2004 report, the NCR's UASI Governance Structure represents a positive step toward instituting a collaborative, multijurisdictional, regionwide planning structure. Fully implementing the recommendations in our May 2004 report would, in our view, be a major step toward developing the structure, processes, and data needed to assess current first responder skills and capabilities in the NCR and monitor the success of efforts to close identified gaps and achieve designated performance goals for the NCR. That concludes my statement, Mr. Chairman. I would be pleased to respond to any questions you or other members of the Committee may have. For questions regarding this testimony, please contact William O. Jenkins, Jr., at (202) 512-8777. Ernie Hazera also made key contributions to this testimony.
After the tragic events of September 11, 2001, the National Capital Region (NCR)--the District of Columbia and nearby jurisdictions in Maryland and Virginia--was recognized as a significant potential target for terrorism. In fiscal years 2002 and 2003, about $340 million in emergency preparedness funds were allocated to NCR jurisdictions. In May 2004, GAO issued a report (GAO-04-433) that examined (1) the use of federal emergency preparedness funds allocated to NCR jurisdictions, (2) the challenges within the NCR to organizing and implementing efficient and effective preparedness programs, (3) any emergency preparedness gaps that remain in the NCR, and (4) the Department of Homeland Security's (DHS) role in the NCR. The report made recommendations to the Secretary of DHS to enhance the management of first responder grants in the NCR. We also reported in September 2004 (GAO-04-1009) that the NCR's Governance Structure for the Urban Area Security Initiative could facilitate collaborative, coordinated, and planned management and use of federal funds for enhancing emergency preparedness, if implemented as planned. DHS agreed to implement these recommendations. A coordinated, targeted, and complementary use of federal homeland security grants is important in the NCR and elsewhere. These grants are one means of achieving an important goal: enhancing the ability of first responders to prevent, prepare for, respond to, and recover from terrorist and other incidents with well-planned, well-coordinated, and effective efforts that involve a variety of first responders from multiple jurisdictions. To oversee and coordinate federal emergency preparedness programs among federal, state, local, and regional authorities in the NCR, the Homeland Security Act established the Office for National Capital Region Coordination (ONCRC) within DHS. The ongoing security risk requires a comprehensive, coordinated, and carefully planned approach to the expenditure of federal first responder grants. This requires an NCR-wide strategic plan, performance goals, an assessment of preparedness gaps to guide priority setting, and continuing assessments of the progress made in closing identified gaps. This testimony summarizes our prior work and provides information on the implementation of the three recommendations in our May 2004 report. First, we recommended that DHS work with the NCR jurisdictions to develop a coordinated strategic plan. DHS and NCR jurisdictions have completed a final draft for review that has been circulated to key stakeholders. Second, we recommended that DHS monitor the plan's implementation, which must await a final plan. To implement and monitor the future plan, data will be needed regarding the funding available and used for implementing the plan and enhancing first responder capabilities in the NCR--data that are not currently routinely available. The NCR, through the District of Columbia's Office of Homeland Security, has a system for tracking the use of Urban Area Security Initiative funds in the NCR as well as other homeland security grant funds available to Washington, D.C. However, the NCR does not currently track non-Urban Area Security Initiative funds available to and used by other NCR jurisdictions in an automated, uniform way. Rather, it obtains information about those funds through a variety of means, including teleconferences involving senior emergency preparedness officials.
Third, we recommended that DHS identify and address preparedness gaps and evaluate the effectiveness of expenditures by conducting assessments based on established guidelines and standards. No systematic gap analysis has been completed for the region; however, by March 2006, the NCR plans to complete an effort to use the Emergency Management Accreditation Program (EMAP) as a means of conducting a gap analysis and assessing NCR jurisdictions against EMAP's national preparedness standards. The result would be a report on the NCR's compliance with EMAP standards for emergency preparedness and an analysis of areas needing improvement to address in the short and long term. The ONCRC has not determined how this effort would be integrated with DHS' capabilities-based planning and assessments for first responders, pending the issuance of DHS' final version of the National Preparedness Goal in October 2005.
In 2003, the Department of Homeland Security was created and tasked with integrating numerous agencies and offices with varying missions from the General Services Administration; the Federal Bureau of Investigation; and the Departments of Agriculture, Defense, Energy, Health and Human Services, Justice, Transportation, and Treasury; as well as the Coast Guard and the Secret Service. Eight DHS components have internal procurement offices with a Head of Contracting Activity (HCA) who reports directly to the component head and is accountable to the Chief Procurement Officer (CPO). The Office of Procurement Operations (OPO) also has an HCA who provides contracting support to all other components and reports directly to the CPO. The HCA for each component has overall responsibility for the day-to-day management of the component's acquisition function. Figure 1 shows the organizational relationship between the HCAs and the CPO. We reported in September 2006 that DHS planned to fully implement its acquisition oversight program during fiscal year 2007. The plan is composed of four recurring reviews: self-assessment, operational status, on-site, and acquisition planning. The CPO has issued an acquisition oversight program guidebook, provided training on self-assessment and operational status reviews, and begun implementation of the four reviews in the plan (see table 1). The acquisition oversight plan generally incorporates basic principles of an effective and accountable acquisition function and includes mechanisms to monitor acquisition performance. Specifically, the plan incorporates DHS policy, internal controls, and elements of an effective acquisition function: organizational alignment and leadership, policies and processes, human capital, knowledge and information management, and financial accountability. While it is too early to assess the plan's overall effectiveness in improving acquisition performance, initial implementation of the first self-assessment has helped most components prioritize actions to address identified weaknesses. In addition, the CPO has helped several components implement organizational and process changes that may improve acquisition performance over time. For example, one component, with the assistance of the CPO, elevated its acquisition office to a level equivalent to its financial office. However, the acquisition planning reviews are not sufficient to determine whether components adequately plan their acquisitions. Federal acquisition regulations and DHS directives require agencies to perform acquisition planning in part to ensure good value, including cost and quality. Component HCAs are responsible for ensuring that acquisition plans are completed in a timely manner, include an efficient and effective acquisition strategy, and that the resulting contract action or actions support the component's mission. Inadequate procurement planning can lead to higher costs, schedule delays, and systems that do not meet mission objectives. Several recent reviews have identified problems in DHS's acquisition planning. In 2006, we reported that DHS often opted for speed and convenience in lieu of planning and analysis when selecting a contracting method and may not have obtained a good value for millions of dollars in spending. As part of a 2006 special review of the Federal Emergency Management Agency's (FEMA) contracts, DHS's CPO found significant problems with the requirements process and acquisition strategy and recommended in part that FEMA better plan acquisitions.
For example, the CPO reported that FEMA's total cost for its temporary housing program could have been significantly reduced if FEMA had appropriately planned to acquire temporary homes before the fiscal year 2005 hurricane season. A 2006 internal review of the Office of Procurement Operations' contracts similarly found little evidence that acquisition planning occurred in compliance with regulations. While a key goal of the oversight program is to improve acquisition planning, we found potential problems with each of the three elements of the acquisition planning reviews, as shown in table 2. DHS faces two challenges in achieving the goals of its acquisition oversight plan. First, the CPO has had limited resources to implement the plan reviews. When implementation of the plan began in 2006, only two personnel were assigned acquisition oversight as their primary duty. The CPO received funding for eight additional oversight positions. However, officials told us that they have struggled to find qualified individuals. As of June 2007, seven positions had been filled. As part of the Department's fiscal year 2008 appropriation request, the CPO is seeking two additional staff, for a total of 12 oversight positions. The CPO will also continue to rely on resources from components to implement the plan, such as providing staff for on-site reviews. Second, while the CPO can make recommendations based on oversight reviews, the component head ultimately determines what, if any, action will be taken. DHS's organization relies on cooperation and collaboration between the CPO and components to accomplish departmentwide acquisition goals. However, to the extent that the CPO and components disagree on needed actions, the CPO lacks the authority to require compliance with its recommendations. We have previously reported that the DHS system of dual accountability results in unclear working relationships between the CPO and component heads. DHS policy also leaves unclear what enforcement authority the CPO has to ensure that acquisition initiatives are carried out. Accordingly, we recommended that the Secretary of Homeland Security provide the CPO with sufficient enforcement authority to effectively oversee the Department's acquisitions—a recommendation that has yet to be implemented. CPO officials believe that there are other mechanisms to influence component actions, such as providing input into HCA hiring decisions and performance appraisals. Making the most of opportunities to strengthen its acquisition oversight program—along with overcoming implementation challenges—could position the agency to achieve better acquisition outcomes. We identified two such opportunities: periodic external assessments of the oversight program and sharing knowledge gained from the oversight plan reviews across the department. Federal internal control standards call for periodic external assessments of programs to help ensure their effectiveness. An independent evaluation of DHS's acquisition oversight program by the Inspector General or an external auditor could help strengthen the oversight conducted through the plan and better ensure that the program is fully implemented and maintaining its effectiveness over time. In particular, an external assessment with results communicated to appropriate officials can help maintain the strength of the oversight program by alerting DHS to acquisition concerns that require oversight, as well as monitoring the plan's implementation.
For example, the plan initially called for components to complete the self-assessment by surveying their acquisition staff; however, the level of staff input for the first self-assessments was left to the discretion of the HCAs. Specifically, CPO officials advised HCAs to complete the questions themselves, delegate the completion to one or more staff members, or select a few key people from outside their organization to participate. While evolution of the plan and its implementation is to be expected and can result in improvements, periodic external assessments of the plan could provide a mechanism for monitoring changes to ensure they do not diminish oversight. Federal internal control standards also call for effective communication to enable managers to carry out their responsibilities and better achieve components' missions. The CPO has been assigned responsibility for ensuring the integrity of the oversight process—in part by providing lessons learned for acquisition program management and execution. While the CPO intends to share knowledge with components by posting lessons learned from operational status reviews to DHS's intranet, according to CPO officials, the Web site is currently limited to providing guidance and training materials and does not include a formal mechanism to share lessons learned among components. In addition to the Web site, other opportunities may exist for sharing knowledge. For example, CPO officials indicated that the CPO meets monthly with component HCAs to discuss acquisition issues. The monthly meetings could provide an opportunity to share and discuss lessons learned from oversight reviews. Finally, knowledge could be regularly shared with component acquisition staff through internal memorandums or reports on the results of oversight reviews. Integrating 22 federal agencies while implementing acquisition processes needed to support DHS's national security mission is a Herculean effort. The CPO's oversight plan generally incorporates basic principles of an effective acquisition function, but absent clear authority, the CPO's recommendations for improved acquisition performance are, in effect, advisory. Additional actions are needed to achieve the plan's objectives, and opportunities exist to strengthen oversight through enhanced internal controls. Until DHS improves its approach for overseeing acquisition planning, the department will continue to be at risk of failing to identify and address recurring problems that have led to poor acquisition outcomes. To improve oversight of component acquisition planning processes and the overall effectiveness of the acquisition oversight plan, we recommend that the Secretary of Homeland Security direct the Chief Procurement Officer to take the following three actions: Reevaluate the approach to oversight of acquisition planning reviews and determine whether the mechanisms under the plan are sufficient to monitor component actions and improve component acquisition planning efforts. Request a periodic external assessment of the oversight plan's implementation and ensure findings are communicated to and addressed by appropriate officials. Develop additional opportunities to share lessons learned from oversight reviews with DHS components. We provided a draft of this report to DHS for review and comment. In written comments, DHS generally agreed with our facts and conclusions, concurred with all of our recommendations, and provided information on the actions it would take to address them.
Regarding the recommendation on acquisition planning, DHS stated that during on-site reviews it will verify that CPO comments on acquisition plans have been sufficiently addressed. DHS is also developing training for component HCAs and other personnel to emphasize that CPO comments related to compliance with applicable laws and regulations must be incorporated into acquisition plans. The CPO will also annually require an acquisition planning review as part of the oversight plan’s operational status reviews. Additionally, DHS intends to change the advanced acquisition planning database so that historical data are available for review. With regard to the recommendation for periodic external assessment of the oversight plan, DHS stated that it plans to explore opportunities to establish a periodic external review of the oversight program. However, the first priority of the acquisition oversight office is to complete initial component on-site reviews. For the recommendation to develop additional opportunities to share lessons learned from oversight reviews, DHS stated that it intends to share consolidated information from operational status and on-site reviews in regular meetings with component HCA staff. In addition, the CPO plans to explore further opportunities for sharing oversight results with the entire DHS acquisition community. DHS also responded to a 2005 GAO recommendation on the issue of the CPO lacking authority over the component HCAs. DHS commented that it is in the process of modifying its acquisition lines of business management directive to ensure that no DHS contracting organization is exempt. In addition, DHS stated that the Under Secretary for Management has authority as the Chief Acquisition Officer to monitor acquisition performance, establish clear lines of authority for making acquisition decisions, and manage the direction of acquisition policy for the department, and that these authorities also devolve to the CPO. Modifying the management directive to ensure no DHS contracting organization is exempt is a positive step. However, until DHS formally designates the Chief Acquisition Officer and modifies applicable management directives to support this designation, DHS’s existing policy of dual accountability between the component heads and the CPO leaves unclear the CPO’s authority to enforce corrective actions to achieve the department’s acquisition goals, which was the basis of our earlier recommendation. DHS’s letter is reprinted in appendix II. DHS also provided technical comments, which were incorporated as appropriate. We are sending copies of this report to interested congressional committees and the Secretary of Homeland Security. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have questions regarding this report, please contact me at (202) 512-4841 or huttonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Principal contributors to this report were Amelia Shachoy, Assistant Director; William Russell; Tatiana Winger; Heddi Nieuwsma; Lily Chin; Karen Sloan; and Sylvia Schatz.
To determine actions taken by DHS to implement the acquisition oversight plan and the challenges DHS faces, we reviewed prior GAO and DHS Office of the Inspector General reports pertaining to acquisition oversight, as well as relevant DHS documents, such as the oversight plan, documentation of completed reviews, and guidance to components, including training materials. We interviewed officials in the CPO’s office and the nine DHS components with acquisition offices. We compared efforts undertaken by DHS officials to implement the plan against established program policies, the fiscal year 2007 implementation schedule, and other guidance materials. We reviewed four component self-assessments that were provided to us and also reviewed areas in which the CPO provided assistance to components based on self-assessment results. We also reviewed Standards for Internal Control in the Federal Government. We conducted our work from February 2007 to June 2007 in accordance with generally accepted government auditing standards.
The Department of Homeland Security (DHS), the third largest department in federal procurement spending in fiscal year 2006, has faced ongoing cost, schedule, and performance problems with major acquisitions and procurement of services. In December 2005, DHS established an acquisition oversight program to provide insight into and improve components' acquisition programs. In 2006, GAO reported that DHS faced challenges in implementing its program. Congress mandated that DHS develop an oversight plan and tasked GAO with analyzing the plan. GAO (1) evaluated actions DHS and its components have taken to implement the acquisition oversight plan and (2) identified implementation challenges. GAO also identified opportunities for strengthening oversight conducted through the plan. GAO reviewed relevant DHS documents and GAO and DHS Inspector General reports and interviewed officials in the office of the Chief Procurement Officer (CPO) and nine DHS components. The CPO has taken several actions to implement DHS's acquisition oversight plan--which generally incorporates basic principles of an effective and accountable acquisition function. The plan monitors acquisition performance through four recurring reviews: self-assessment, operational status, on-site, and acquisition planning. Each component has completed the first self-assessment, which has helped components identify and prioritize acquisition weaknesses. In addition, each component has submitted an initial operational status report to the CPO and on-site reviews are being conducted. Despite this progress, the acquisition planning reviews are not sufficient to determine if components adequately plan their acquisitions--in part because a required review has not been implemented and the CPO lacks visibility into components' planning activities. DHS faces two key challenges in implementing its acquisition oversight plan. First, the CPO has had limited oversight resources to implement plan reviews. However, recent increases in staff have begun to address this challenge. Second, the CPO lacks sufficient authority to ensure components comply with the plan--despite being held accountable for departmentwide management and oversight of the acquisition function. GAO has previously recommended that DHS provide the CPO with sufficient enforcement authority to enable effective acquisition oversight. In addition to these challenges, GAO identified two opportunities to strengthen internal controls for overseeing the plan's implementation and for increasing knowledge sharing. Specifically, independent evaluations of DHS's oversight program could help ensure that the plan maintains its effectiveness over time. Sharing knowledge and lessons learned could provide DHS's acquisition workforce with the information needed to improve their acquisition processes and better achieve DHS's mission.
As of January 2003, Congress had provided about $38 billion, through a total of four appropriations, for DOD’s emergency response needs related to the war on terrorism. In September 2001 and December 2001, Congress enacted two emergency supplemental appropriations to quickly provide initial funds to meet the emergency needs of DOD and other federal agencies to recover from and respond to the September 11 terrorist attacks. These supplementals, enacted in two separate fiscal years, fiscal year 2001 and fiscal year 2002, provided about $17.5 billion to DOD. Given the urgent circumstances, Congress sought an expeditious mechanism to transfer funds and, therefore, provided funds to DOD through the Defense Emergency Response Fund. This fund is a distinct account, and DOD manages it separately from its regular appropriations accounts. Of the initial $17.5 billion, DOD received about $15 billion in the Defense Emergency Response Fund, $2.3 billion was provided to other accounts, and $0.2 billion was rescinded. Shortly after the September 11, 2001, attacks, OMB and DOD agreed on certain parameters for managing funds placed in the Defense Emergency Response Fund, including that funds would be obligated in 10 funding categories. OMB stipulated that DOD was to manage the allocation of funds within the 10 categories and could not transfer these funds to its regular appropriations accounts. Moreover, OMB used these categories in its reports to Congress on the expenditure of the funds. Based on DOD’s limited estimates regarding requirements for each category, OMB apportioned funds to the Defense Emergency Response Fund. According to DOD officials, these estimates had to be prepared quickly, within days of the attacks, and reflected the best judgment of DOD’s needs at the time without knowing the exact nature of the U.S. response to the attacks. For each category, DOD also identified multiple line items for which expenses could be incurred. Figure 1 identifies the 10 categories and identifies some of the line items that DOD established for one of the categories. The 10 categories do not correlate with DOD’s existing appropriation accounts (see app. II). However, the expenses related to a category would be similar to the types of expenses funded under several appropriation accounts. For example, DOD may incur operation and maintenance, and procurement expenses under the categories of improved command and control and enhanced force protection. Because the 10 funding categories established by OMB and DOD did not correlate with DOD’s existing appropriation account structure, a dual system of accounting emerged, which some believed to be cumbersome for tracking purposes. Therefore, for subsequent appropriations—a second supplemental appropriation in fiscal year 2002 and DOD’s regular appropriation in fiscal year 2003—Congress changed its method of providing funds. Specifically, in these appropriations, DOD received about $20.5 billion in funds, either through the Defense Emergency Response Fund for transfer to its regular appropriation accounts (fiscal year 2002) or directly to its appropriation accounts (fiscal year 2003). Appendix II provides additional details on these two appropriations. On September 14, 2001, OMB issued specific guidelines and criteria for federal departments and agencies to apply in identifying and evaluating requirements to be funded under the initial emergency supplemental appropriations.
This guidance covered two areas—response and recovery, and preparedness and mitigation—and outlined 15 conditions to be met. Among other things, these conditions stipulated that requirements must be known, not speculative; urgent, not reasonably handled at a later time; and unable to be reasonably met through the use of existing agency funds. Appendix III lists OMB’s guidelines and criteria. Because expenses related to contingency operations could be funded with emergency response funds, DOD also relied on its existing financial management regulation for guidance. Specifically, volume 12, chapter 23 of this regulation requires that costs incurred in support of contingency operations be limited to the incremental costs of the operation—costs that are above and beyond the baseline costs for training, operations, and personnel. The regulation further states that incremental costs are additional costs that would not have been incurred had the contingency operation not been supported. DOD adhered to OMB guidance in managing the allocation of $15 billion in initial emergency response funds placed in the Defense Emergency Response Fund after the September 11 attacks. While DOD instructed its components to follow OMB guidelines and internal DOD guidelines and financial regulations in obligating emergency response funds, it did not provide specific internal guidance to assist the components in determining allowable expenses. As a result, command officials were sometimes uncertain about the appropriateness of expenses and often had to rely on their best judgment in obligating these funds. In accordance with OMB guidance, DOD reported on its allocation of funds to its components in 10 funding categories and did not transfer these funds into its regular appropriation accounts. As of December 2002, DOD reported it had obligated about $14 billion of the $15 billion provided in the emergency supplementals of fiscal years 2001 and 2002 (see table 1). The data shown in table 1 are based on monthly obligation reports from DOD’s defense financial accounting system database. These funds do not expire and, therefore, are available until used. We did not verify the accuracy or completeness of these data. As table 1 shows, DOD-reported data indicate that, as of December 31, 2002, over $1 billion of the funds in the Defense Emergency Response Fund remained unobligated. Over half of that amount, $526 million, had not been allocated to the 10 funding categories. According to DOD officials, in March 2003, DOD plans to review the status of the unobligated funds and validate whether requirements for the funds continue to exist. For each of the 10 funding categories established by OMB and DOD, DOD identified line items that could be funded under the 10 categories. However, these line items were broad in nature, and DOD did not identify the specific types of expenses that could be funded within each line item. In addition, OMB, among other things, directed that any requirement to be funded must reflect an urgent and known need. However, DOD did not establish any specific parameters to define the meaning of urgent and known. Also, in the event that funds in the Defense Emergency Response Fund would be needed to meet the requirements of contingency operations, DOD stipulated that funds would be used to cover the incremental costs of those operations.
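The incremental-cost rule in the regulation amounts to a simple computation. The following sketch is ours, not drawn from DOD guidance or systems; the function name and dollar figures are hypothetical and serve only to illustrate the distinction between baseline and incremental costs.

```python
# Illustrative sketch only -- not DOD's actual system or guidance.
# Incremental costs, per volume 12, chapter 23 of DOD's financial
# management regulation, are costs above and beyond baseline costs
# for training, operations, and personnel, i.e., costs that would not
# have been incurred had the contingency operation not been supported.

def incremental_cost(total_cost: float, baseline_cost: float) -> float:
    """Return the portion of total costs chargeable to the contingency."""
    return max(total_cost - baseline_cost, 0.0)

# Hypothetical example: a unit with $10 million in baseline operating
# costs incurs $14 million while supporting a contingency operation.
print(incremental_cost(14_000_000, 10_000_000))  # 4000000.0
```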
DOD directed components to use its existing financial management regulation in reporting incremental costs, but it did not offer any further guidance as to how commands were to distinguish incremental from baseline costs. In May 2002, we reported that DOD’s financial management regulation did not provide sufficient information on what types of costs met DOD’s definition of incremental costs, which resulted in various interpretations among the services—and even among units within a service—as to appropriate and proper expenditures. As a result, we recommended that DOD expand its financial management regulation to include more comprehensive guidance governing the use of contingency funds. DOD agreed with our recommendation and, as of April 2003, is still working on revisions to its guidance. Because DOD is in the process of improving its guidance based on recommendations from our prior work, we are not making a new recommendation in this report. In the absence of detailed guidance, command officials often had to use their best judgment in deciding how to spend the defense emergency response funds, and we found the same type of uncertainty among commands as we reported in May 2002. For example, command officials told us that determining what could be purchased from each category and line item was often difficult because the categories and line items were broad and generally differed from DOD’s regular appropriation accounts. For instance, DOD designated mobilization of guard and reserves as an allowable line item for the category of increased worldwide posture. Mobilization involves many factors, such as special pay, transportation, and equipment, but DOD did not specify which could be appropriately funded. We also found differing interpretations existed as to whether requirements were urgent or known. Some commands used emergency response funds on items that could not be delivered in a reasonable time frame to be considered urgent. For example, one command purchased an RC-135 Rivet Joint aircraft for intelligence, communications, and reconnaissance. Typically, this aircraft would not be fielded for 8 years because it needed multiple contractors to install and test its integrated electronics suite. In another example, one command, before it knew its specific role in supporting the war on terrorism, obligated $52 million for spare parts based on an analysis of prior usage. By contrast, another command was reluctant to obligate funds until its specific role had been determined. Moreover, in some cases, command officials were unclear about how to determine costs that were incremental to their regular appropriations. For example, commands used emergency response funds to pay for accelerated ship maintenance that was already planned for future budgets and to purchase computer and communication upgrades that were previously unfunded in their regular appropriations. DOD officials told us that they had to quickly develop funding requirements after the terrorist attacks and used OMB guidance and available DOD instructions to instruct their components on how the funds could be used. The officials said that obligating funds in 10 categories and related line items that were not directly related to their appropriation accounts was confusing. In recognizing the lack of detailed guidance, DOD officials also told us that they maintained constant communication among all levels of DOD, especially at the command and unit levels, in order to review and clarify the use of emergency response funds.
Furthermore, the officials said that they believe most of the funds were obligated for appropriate purposes. DOD’s ability to track funds appropriated for the war on terrorism has varying limitations depending on the appropriation. For funds provided under the two emergency supplemental appropriations of fiscal years 2001 and 2002 and managed out of the Defense Emergency Response Fund, DOD is able to report a breakdown of obligations by the 10 categories, but found that tracking these obligations was cumbersome because the categories do not correlate with its regular appropriations account structure. For the two subsequent appropriations in fiscal years 2002 and 2003, DOD cannot separately identify obligations funded with emergency response funds because these funds are commingled with funds appropriated for other purposes, and DOD’s accounting system does not distinguish among obligations. For example, in its fiscal year 2003 appropriation, Congress appropriated about $3.7 billion for the Air Force’s operation and maintenance subactivity group related to primary combat forces, including about $389 million in emergency response funds for expenses related to the war on terrorism, and about $3.3 billion for expenses not related to the war on terrorism. All of these funds were commingled in the Air Force’s operation and maintenance account. Within DOD’s accounting system, DOD records obligations, but does not identify the source of funds. Therefore, at any given time, DOD is only able to track and report total obligations for operation and maintenance purposes and cannot separately identify obligations funded from emergency response funds. DOD officials agreed that DOD’s accounting system does not separately track obligations funded with emergency response funds, but they emphasized that DOD has established procedures intended to track obligations for contingency operations, including operations associated with the war on terrorism, such as in Afghanistan. Under DOD’s financial management regulation, DOD components are required to track and report the incremental costs (obligations) for each contingency operation. DOD established a special code for each operation, and components track obligations in a management tracking system separate from DOD’s accounting system. The components report the total obligations for each contingency operation according to four specific cost categories—personnel, personnel support, operating support, and transportation—and by appropriation account. This information is reported monthly and is provided to Congress. However, the contingency cost categories do not correlate with DOD’s appropriation accounting structure. Also, funding for contingency operations comes both from special funding sources, such as emergency response funds, and from the regular peacetime appropriations given to components. Because DOD’s accounting system does not separately track obligations by funding source, DOD’s reporting does not identify the portion of contingency operations-related obligations funded with emergency response funds.
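The tracking limitation follows from the structure of the obligation records themselves. The following sketch is ours and is not a depiction of DOD’s actual accounting system; it uses hypothetical records patterned on the Air Force example above to show why, once the funding source is not captured at the point of obligation, only commingled account-level totals can be recovered.

```python
# Illustrative sketch only -- hypothetical records, not DOD's system.
# Each obligation record carries an appropriation account but no
# funding-source field, so emergency response funds cannot later be
# separated from regular appropriations.
from collections import defaultdict

obligations = [
    {"account": "Air Force O&M, primary combat forces", "amount": 389e6},  # emergency funds
    {"account": "Air Force O&M, primary combat forces", "amount": 3.3e9},  # regular funds
]

totals = defaultdict(float)
for record in obligations:
    totals[record["account"]] += record["amount"]  # funding source is lost here

# Only the commingled total (about $3.7 billion) can be reported.
for account, amount in totals.items():
    print(f"{account}: ${amount:,.0f}")
```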
During our review, DOD acknowledged the limitations of its ability to track the war on terrorism obligations and acknowledged the continued interest of Congress, OMB, GAO, and other organizations regarding the use of the funds. Starting in December 2002, DOD expanded its reporting on obligations associated with the war on terrorism and contingency operations. Specifically, in addition to continuing the separate tracking of fiscal year 2001 and fiscal year 2002 emergency supplemental funds contained in the Defense Emergency Response Fund, components are now required to report more detailed data on obligations associated with the two subsequent appropriations in fiscal years 2002 and 2003. The additional reporting requirements are as follows: For the fiscal year 2002 supplemental, components are, on a monthly basis, to describe the purpose of the obligation, provide the amount, and identify the appropriation account. For the fiscal year 2003 appropriation, components are, on a monthly basis, to identify which funds they are obligating from their peacetime budget to directly support the global war on terrorism, i.e., cases in which components are using their baseline budget for war on terrorism obligations. The report is to describe the purpose and amount of the obligation and identify the activities not being accomplished and the appropriation account affected. This is referred to as “cash flowing.” Furthermore, components are to start compiling and reporting on four additional cost categories for contingency operations: reconstitution of forces and capability, recapitalization, classified programs, and working capital fund. DOD officials stated that the data are compiled from individual command and unit management tracking systems that are not linked with DOD’s accounting system, an arrangement referred to as parallel tracking. According to DOD officials, the additional reporting is expected to provide the management information that DOD needs to better manage and oversee the war on terrorism obligations and that Congress and others need to exercise oversight responsibilities. Officials believe that requiring components to provide additional data on obligations is preferable to modifying the accounting system to distinguish war on terrorism-related obligations from other obligations. Officials told us that modifying the accounting system would be too costly and time consuming and that the effort would not justify the value added at this time. Also, officials pointed out that the additional work and learning curve associated with a modified accounting system would pose problems because of the complexity of the additional reporting, the time involved in obtaining staff competency, and the need to retrain staff due to assignment rotations. As of February 2003, components were still compiling data; therefore, we are not making a recommendation in this report, but will continue to review DOD’s expanded reporting efforts in our ongoing review of contingency operations costs. In written comments on a draft of this report, DOD partially concurred with the report (see app. IV). Specifically, DOD disagreed that guidance provided to components on how to use emergency response funds was not sufficient. DOD stated that components were clearly instructed to treat expenses as incremental costs as defined in DOD’s financial management regulation and that subsequent meetings were held to clarify this guidance. DOD also noted that, because the category structure used for emergency response funds was unique, some confusion existed among the components. DOD stated that the confusion dissipated as the components became more familiar with the structure.
In our report, we recognized that DOD directed components to rely on the financial regulation, as well as other guidelines, and acknowledged DOD’s view that it maintained constant communication to review and clarify the use of emergency response funds. However, in our May 2002 report, we noted that the financial regulation does not provide sufficient guidance on the types of costs that are defined as incremental, which resulted in various interpretations among the services. DOD agreed with the recommendation made in that report that the regulation be expanded to include more comprehensive guidance. During our work conducted for this report, we found that command officials were sometimes uncertain about whether certain expenses were allowable, including how to determine incremental costs, and sometimes had to use their best judgment in obligating emergency response funds. We continue to believe that more comprehensive guidance is warranted. Because DOD is still revising the guidance based on prior GAO work, we are not making a new recommendation in this report. While DOD stated that our report correctly said that DOD cannot correlate the funding categories for emergency response funds with its appropriation accounts, it believed we were only partially accurate in stating that DOD is unable to track all emergency response funds in its accounting system. DOD noted that it had implemented a process to track incremental costs related to the war on terrorism and that, in particular, the Defense Finance and Accounting Service collects cost information on contingencies from components. DOD also noted that it is implementing procedures to capture the incremental costs of Operation Iraqi Freedom. Our report specifically recognizes that DOD has established procedures intended to track incremental costs for contingency operations, including operations associated with the war on terrorism, and that the components report this type of information. However, we note that this information is compiled in a management tracking system separate from DOD’s accounting system. Furthermore, funding for contingency operations comes both from special funding sources, such as emergency response funds, and from regular peacetime appropriations. Because DOD’s accounting system does not separately track obligations by funding source, DOD’s reporting does not identify the portion of contingency operations-related obligations funded with emergency response funds. Further, DOD partially agreed that its accounting system cannot report on the $20.5 billion in emergency response funds provided for the war on terrorism in fiscal years 2002 and 2003. DOD noted that it received only $13.5 billion and that, except for $305 million appropriated for Pentagon repairs, these funds went directly to component accounts and their execution is captured in accounting reports. DOD noted that components report separately on obligations of these funds. In subsequent discussions, a DOD comptroller official confirmed the accuracy of our calculation that DOD had received a total of $20.5 billion. As discussed previously, our report recognizes that components report separately on obligations related to contingency operations, but that these reports do not distinguish the portion of contingency operations-related obligations funded with emergency response funds. Unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter.
At that time, we will send copies of this report to interested congressional committees with jurisdiction over DOD’s budget. Also at that time, we will send copies of this report to the Secretary of Defense; the DOD Comptroller; the Secretaries of the Army, the Navy, and the Air Force; the Director of the Defense Finance and Accounting Service; the Director of OMB; and others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov/. If you have any questions regarding this report, please contact me at (202) 512-9619 or pickups@gao.gov, or Gary Billen, Assistant Director, at (214) 777-5703 or billeng@gao.gov. Major contributors to this report are acknowledged in appendix V. To determine the extent to which the Department of Defense (DOD) adhered to the Office of Management and Budget’s (OMB) guidance for managing funds provided separately for the Defense Emergency Response Fund (appropriated in the first two emergency supplemental appropriations) and the sufficiency of DOD’s guidance to its components on the use of these funds, we reviewed the guidance provided by OMB to federal government departments and agencies and the guidance provided by DOD to its defense components for justifying their obligations funded through the emergency supplementals of fiscal years 2001 and 2002. We interviewed knowledgeable DOD officials responsible for implementing this guidance, obtained DOD reports of emergency response fund allocations to DOD component commands, and used these reports to select sites for our subsequent visits. At DOD’s component commands, we interviewed officials and obtained reports or examples of obligations (purchases). We compared selected examples of obligations to OMB and DOD guidance. We also relied on prior GAO work regarding DOD’s guidance and reporting for contingency operations. To assess DOD’s ability to track the use of emergency funds provided to DOD in the emergency supplementals of fiscal years 2001 and 2002, the supplemental for fiscal year 2002, and the DOD appropriation for fiscal year 2003, we analyzed relevant DOD financial documents, including the Office of the Secretary of Defense monthly reports allocating the funds to services and commands and the Defense Finance and Accounting Service monthly obligation reports and accounting manuals. We did not verify the accuracy and completeness of these data. We also reviewed budget and accounting procedures and documents and interviewed knowledgeable DOD officials. We performed our work at the Office of the Secretary of Defense; the Office of the Comptroller; the headquarters of the Army, the Army Reserve, the Army National Guard, the National Guard, the Navy, and the Air Force; and the following commands and centers:
Transportation Command, Scott Air Force Base, Ill.
Army Forces Command, Fort McPherson, Ga.
Army Central Forces Command, Fort McPherson, Ga.
Air Force Aeronautical Services Center, Wright-Patterson Air Force Base, Ohio
Air Force Air Armament Center, Eglin Air Force Base, Fla.
Air Force Air Combat Command, Langley Air Force Base, Va.
Navy Atlantic Fleet Command, Norfolk Naval Base, Va.
Army Tank-Automotive and Armaments Command, Warren, Mich.
Army Materiel Command, Alexandria, Va.
Air Force Materiel Command, Wright-Patterson Air Force Base, Ohio
Air Mobility Command, Scott Air Force Base, Ill.
Special Operations Command, MacDill Air Force Base, Tampa, Fla.
Pacific Command, Pearl Harbor, Hawaii
Air Force Special Operations Command, Hurlburt Field, Fla.
Army Special Operations Command, Fort Bragg, N.C.
We performed our review between March 2002 and February 2003 in accordance with generally accepted government auditing standards. As of January 2003, Congress had appropriated a total of about $38 billion in fiscal years 2001, 2002, and 2003 to fund DOD’s expenses related to the war on terrorism. As table 2 shows, Congress provided these emergency response funds in four appropriations—two emergency supplementals (fiscal years 2001 and 2002), a fiscal year 2002 supplemental, and the fiscal year 2003 Defense appropriation—and used different methods to transfer funds to DOD. Congress appropriated about $17.5 billion to fund DOD’s emergency needs in the aftermath of the September 2001 terrorist attacks during fiscal years 2001 and 2002. Of this amount, about $15 billion was eventually transferred to DOD’s Defense Emergency Response Fund. OMB, in conjunction with DOD, identified 10 broad funding categories to govern the use of these funds. While funds in the Defense Emergency Response Fund were obligated for similar types of requirements funded under several of DOD’s regular appropriation accounts, such as for operation and maintenance and military personnel expenses, the 10 categories do not directly correlate with DOD’s existing appropriation account structure. Figure 2 lists the Defense Emergency Response funding categories and provides examples of DOD’s regular appropriation accounts. In an emergency supplemental appropriation for fiscal year 2002, Congress appropriated $13.4 billion in emergency response funds, of which $11.3 billion was placed in the Defense Emergency Response Fund for subsequent transfer to DOD’s regular appropriation accounts. Furthermore, Congress designated the distribution of these funds by DOD component, appropriation account, and purpose. Figure 3 provides an example of how Congress designated the use of fiscal year 2002 emergency response funds for the Air Force. In fiscal year 2003, Congress appropriated $7.1 billion in emergency response funds to DOD as part of DOD’s regular appropriation, and these funds were appropriated directly to DOD’s regular appropriation accounts. In contrast to the fiscal year 2002 emergency supplemental, Congress provided more detail in designating the distribution of fiscal year 2003 emergency response funds. In the conference report accompanying the fiscal year 2003 appropriation act, Congress designated specific funding levels by appropriation account, DOD component, budget activity, and subactivity group. Figure 4 provides an example of how Congress designated funding for the Air Force. In a September 14, 2001, memorandum, OMB provided the heads of federal departments and agencies with the following guidelines and criteria for requesting emergency funding related to the terrorist attacks of September 11, 2001. (1) The damage to be repaired must have been directly caused by the terrorist acts. (2) The absence of funding, and consequently a delay in damage repair, protection, or other activities, would result in significant economic loss/hardship, attack risk, or human endangerment/suffering, including the cost of enhanced security and relocation of employees to secure sites. (3) Any action ordered by the President to respond to the national security consequences of the events of September 11, 2001. (4) The requirement is known, i.e., not a speculative need. (5) The requirement is urgent, i.e., could not reasonably be handled at a later time.
(6) The activity to be performed is an appropriate federal role and reflects an appropriate sharing of responsibility among state, local, private, and federal entities. (7) The level of funding is limited to the amount necessary to restore the entity/facility to current standards and requirements (e.g., damage to a 1950s building would be repaired using current building codes and standards and guidelines for counter-terrorism defense). (8) The requirement is not competitive with or duplicative of activities of other agencies with statutorily mandated disaster assistance programs such as Small Business Administration and Federal Emergency Management Agency. (9) The requirement cannot reasonably be met through the use of existing agency funds, e.g., through reprogramming actions or the use of other emergency funds. (10) Funds should address specific deficiencies, encountered or identified to prevent events such as those that occurred on September 11, 2001, and may include expenditures for: law enforcement and investigative activities; general preparation and response (planning, training, equipment, and personnel); physical protection of government facilities and employees; physical protection of the national populace and infrastructure; and governmental awareness of potential threats. (11) Funds can be used to enhance U.S. abilities to interdict terrorist threats. (12) The activity to be performed is an appropriate federal role and reflects an appropriate sharing of responsibility among state, local, private, and federal entities. (13) The requirement is urgent, i.e., could not reasonably be handled at a later time. (14) Activities are not competitive with or duplicative of activities of other agencies with statutorily mandated preparation programs such as DOD and Federal Emergency Management Agency. (15) The requirement cannot reasonably be met through the use of existing agency funds, e.g., through reprogramming actions or the use of other emergency funds. In addition to the names above, the following individuals made significant contributions to this report: Nancy Benco; Bruce Brown; George Duncan; Harry Jobes; Tom Mahalek; Charles Patton, Jr.; Kenneth Patton; and James Reid.
As of January 2003, Congress had provided a total of $38 billion to the Department of Defense (DOD) to cover emergency response costs related to the war on terrorism. Appropriated in different ways in fiscal years 2001, 2002, and 2003, these funds are meant to pay for expenses that DOD would not normally incur, such as contingency military operations and Pentagon building repairs. Because GAO’s prior work raised questions about DOD’s oversight of contingency fund spending, GAO was asked to review DOD’s management of emergency response funds, specifically: (1) DOD’s adherence to Office of Management and Budget (OMB) guidance in managing funds and the sufficiency of DOD’s guidance on the use of these funds, and (2) DOD’s ability to track the use of emergency response funds in general. GAO limited its review of DOD’s guidance to the initial funds placed in the Defense Emergency Response Fund. GAO did not verify the accuracy of the data contained in DOD’s obligation reports or the appropriateness of individual expenditures. While DOD followed OMB’s guidance in managing the initial $15 billion in war on terrorism funds that were placed in the Defense Emergency Response Fund in fiscal years 2001 and 2002, DOD provided its components with limited guidance on how to use these funds. DOD allocated the funds according to OMB’s 10 funding categories. However, DOD’s designations of allowable line items for each category were broad and, thus, could be interpreted in different ways. Also, while OMB directed that the funds were to be used for urgent and known needs, DOD did not define those needs further. Finally, DOD directed the components to use an internal financial management regulation for contingency funding to determine whether costs were incremental; however, as GAO has reported previously, this regulation is insufficient for this purpose. In the absence of detailed guidance, military officials sometimes had to use their best judgment in obligating emergency response funds. DOD’s ability to track the use of emergency response funds has varying limitations depending on the appropriation. For the fiscal years 2001 and 2002 emergency response funds managed separately in the Defense Emergency Response Fund ($15 billion), DOD can report a breakdown of obligations by its 10 funding categories, but cannot correlate this information with its appropriation account structure. For emergency response funds provided in fiscal years 2002 and 2003 ($20.5 billion) that were transferred into or placed directly into DOD’s regular appropriations accounts, DOD cannot use its accounting system to track the use of these funds because they are commingled with those appropriated for other purposes. While DOD has an alternative process intended to track obligations for contingency operations related to the war on terrorism, it cannot identify the portion of obligations that are funded with emergency response funds. DOD acknowledged these limitations and, in December 2002, began requiring additional reporting on the use of these funds. DOD partially concurred with this report, noting it clearly told components to use DOD’s financial regulation for guidance and also held meetings for clarification. DOD agreed funds were commingled, but noted it had a process to track incremental costs for the war on terrorism.
Chairman Stevens, Chairman McHugh, and Members of the Subcommittees: We appreciate the opportunity to participate in this hearing on how the reform experiences of other countries’ postal administrations may relate to ideas and proposals for reform of the U.S. Postal Service. We will discuss experiences of other postal administrations that are particularly relevant to any future decisions by Congress affecting (1) public service obligations, such as universal service and uniform rates; (2) the postal monopoly; and (3) regulation of postal prices. My testimony is based primarily on our past and ongoing work relating to the responsibility of the U.S. Postal Service to provide uniform service to all communities in an increasingly competitive postal environment, as well as on issues involving the postal monopoly and postal rate setting in this country. We have also done limited work on other countries’ postal administrations. To date, we have focused most of our attention on Canada Post. Canada’s experience is especially relevant because of its proximity to the United States and its similarities in geographic size, business environment, and market-oriented economic systems. I will also refer to postal administrations in seven other countries on which we obtained data: Australia, France, Germany, the Netherlands, New Zealand, Sweden, and the United Kingdom. These countries, along with Canada, have been described by Price Waterhouse in a recent study as among the most “progressive postal administrations,” and most of them have undergone reforms that changed their structure and operations in the past decade. Our testimony relating to other countries’ experiences is based primarily on that study as well as data readily available from the other countries’ postal administrations. While we believe that the overall experiences of other countries’ postal administrations are relevant to the current discussions of postal reform in the United States, meaningful comparisons of the specific operational practices followed and performance results can be difficult. Compared to each of the eight other postal administrations, the U.S. Postal Service has at least seven times the mail volume and at least twice the number of employees. All eight postal services combined have only one-half of the U.S. Postal Service’s mail volume, and just slightly more than its total number of employees. The U.S. Postal Service handled about 180 billion pieces of mail in fiscal year 1995 and had over 850,000 employees in December 1995. By comparison, Canada Post has about 6 percent of the U.S. Postal Service’s mail volume and about 6 percent of its number of employees. I have appended to my statement two graphics that illustrate the differences in mail volume and employment between the U.S. Postal Service and the other eight postal administrations. Notwithstanding the differences in workforce size and mail volume, other countries’ experiences with granting their postal administrations greater commercial freedom are relevant to current consideration for granting such freedom in the United States. For example, in 1992, we issued a report describing how the competition from both private firms and electronic communication, particularly in the expedited-service mail and package-delivery markets, may create the need for statutory changes.
Similarly, according to Price Waterhouse’s February 1995 report, while many factors are driving postal reform in other countries, the increase in competition in the delivery and communications markets has, above all else, driven the changes. Various parties, including some Members of Congress and the Postmaster General, have called for fundamental changes in the laws and regulations governing the U.S. Postal Service. The Postmaster General has said that the Postal Service needs greater freedom to set postage rates, manage the postal workforce, and introduce new products and services. Private delivery firms and U.S. mailers say they want more freedom to deliver letters now protected by the statutory monopoly. In recent hearings, Congress has been presented with many ideas and some specific proposals for reforming and privatizing the Postal Service. The 1970 Postal Reorganization Act, which created the U.S. Postal Service, was the most recent major change to the laws governing the structure and operation of the postal administration in the United States. Major change has occurred more recently for some foreign postal administrations. In the past decade, a number of other countries have restructured postal administrations from entities subject to close governmental control to entities that are still owned by the government, but with less governmental control over day-to-day practices. For example, in 1981 Canada established the Canada Post Corporation, an entity owned by the Canadian government but freed from many government regulations. Reform of postal administrations also took place in New Zealand in 1987, in Australia and the Netherlands in 1989, in France in 1991, in Sweden in 1994, and in Germany in 1995. Following these reforms, postal administrations in many of these countries reported significant improvements in financial performance and service delivery. We will not discuss their performance or the effects of postal reform in detail. However, I will highlight a key common feature—universal service—of the U.S. and other postal administrations after reform. I will also highlight variances in the characteristics of their monopolies and their ability to set postal prices. We believe that these three areas—universal service, the mail monopoly, and ratemaking—will be among the most challenging for Congress to address in any future reform of the U.S. Postal Service. The primary mission of the U.S. Postal Service, as it now exists in law, is to provide mail delivery service to persons in all communities and access to the mail system through post offices and other means. The rate for First-Class mail, i.e., letters “sealed against inspection,” must be uniform for delivery anywhere in the United States. The U.S. Postal Service generally offers delivery to both urban and rural addresses six days a week. Any consideration of reforming the U.S. Postal Service will require a careful review of, and no doubt much debate on, how the current universal service mandate will be affected. In all of the other eight countries, the postal administrations provided certain services widely to their citizens and at uniform rates before reform and continued to provide them following reform. However, the definition of universal mail service varies somewhat from country to country. Some of the countries provided the same level of service for urban and rural customers, while others had different service standards for urban and rural areas.
For example, although Canada Post is required by law to maintain service that meets the needs of Canadian citizens, the service only needs to be similar for communities of the same size. Canadian citizens in very remote areas in the far north may receive mail delivery less frequently each week than those in some other areas of Canada. In some countries, changes in universal service practices, involving such areas as the frequency of delivery and access to post office services, have been controversial. For example, in New Zealand, citizens in rural communities were upset when they learned New Zealand Post wanted to discontinue delivery services to rural addresses. The Post then increased a long-standing rural delivery fee for service, paid by the addressee; this decision proved unpopular, and the fee was eliminated in 1995. There continues to be no rural delivery fee in New Zealand. Accessibility to postal services, which includes maintenance of local post offices in the United States, is also part of the public service obligation of postal administrations in some other countries. The U.S. Postal Service must follow strict legal criteria in determining whether to close post offices. In New Zealand, the postal administration has negotiated a written agreement with the government that specifies the minimum number of postal retail outlets. In the Netherlands, Dutch law specifies minimum requirements regarding the density of post offices in urban and rural areas. Five of the eight countries’ postal administrations differ from the U.S. Postal Service in that a majority of their postal retail outlets are privately owned and operated, according to the February 1995 Price Waterhouse report. This group includes Australia, Canada, the Netherlands, New Zealand, and the United Kingdom. Except for the French postal administration, all of the eight foreign postal administrations have some form of franchising policy for postal retail services. Like the U.S. Postal Service, other postal administrations have also continued to provide certain subsidized services. For example, in Canada, the government compensates Canada Post for providing subsidized rates for publications, parliamentary mail, and literature for the blind. In Sweden, the government subsidizes certain services, such as free delivery of literature to the blind, while the postal service subsidizes the distribution of certain newspapers and provides discounts on association letters. We plan to issue a report shortly on the U.S. Postal Service’s role in the international mail market, including issues that have been raised by both the U.S. Postal Service and its major competitors, such as Federal Express and DHL Airways. The Postal Service participates in the Universal Postal Union, a specialized agency of the United Nations that governs international postal services. Its basic purpose is to help postal administrations fulfill statutory universal service obligations on an international level. A total of 189 Universal Postal Union member countries have agreed to accept mail from each other and to deliver the international mail to its final destination. The Postal Service has said that current universal service obligations and related public service mandates can only be met if its markets continue to be statutorily protected by the Private Express Statutes that provide the Service with a monopoly over letter mail. 
We plan to issue a report in the coming months that discusses the Postal Service’s monopoly in detail, including the growth since 1970 of private delivery firms that are competing and will likely compete more strongly in the future for some of the Service’s First-Class, Priority, and Third-Class mail. The postal monopoly is defined differently and varies widely in scope among the eight foreign postal administrations. In this country, the letter mail monopoly helps ensure that the Postal Service has sufficient revenues to carry out public service mandates, including universal service. The U.S. postal monopoly covers all letter mail, with some key regulatory exceptions being “extremely urgent” letters (generally next-day delivery) and outbound international letters. Postal Service data indicate that, in fiscal year 1995, at least 80 percent of the Postal Service’s total mail volume was covered by the postal monopoly. All but one (Sweden) of the eight countries’ postal administrations have monopolies over some aspects of the letter mail. Generally, the letter monopolies in other countries are defined according to price, weight, urgency of delivery, or a combination of these factors. For example, in Canada, the postal monopoly covers letters, with a statutory exclusion for “urgent” letters transmitted by a messenger for a fee that is at least three times Canada’s regular rate of postage. In Germany, the monopoly covers letters priced up to 10 times Germany’s standard letter rate. The postal monopoly in France covers letters and those parcels weighing less than 1 kilogram (2.2 pounds). In the United Kingdom, the monopoly is defined by price, covering those letters and parcels with postage up to one British pound. Australia and New Zealand narrowed the scope of their postal monopolies after reform. For example, in Australia, the monopoly price threshold was reduced in 1994 from 10 times the basic stamp price to 4 times the price. Other changes were also made, such as reducing the weight threshold from 500 grams to 250 grams and excluding outbound international mail. Australia Post reported in its 1994 annual report that these changes “will reduce the proportion of total business revenue from reserved services from around 60 percent to about 50 percent.” It now receives a majority of its revenues from services open to competition. Australia plans a review of the remaining postal monopoly during 1996-1997. In New Zealand, the monopoly price threshold was reduced in phases over 3 years, and the government announced in November 1994 that it would introduce legislation to completely deregulate the postal market. While no final decision has been made, New Zealand Post officials said last year that they had shaped their business plans to expect an open, competitive environment. Sweden has eliminated its postal monopoly. Full competition for all postal and courier services, including the delivery of letters and parcels, has been allowed in Sweden since January 1, 1994. Sweden Post officials told us that its monopoly offered little protection of postal revenue and that enforcement was not cost-effective. The Swedish government, not the postal administration, has the obligation to provide universal mail service. The U.S. Postal Service and some other postal administrations have made efforts to enforce their postal monopolies.
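To make the variation in these definitions concrete, the following sketch is ours and is not drawn from any postal administration’s rules; it expresses two of the price-based thresholds described above as simple tests, using hypothetical postage rates.

```python
# Illustrative sketch only -- simplified expressions of the price-based
# monopoly thresholds described above, with hypothetical postage rates.

def within_german_monopoly(letter_price: float, standard_rate: float) -> bool:
    # Germany: the monopoly covers letters priced up to 10 times the
    # standard letter rate.
    return letter_price <= 10 * standard_rate

def canadian_urgent_exclusion(courier_fee: float, regular_postage: float) -> bool:
    # Canada: "urgent" letters are excluded when carried for a fee at
    # least three times the regular rate of postage.
    return courier_fee >= 3 * regular_postage

print(within_german_monopoly(letter_price=5.00, standard_rate=1.00))      # True
print(canadian_urgent_exclusion(courier_fee=3.00, regular_postage=0.90))  # True
```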
The U.S. postal monopoly has proved difficult to enforce for a number of reasons, including objections by both mailers and competitors to the Postal Service’s audits and other enforcement actions. We were informed by Canada Post officials that Canada Post also finds its monopoly difficult to enforce. They said that, while Canada Post has taken legal action against major violators of its postal monopoly, it prefers to use other means of persuasion to get violators to comply with the law. Enforcement problems can also be related to the way the postal monopoly is defined. For example, in France, an exclusion limits the letter mail monopoly to private correspondence. Because letters are sealed against inspection, it is impossible to determine whether they are private correspondence, making enforcement difficult. Finally, a monopoly on mail box access in the United States is related to the Postal Service monopoly on delivery of letter mail. By law, mail box access is restricted to the Postal Service. In contrast, none of the eight countries we reviewed have laws that give their postal administrations exclusive access to the mail box. We issued a report late last year on postal ratemaking that updated our 1992 report, saying that, if the Postal Service is to be more competitive, it will need more flexibility in setting postal rates. In our opinion, legislative changes to the 1970 Act’s ratemaking provisions may be necessary in order to give the Postal Service greater flexibility in setting rates. In our 1992 report, we said that Congress should reexamine the 1970 Act to (1) determine whether volume discounting by the Postal Service would be considered a discriminatory pricing policy and (2) clarify the extent to which demand pricing should be considered in postal ratemaking. In our latest report, we reiterated these points and also discussed alternatives that Congress could consider for improving the ratemaking process. Postal administrations in the other eight countries appear to have greater freedom to establish and change postal rates than does the U.S. Postal Service. In Canada, only certain rates, mainly those for full-price letter mail and the mailing of publications at government-subsidized rates, must be approved by the Canadian government. In addition, rate proposals are not subject to review by an independent regulatory body as they are in the United States. In Canada, interested parties have an opportunity to provide information, but the rate-setting process is not public, and parties do not have access to costing data or other information underlying postal rates. In Sweden, the postal administration is free to set all prices except for the standard domestic letter; the government and the postal administration have agreed to a price cap on the domestic letter rate equal to the standard consumer rate of inflation. Similarly, in New Zealand, the postal administration is free to set prices except for standard letters, which are subject to a price cap of the country’s Consumer Price Index minus 1 percent. The Australian postal administration sets its own prices. The government can “disapprove” of the basic postage rate proposed by Australia Post. In addition, Australia Post must notify an independent authority of proposed increases in the prices of monopoly services. The authority has only an advisory role and in the past has instituted inquiries, lasting up to 3 months, into proposed increases.
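The Swedish and New Zealand price caps amount to a simple formula. The following worked sketch is ours, using hypothetical rates and inflation figures; only the cap structures come from the administrations’ rules as described above.

```python
# Illustrative sketch only -- hypothetical rates and inflation figures.

def capped_rate(current_rate: float, cpi_change: float, offset: float = 0.0) -> float:
    """Maximum standard-letter rate allowed under a CPI-based price cap."""
    return current_rate * (1 + cpi_change - offset)

# Sweden: the domestic letter rate is capped at the consumer rate of inflation.
print(capped_rate(1.00, cpi_change=0.03))               # 1.03
# New Zealand: standard letters are capped at CPI minus 1 percent.
print(capped_rate(1.00, cpi_change=0.03, offset=0.01))  # 1.02
```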
Finally, while we have focused on the three complex and interrelated issues of universal mail service, the postal monopoly, and postal rate setting, there are other issues that will also require reexamination in any future reform initiative. These include, but are not limited to, the quality of the Postal Service’s labor relations. We previously reported that Congress may need to reconsider the collective bargaining provisions of the 1970 Act if the Postal Service and its major employee organizations are unable to resolve some long-standing problems. As the Congress continues its deliberations on postal reform, we believe that it is important to examine the interrelationships of these issues and how changes addressing them may affect postal operations and related services to the American public and business. This concludes my prepared statement. I would be happy to respond to your questions.
GAO discussed other countries' postal service reform efforts and how their experiences may relate to U.S. Postal Service reform. GAO noted that: (1) the U.S. Postal Service handles over seven times the mail volume and has at least twice as many employees as the 8 foreign countries reviewed; (2) increased competition from private delivery and communications networks has prompted proposals to change U.S. postal law to allow more competitive flexibility; (3) the 8 foreign governments have reduced their control over their postal administrations' day-to-day practices, which has led to improved service and financial performance for some of them; (4) universal mail service remains a common goal among these administrations, but their definition of it varies; (5) unlike the United States, 7 of the countries have franchising policies for postal retail services; (6) all of the countries provide certain subsidized mail services; (7) all of the countries except Sweden have postal monopolies, which vary widely in definition and scope; (8) some foreign countries have given their postal administrations greater freedom to set postal rates; (9) the overall experience of other countries' postal administrations is relevant to any U.S. postal reforms; and (10) any reform initiative must consider the quality of the Postal Service's labor relations.
Social Security benefits are based on a worker’s lifetime earnings in covered employment. As the agency responsible for issuing SSNs and paying retirement, survivors, and disability benefits to insured persons, SSA must have accurate records of every worker’s earnings. Inaccurate earnings records can create benefit payment errors.

Through a process known as enumeration, SSA assigns a unique SSN to each individual who meets the requirements for one. Currently, SSNs are issued to most U.S. citizens at birth. They are also available to noncitizens lawfully admitted to the United States with permission to work. Lawfully admitted noncitizens may also qualify for an SSN for nonwork purposes when a federal, state, or local law requires that they have an SSN to obtain a particular public benefit or service. SSA must obtain documentary evidence from such applicants regarding their age, identity, U.S. citizenship or lawful alien status, and whether they were previously assigned an SSN.

SSA maintains a historical record of each worker’s annual earnings, which is identified by the worker’s name and Social Security number. The earnings reporting process begins at the end of each calendar year, when employers submit reports of their workers’ earnings to SSA on IRS Form W-2 (Wage and Tax Statement). To prepare the W-2, employers generally use certain information that workers provide on Form W-4, which is the document that determines the amount of federal income taxes that will be withheld from the worker’s pay. If the SSN and name on an earnings report submitted by the employer do not match information in SSA’s Master Earnings File (MEF), the reported earnings are placed in the ESF, which is a repository for earnings reports for unidentified workers. The ESF is an online file that can be updated throughout the day by all SSA field offices and various centralized components, although the updates themselves are applied in batch mode. Earnings reports are removed from the ESF only when a report can be matched and posted to a worker’s MEF record, a process SSA terms “reinstatement.” Thus, the number of reports in the ESF on a given day fluctuates as earnings are reinstated to the correct Social Security records. Table 1 shows the ESF reports remaining since the inception of the Social Security program, listed by decade of the tax year to which each report applied.

SSA uses various processes to post reported workers’ earnings to valid Social Security records. Generally, employers send SSA one W-2 each year that reports the annual earnings for each of their workers. Upon receipt of these earnings reports, SSA electronically validates whether it has established a Social Security record for the reported SSN and surname shown on the W-2. SSA does this by electronically matching the worker’s surname and SSN on the W-2 to information in its number identification file (Numident), which contains demographic information about every SSN holder. When the SSN and the first seven characters of the surname are identical on the W-2 and the Numident file, SSA posts those earnings to the indicated record in its MEF. SSA is able to place about 90 percent of employer-submitted earnings reports received each year in an appropriate MEF record. (Fig. 1 shows SSA’s process in more detail.)
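To make this initial validation step concrete, a minimal sketch in Python follows; the record layouts and field names are illustrative assumptions, not SSA’s actual file formats.

```python
# Minimal sketch of the initial validation test described above: a W-2 is
# posted only when the reported SSN exists in the Numident and the first
# seven characters of the surnames agree. Field names are illustrative.

def initial_match(w2: dict, numident: dict) -> bool:
    """Return True if the W-2 passes the initial validation test."""
    record = numident.get(w2["ssn"])
    if record is None:
        return False  # reported SSN is not on file
    return record["surname"][:7].upper() == w2["surname"][:7].upper()

numident = {"123456789": {"first_name": "MARY", "surname": "JOHNSON"}}
w2 = {"ssn": "123456789", "surname": "Johnsonn", "wages": 31000.00}
print(initial_match(w2, numident))  # True: first seven characters agree
```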
For the 10 percent of the reports that fail this initial validation test, SSA performs more than 20 of what it calls “front-end validation routines,” which manipulate either the reported name or the SSN in a variety of ways to correct common reporting mistakes so that SSA can find an MEF record and prevent the posting of the earnings to the ESF. SSA’s front-end routines identify Social Security records for about 60 percent of the reports that are initially categorized as mismatches each year. In manipulating worker information to find a valid record, the automated front-end routines assume either that the SSN is correct and there is a problem in the reported name, or vice versa. For example, one front-end routine for reported name errors tests whether the first name and the surname have been reversed on the employer-filed W-2. The name reversal routine compares the first name from SSA’s Numident file with the surname on the W-2. If they are the same and the first initials for the middle name and surname match the information on the W-2, then SSA assumes it has found the proper record and posts the earnings. Other front-end routines check whether digits in the SSN are transposed or inaccurate or whether the name on the report contains transposed or missing letters. Another front-end routine searches previously reinstated items for the same reporting error that occurs in the current year’s earnings report. Over the past several years, such routines have allowed SSA to post an annual average of 15 million earnings reports to individual MEF records, rather than to the ESF. Table 2 summarizes the performance of these front-end routines for the past 5 reporting years. It shows that SSA found about 76 million valid records for reported earnings for tax years 1998 to 2002.

If the front-end routines do not identify a valid record, SSA posts the earnings in the ESF. SSA subsequently performs what it calls “back-end” processes on the items, consisting of electronic and manual actions to match the earnings to a worker’s MEF record. For one such process, SSA uses corrections of reported names and SSNs generated under IRS’s automated routines. SSA then attempts to find the W-2 in the ESF and validate the corrected name and SSN that IRS provides against SSA’s records; when both conditions are met, SSA accepts IRS’s corrections and reinstates the item to the worker’s record. Under another process, which SSA calls decentralized correspondence, or DECOR, SSA sends letters to the address listed on each invalid W-2 (or to the employer if the W-2 lacks a valid address), seeking information to resolve the identity issue. If the worker does not respond, SSA then sends a letter to the employer that filed the report soliciting assistance in resolving the problem. Other types of correspondence involve Young Child Earnings Records and Earnings After Death, for which SSA sends letters to employers and/or the persons whose SSNs appear on the reports; SSA automatically posts such earnings reports to the ESF because the persons named in the reports either (a) are 6 years of age or younger (and thus unlikely to have earnings through employment) or (b) have a date of death recorded on their Numident record for a year prior to the tax year for which the earnings on Form W-2 were reported. Upon receiving satisfactory documentation clarifying the earnings and linking them to the proper SSN, SSA reinstates the earnings reports to the individuals’ MEF records.
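As an illustration of the front-end logic, a rough sketch of the name-reversal routine described earlier follows; the exact comparison rules are internal to SSA, so the details here are assumptions rather than the actual routine.

```python
# Rough sketch of the name-reversal front-end routine: test whether the
# first name and surname were swapped on the W-2. Per the description
# above, the Numident first name must equal the W-2 surname, and the
# first initials of the remaining name parts must also line up.

def name_reversal_match(w2: dict, record: dict) -> bool:
    """Return True if the W-2 name looks like a first/last reversal."""
    if record["first_name"].upper() != w2["surname"].upper():
        return False
    # With the names swapped, the W-2 first-name field should hold the
    # surname, so compare first initials across the swapped fields.
    return (w2["first_name"][:1].upper() == record["surname"][:1].upper()
            and w2["middle"][:1].upper() == record["middle"][:1].upper())

record = {"first_name": "MARY", "middle": "L", "surname": "JOHNSON"}
w2 = {"first_name": "Johnson", "middle": "L", "surname": "Mary"}
print(name_reversal_match(w2, record))  # True: the names were reversed
```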
SSA also uses another back-end process, known as FERRET, which compares worker addresses on the W-2 with addresses that IRS has from individual tax returns. In its Operation 30 routine, SSA staff compare ESF earnings reports that have valid SSNs against information in the Numident record. Staff check whether nicknames, surname derivations, and other obvious mistakes in spelling might be the cause of the posting problem. Table 3 shows that in 2001 (the last year for which data were available), selected back-end routines reinstated almost 600,000 earnings reports totaling almost $4 billion.

SSA has two other electronic back-end routines that have produced a large number of reinstatements. In a process called SWEEP, SSA periodically reruns ESF items through its records to determine whether updated information has been added to the Numident or whether newly developed validation routines might permit reinstatements. In 2003, SSA reinstated 123,741 items through SWEEP, covering tax years 1977-2001. GAP SWEEP is a newly developed routine that scans the ESF for earnings records with valid SSNs and assesses whether yearly gaps in earnings exist in the MEF record that might be linked to similar earnings in the ESF. If a link can be made, SSA uses slightly less stringent name match rules; if the name can be validated, the item is reinstated. As of May 2003, SSA had reinstated over 1.5 million items through the GAP SWEEP routine (across all tax years back to 1937), representing $6.1 billion in earnings.

Still another back-end process involves a manual review of worker-submitted evidence and a check of automated data. Workers (and their dependents and survivors) may visit local SSA offices to have earnings reinstated through the Item Correction (ICOR) process. Individuals provide SSA staff with evidence, such as W-2s, earnings statements, and tax returns, to document earnings that are missing from their Social Security record. Upon receiving adequate proof that links an earnings report to the individual, SSA field staff manually reinstate the earnings, subject to an accuracy check by a peer or supervisor. SSA provided information indicating that in fiscal year 2003, field staff made about 244,000 earnings reinstatements through the ICOR process. Furthermore, each year SSA mails a Social Security statement to workers and former workers age 25 and over who are not yet receiving benefits. The statement lists the amount of earnings posted to the person’s Social Security record by year and encourages persons to contact SSA about any missing earnings. Such earnings might have been placed in the ESF because of a name or SSN mismatch. Reinstatements related to Social Security statements are included in the ICOR data discussed above.

Earnings reports in the ESF have serious data problems. Such problems include missing SSNs or names, never-issued numbers, and employer use of the same SSN to report earnings for multiple workers in a single tax year. In addition, a small portion of employers account for a disproportionate number of ESF reports, and employers in certain industry categories are more likely than others to submit reports with invalid worker identity information. Of the 84.6 million reports in the ESF for the 16 tax years that we examined (1985-2000), some of the more serious or obvious problems were that 8.9 million had all zeros in the SSN field and 1.4 million had reported SSNs that were never issued. In addition, over 270,000 of the reports had various name problems.
For example, 60,476 had no surname; 261,744 had no first name; and 3,760 contained nonalphabetic characters in the name field, such as ?, /, %, <, &, *, and @, even though SSA has developed automated routines to delete such characters from the name field.

For the 16-year period we examined, we also found that some employers used one SSN to report earnings for more than one worker in a given tax year. Table 4 depicts the number of times that employers used one SSN to report earnings of multiple workers in a tax year. For example, one case we found involved 10 W-2s in the ESF under one SSN for tax year 2000 from one employer. Each of the reports under the SSN had a different name and different earnings, and together the earnings on the 10 reports totaled about $44,000. Table 4 shows that, on 308 occasions over the 16-year period of analysis, an employer used one SSN on 10 different reports in a single tax year; these reports accounted for 3,080 W-2s with $4.7 million in earnings recorded in the ESF. Table 4 also shows that most employers using one SSN to report earnings for multiple workers did this for relatively few reports (from 2 to 9). However, a few employers used one SSN for over 100 reports (128 separate occurrences; see shaded area of table 4). The most egregious case that we identified involved an employer who used one SSN for 2,580 different earnings reports in a tax year.

Some employers exhibited a pattern of such errors year after year. Between 1985 and 2000, about 61,000 employers used one SSN for more than one worker in multiple tax years. Table 5 shows that most employers using one SSN to report earnings for multiple workers did this in a period ranging between 1 and 9 years. However, slightly over 1,000 employers used one SSN to report the earnings of more than one worker in 10 or more of the 16 tax years that we analyzed (see shaded area of table 5). We found that 43 employers did this in every year of the 16-year period we analyzed.

Most employers accounted for a relatively small share of the total earnings reports in the ESF. For example, table 6 shows that 3.4 million employers had fewer than 10 reports for the period we analyzed. In contrast, while only about 8,900 employers (0.2 percent of all employers with reports recorded in the ESF for tax years 1985-2000) had 1,000 or more reports in the ESF, they accounted for over 30 percent of the total number of ESF reports (see shaded area of table 6). One measure of employer reporting problems is to identify those that, year after year, submit reports that are posted to the ESF. For example, relatively few employers (about 24,000) had a report in each of the 16 years, accounting for a total of 14.6 million reports. Although those 24,000 employers represented only 0.5 percent of all employers, they had submitted about 17 percent of the total number of reports. In addition, we found that employers with a high number of reports in the ESF had a consistent pattern of misidentifying their workers on their annual earnings reports to SSA. For example, one employer averaged about 13,300 reports placed in the ESF per year over the period we analyzed, ranging from a low of 5,971 to a high of 33,448.

Finally, certain types of businesses appear to be disproportionately associated with earnings reports in the ESF. We obtained data from SSA that described the types of businesses for 1.8 million of the 4.3 million employers with earnings reports in the ESF for the period examined.
Figure 2 shows that of the 83 broad industry categories, 5 alone accounted for 43 percent of these reports: eating and drinking establishments, construction and special trades, agricultural production-crops, business service organizations, and health service organizations. Our analysis of industry types may not be representative of all 4.3 million employers with reports in the ESF, because information on the industry categories for the other 2.5 million employers was not available. However, it is consistent with an analysis reported by SSA’s OIG. In September 1999, the OIG examined earnings reports from the 100 employers with the most suspended wage items. OIG reported that 67 percent of these employers were in industries that it categorized as services, restaurants, and agriculture. It also noted that SSA’s experience is that employers who rely on a workforce consisting of relatively unskilled or migrant workers are the major source of suspended earnings reports.

SSA successfully reinstates a substantial number of earnings reports associated with frequently used SSNs in the ESF. Overall, the majority of reinstated earnings we examined were posted to the Social Security records of U.S.-born workers. In recent years, however, the number of foreign-born workers receiving reinstatements from these SSNs has grown significantly. Further, our analysis indicates that the reinstated earnings for foreign-born workers may often relate to unauthorized employment.

To obtain information about reinstatements made to repeatedly used SSNs, we analyzed the 295 SSNs that appeared most frequently in the ESF for tax years 1985-2000. Each of these SSNs had 1,000 or more earnings reports posted to it for these 16 tax years. Of the reports associated with these SSNs since 1937, SSA reinstated 13.1 million to the records of about 11.7 million individuals. Overall, the 295 SSNs have about 9.58 million reports for which the actual worker is still unidentified, representing about $14.5 billion in unposted earnings. Of these 9.58 million reports that remain in the ESF, about 8.9 million are under the all-zero SSN (000-00-0000). The average unposted earnings amount associated with the 9.58 million reports was about $1,513. However, the range was wide. Table 7 shows that almost 25 percent of the reports had unposted earnings of $100 or less, and about 3 percent had unposted earnings over $10,000. About 84 percent of the reports had earnings of $2,000 or less.

Since 1937, SSA has made 13.1 million reinstatements of earnings from the 295 SSNs to 11.7 million different persons. SSA maintains limited data on the characteristics of persons who receive reinstatements. However, the data did allow us to document an individual worker’s gender, birth date, and country of birth, as well as when his or her SSN was issued. Overall, about 59 percent of these recipients were male. About 10.5 million, or 90 percent, of all persons receiving the reinstatements were born in the United States; males also represented about 59 percent of this U.S.-born population. For those who are still living (10 million), the median age of U.S.-born persons with reinstatements was 49. The remaining 1.2 million persons were born in other countries, with Mexico being the predominant country of birth (about 26 percent of all foreign-born). About 62 percent of the reinstatements to foreign-born workers went to men.
The median age of the foreign-born recipients of reinstated earnings who were still living was 53. The data show that U.S.-born workers are the primary recipients of reinstatements associated with the 295 SSNs we analyzed. However, when we examined reinstatement activity in later years, the percentage of foreign-born persons receiving such reinstatements had grown over time. For example, table 8 shows that the share of reinstatements going to foreign-born recipients had more than doubled by 1989 from about 8 percent before 1986, and by 2003 had grown to nearly 21 percent. This percentage is higher than the estimated 14 percent of the current U.S. labor force that is foreign-born. Our analysis also shows that foreign-born recipients of recent reinstatements from the 295 SSNs we analyzed are predominantly male (about 65 percent). The top four countries of birth for workers who received reinstatements were Mexico, Canada, Germany, and Cuba. Workers from these four countries represented nearly 40 percent of all foreign-born individuals receiving reinstatements from the SSNs we analyzed. Table 9 shows the top 10 countries of birth for the foreign-born persons with reinstatements we analyzed.

In addition to the growth in the percentage of foreign-born persons receiving reinstatements, the extent of probable unauthorized work related to such reinstatements has been growing. To work legally in the United States, a person must have a valid SSN. Thus, any earnings reports filed for a tax year before a worker’s valid SSN was actually issued by SSA are potential indicators of unauthorized employment. Data we analyzed show that historically about 7 percent of foreign-born workers with reinstatements from repeatedly used SSNs had earnings prior to SSN issuance. However, when we examined reinstatement activity associated with more recent work years and earnings, the percentage of reinstatements to foreign-born persons with work activity prior to SSN issuance was significantly higher: an average of about 32 percent of such reinstatements occurring between 1986 and 2003 (see table 10). Further, in some years, these reinstatements for potentially unauthorized work exceeded 50 percent of all reinstatements to foreign-born recipients.

Current employer requirements for obtaining and reporting worker identity information create an environment in which inaccurate or false names and SSNs can be used for employment purposes, leading to difficulties associating reported earnings with the correct Social Security record. In addition, even though IRS can penalize employers for failing to file complete and correct information on Form W-2 and DHS can examine and penalize problem employers’ hiring practices, enforcement efforts have been limited, which may facilitate careless reporting. Finally, although employers have access to several systems to verify worker names, SSNs, and work authorization status, these systems have limitations and are underutilized.

Both IRS and DHS have requirements that employers must follow when gathering or reporting key information supplied by newly hired workers. IRS requires workers to complete an IRS Form W-4, which identifies workers for tax withholding purposes, and DHS requires workers and employers to complete a DHS Form I-9 (Employment Eligibility Verification Form) for identity and work authorization. IRS regulations permit employers to use information on the I-9 to identify workers.
However, these requirements are limited and do not provide reasonable assurance that workers’ names, SSNs, or work eligibility status will be accurately obtained and that earnings associated with these workers will be properly credited to valid Social Security records.

Under IRS regulations, employers rely primarily upon newly hired workers to self-report information, such as their name and SSN, and are not required to corroborate this information. This process involves new workers filling out Form W-4, which includes the worker’s name, SSN, address, tax filing status (single or married), and number of tax exemptions claimed. While workers must report their names and SSNs to employers, under current IRS regulations they do not have to present their Social Security card for inspection when they are hired. Also, the law does not require employers to independently corroborate the worker’s name and SSN information with SSA. Workers who do not have an SSN, however, must submit evidence that they have applied for one, such as a letter from SSA. As currently implemented, IRS’s limited requirements provide few safeguards to ensure that employers solicit and report accurate worker information. If examined by IRS, employers must simply show that they requested worker name and SSN information. Employers do not have to show that they attempted to corroborate this information with SSA. This lack of verification reduces the opportunity to detect worker misuse of SSNs and identity information and can ultimately lead to earnings reports being placed in the ESF and, possibly, to underpaid Social Security benefits. As our analysis shows, millions of reports that SSA receives each year from employers contain incomplete or incorrect information and cannot be posted to valid Social Security records.

DHS also requires employers to solicit key information from workers to prevent unauthorized employment. The 1986 Immigration Reform and Control Act (IRCA) requires employers to verify the identity and work eligibility status of individuals hired after November 6, 1986, and prohibits employers from knowingly hiring or continuing to employ persons who are unauthorized to work in the United States. Form I-9 was created to obtain information from new workers so that their employers could verify the workers’ eligibility for employment and thus preclude the hiring of individuals not authorized to work in the United States. In the section of the Form I-9 that newly hired workers must complete, the form asks new workers to list their name, address, and SSN. (According to DHS, providing the SSN is actually optional for workers.) Such workers also must provide employers with specific documents as proof of identity and work authorization. These include state driver’s licenses to establish identity; Social Security cards and birth certificates to establish work authorization; and various types of immigration documents to establish a person’s identity and work status. If a furnished document appears to be genuine and appears to relate to the person presenting it, the employer must accept the document and record what was actually reviewed. Employers must also maintain a Form I-9 on file for 3 years from the date of hire or 1 year from the date of termination, whichever is longer. While the IRCA requirement is more demanding than IRS’s regulations, employers still rely primarily on visually examining numerous types of documents, with no independent corroboration with the issuing agencies.
Fraudulent identity and work authorization documents are widely available, can be of high quality, and are difficult to detect by employers who are not document experts. In prior work, we testified that DHS employer sanction data indicated that, between October 1996 and May 1998, about 50,000 unauthorized aliens had used 78,000 fraudulent documents to obtain employment. In June 2002, we again testified that hundreds of thousands of unauthorized workers have used fraudulent documents to circumvent processes designed to prevent their hire. Such documents would likely be associated with erroneous earnings reports later filed by employers and recorded in the ESF.

Both IRS and DHS have authority to impose penalties on employers who fail to follow their regulations for obtaining and reporting key information for newly hired workers. However, IRS’s requirements are so limited that employers are unlikely to be penalized. While DHS has a worksite enforcement program to address unauthorized employment, the resources it has devoted to such activities have been minimal in recent years. Thus, employers who do not want to prepare accurate report information have little incentive to do so, and their failure to prepare accurate reports can contribute to ESF postings.

IRS has authority to assess penalties against employers who submit incomplete and inaccurate information on workers’ W-2s, including SSNs. However, IRS may waive such penalties if reporting problems are due to “reasonable cause.” For example, employers may demonstrate that a reporting error was due to events beyond their control, such as a worker providing false identity or SSN information. Employers must also demonstrate that they acted responsibly to avoid errors and correct them promptly. When IRS notifies an employer that a worker’s reported SSN is incorrect, the employer must make up to two annual solicitations for the correct SSN. If the worker does not comply with the requests, IRS requires no further action. The reasonable cause standard, however, does not apply if an employer acts with “intentional disregard.” Intentional disregard applies, for example, when an employer knows or should know W-2 reporting requirements but demonstrates a pattern of ignoring them.

We recently reported that IRS’s regulations for obtaining and verifying worker names and SSNs are so minimal that it is unlikely IRS would ever penalize employers. IRS’s own analysis bears this point out. In 2003, IRS reported on a review of 78 employers it defined as egregious filers of earnings reports: those who filed either a high number or a high percentage of their reports with incorrect worker names and SSNs. The resulting report covered 50 of the 78 businesses IRS reviewed (large and mid-sized businesses). IRS’s evaluation concluded that all of the employers had met the reasonable cause standard because events beyond their control had caused the errors; that is, the workers had provided employers with incorrect information. IRS also concluded that the employers had acted responsibly under current regulations because they solicited names and SSNs from workers and obtained signed W-4s or I-9s. We are concerned, however, that although the employers met IRS’s technical requirements for soliciting names and SSNs in these cases, there was no assurance that the information was accurate, because they relied exclusively on worker-supplied information, with no independent corroboration with SSA or any other public or private data source.
Thus, workers using fraudulent SSNs as identity information would go undetected. The IRS report detailed specific actions that could improve the accuracy of earnings reports, such as requiring employers to review the Social Security card of prospective new workers and verify the SSN with SSA. In discussing this issue, IRS officials expressed concern that requiring the verification of names and SSNs might cause some employers to cease withholding taxes and reporting income from unauthorized workers, rather than risk losing such workers. Further, increased compliance activity in this area would likely come at the expense of other compliance activities. The net effect of such a response would be a decrease in tax collections and compliance. IRS also expressed concern that worker verification systems do not always supply timely responses and that mandating such a system could pose an undue administrative burden on employers. Nevertheless, IRS agreed with our August 2004 report recommendation that it analyze options and consider how best to increase the accuracy of employer reporting. Such an effort would include reexamining its reasonable cause standard and penalty process.

DHS has primary responsibility for ensuring that employers verify the identity and work authorization status of newly hired workers, as required by IRCA and the Illegal Immigration Reform and Immigrant Responsibility Act (IIRIRA) of 1996. Under IRCA, employers are prohibited from hiring or continuing to employ a person not authorized to work in the United States if the employer knows that the person is not authorized to work or has lost such authorization. Employers are also prohibited from hiring an individual for employment in the United States without verifying his or her employment eligibility via the Form I-9. A violation of either of these prohibitions can subject an employer to civil monetary penalties. Employers engaging in a pattern or practice of knowingly hiring or continuing to employ unauthorized workers can be subject to fines and imprisonment.

However, over time, DHS has devoted limited and decreasing resources to general worksite enforcement. For example, the numbers of employer investigations and notices of intent to fine have dropped substantially (see fig. 3). The number of work years devoted to worksite enforcement has also dramatically declined in the past 5 years, from 278 in 1999 to 105 in 2003, a decrease of 62 percent (see fig. 4). DHS officials noted several reasons for these declines, including a change in enforcement strategy around 1999 because of limited resources to arrest and detain millions of unauthorized workers. Fining employers also came to be viewed as ineffective by DHS because the process could take more than a year. DHS began to place more emphasis on cooperative efforts, such as auditing employer Form I-9s to identify unauthorized workers and conducting seminars on fraudulent documents and worker verification services. DHS also told us that the events of September 11, 2001, caused a substantial redirection of worksite enforcement activity toward unauthorized workers in critical infrastructure facilities, such as airports, power plants, and military bases.

DHS coordination with SSA to identify persons who do not have work authorization has been limited, despite a 1996 law requiring such activity. Section 414 of IIRIRA requires SSA to provide DHS with an annual listing of persons who have earnings but do not have SSNs authorizing them to work.
For years, SSA has provided DHS with annual data on about 575,000 such persons. The data include annual earnings amounts, worker names and addresses, and employer names and addresses. However, DHS reported that it has made little use of this information in its worksite enforcement efforts. In explaining why, DHS officials noted that the data came from SSA in an electronic format that was incompatible with their systems. They also noted that DHS records do not usually contain SSNs for aliens and SSA’s records do not contain the DHS identification number assigned to aliens, so it is difficult to match SSA’s listing to DHS’s records. To facilitate the use of the data, DHS and SSA agreed in 2004 on a new format that SSA is to use to report data to DHS. Also, DHS told us that it has recently begun to use a contractor to make the SSA data more usable and available for possible enforcement actions. While Congress developed this IIRIRA provision for reasons other than reducing ESF earnings postings, some additional level of DHS activity in this area might provide a deterrent against individuals who use invalid or false SSNs and thereby contribute to ESF earnings reports.

Employers do not widely use the worker verification services offered by both SSA and DHS. These services provide a valuable opportunity to prevent many unintended or careless mistakes when hiring new workers and reporting worker earnings. However, they have some limitations in detecting the misuse of another person’s name and SSN, and they remain underutilized.

SSA began offering employers the ability to voluntarily verify the accuracy of worker-supplied SSNs and names to help them file more accurate annual earnings reports. SSA does not charge employers for this service, and over the years, it has developed several different verification methods to meet their needs. For example, employers may:

- Provide SSA with a magnetic tape or a diskette of their workers’ names and SSNs. SSA will verify the names and SSNs for up to 250,000 workers at a time. According to SSA, it takes about 30 days to respond after it receives a request. SSA data show that about 6,000 of the 6.5 million U.S. employers sent SSA over 53 million verification requests in 2003. For about 12 percent of the requests, SSA could not verify the worker’s name and SSN.

- Call a toll-free 800 number to verify names and SSNs. SSA staff will immediately verify information for up to 5 workers at a time. SSA data show it received about 1.1 million calls from employers in fiscal year 2003, but SSA does not track how many different employers used the service. SSA officials believe that a limited number of employers use the service. In fact, they believe that some larger employers with significant turnover have dedicated staff whose job is to call the 800 number throughout the day to bypass the 5-worker-per-call limit. Thus, these employers would represent a disproportionate number of calls to the service.

- Provide a hard-copy listing of workers’ names and SSNs that can be faxed, mailed, or hand-delivered to local Social Security offices. SSA staff verify information for a maximum of 50 workers at a time. Response times are subject to office workloads; SSA stated that such requests generally take 1 to 2 weeks to process but may take longer. Our visits to 8 SSA field offices indicated that very few employers use this method of verification.
SSA verifies the information received from employers by comparing it with information in its own records. SSA then advises the employer which worker names and SSNs do not match. While the service is an important tool to improve reporting accuracy, the information SSA cross-matches against varies depending upon the mode of verification employers select. For two of the methods (requests through the 800 number or at local offices), employers must provide SSA with a worker’s name, SSN, date of birth, and gender. In contrast, verifications under SSA’s predominant mode of verification, electronic batch processing, do not include a match against workers’ dates of birth and gender. Although employers do not have to submit dates of birth and gender, SSA will match against those two pieces of information if employers voluntarily submit them. By not requiring a match against dates of birth for this verification mode, SSA exposes itself to potential fraud and identity theft. In particular, persons using the name and SSN of someone much younger or older than themselves for employment purposes would remain undetected, despite the verification process. In discussing this limitation, SSA staff responsible for the verification services acknowledged that the requirements should be consistent, especially at a time when identity theft is a growing problem and homeland security is a major concern. However, as of November 2004, there was no initiative under way at SSA to address this inconsistency.

SSA’s verification systems have other limitations. As previously noted, the response time varies among the different methods. Slow response times are a negative feature for businesses concerned about the competitive implications of using these systems. For example, some businesses fear that by using the service, they will give nonusing competitors an advantage in obtaining workers in a tight labor market. In an attempt to make verification more attractive to employers, SSA has been testing a Web-based system, which is designed to respond to employer requests within 24 hours. Responses to requests covering up to 10 worker names and SSNs will be immediate. SSA expects that this verification method will become available in 2005.

DHS also operates a pilot program for employment eligibility confirmation in conjunction with SSA. To reduce employment of unauthorized alien workers, Congress required (in IIRIRA of 1996) that DHS develop and test three pilot programs to assist employers in verifying workers’ identity and work eligibility status. Accordingly, the Basic Pilot Program was developed and made available to employers in six states starting in 1997. The Basic Pilot requires participating employers to electronically verify the status of all newly hired workers within 3 days of hire. Verification requests are routed electronically to SSA to check the validity of the SSN, name, and date of birth provided by the worker and whether SSA has information indicating that the worker is a citizen or a noncitizen with permanent work authorization. If the submitted information matches SSA’s records, SSA immediately transmits an employment authorization response via DHS to the employer. If SSA is unable to verify the SSN, name, date of birth, or work eligibility status, a tentative nonconfirmation response is transmitted to the employer. The employer must notify the worker of the tentative nonconfirmation and check the accuracy of the information originally submitted.
If the employer finds errors in either the Form I-9 that was completed or the data entered into the Basic Pilot system, the employer should resubmit the verification request with corrected data. If no such errors are found, however, the employer must advise the worker to visit an SSA field office within 8 federal workdays from the date of the response to resolve any discrepancies in his or her SSA record. If SSA is able to verify the SSN, name, and date of birth of a newly hired noncitizen but is unable to verify the work eligibility status, it electronically refers the query to DHS for a check against DHS’s automated records. If DHS confirms that the person is work authorized, the employer is immediately notified. If DHS cannot verify work authorization status for the submitted name and SSN, the query is referred to DHS field office “status verifiers” for additional research. According to DHS, responses for queries referred to the status verifiers generally occur within 24 hours. When the record searches cannot verify work authorization, DHS sends a tentative nonconfirmation response to the employer. If workers wish to contest such a response from DHS, they must call a toll-free telephone number provided by DHS within 8 federal workdays from the date of the response to resolve any discrepancies in their DHS record. If employment authorization cannot be verified, employers may terminate employment.

A 2002 study of the Basic Pilot found that work authorization for queried workers was never resolved in about 13 percent of cases. In most of these cases, the workers never contested the tentative nonconfirmation response. However, like SSA’s verification service, the Basic Pilot has not been widely utilized. As of June 2004, only about 2,500 of the 2.1 million eligible businesses operating in the pilot states had actually registered to participate. Those participants made about 365,000 initial verification requests over a 2-year test period. The study also identified some problems with the pilot, such as erroneous nonconfirmation rates and program software that was not user friendly. In July 2004, DHS reported on actions being taken to address these weaknesses. These actions included improving federal database accuracy to expedite data entry on persons entering the country as well as updating changes in immigrant work authorization status, switching to a Web-based verification system, providing better training for employers, and monitoring participating businesses.

In 2003, Congress required DHS to expand the verification service to all 50 states by December 2004, and the Basic Pilot Program became available to employers in all 50 states on December 20, 2004. Furthermore, to improve the program’s effectiveness and increase participation, DHS converted it to a Web-based system, which became available on July 6, 2004. While DHS staff recognize the program’s potential value in identifying unauthorized workers, they noted that, by law, employers cannot be charged for this service and that the agency lacks sufficient funds to operate a system that would be used extensively. DHS officials did not have operational cost data for the verification service. However, in June 2002, contractors that studied the program for DHS estimated that the federal costs of making verification of work eligibility and identity mandatory would be about $159 million annually.
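To summarize the decision flow just described, the following is a minimal sketch in Python; the record fields and the two lookup functions are hypothetical stand-ins for SSA’s and DHS’s internal systems, and no actual interface is implied.

```python
# Minimal sketch of the Basic Pilot decision flow described above.
# ssa_match and dhs_work_authorized are hypothetical stand-ins for
# internal SSA and DHS lookups; no real interface is implied.

def basic_pilot_query(worker: dict, ssa_match, dhs_work_authorized) -> str:
    # Step 1: SSA checks the submitted SSN, name, and date of birth.
    if not ssa_match(worker["ssn"], worker["name"], worker["dob"]):
        # Worker may contest within 8 federal workdays at an SSA office.
        return "SSA tentative nonconfirmation"
    # Step 2: citizens and noncitizens with permanent work authorization
    # on SSA's records are confirmed immediately.
    if worker["citizen"] or worker["ssa_work_authorized"]:
        return "employment authorized"
    # Step 3: otherwise the query is referred to DHS; unresolved queries
    # go to field office status verifiers (generally within 24 hours).
    if dhs_work_authorized(worker):
        return "employment authorized"
    # Worker may contest by phone within 8 federal workdays.
    return "DHS tentative nonconfirmation"

# Example with toy data:
worker = {"ssn": "123456789", "name": "MARY JOHNSON", "dob": "1970-01-01",
          "citizen": False, "ssa_work_authorized": False}
print(basic_pilot_query(worker,
                        ssa_match=lambda ssn, name, dob: True,
                        dhs_work_authorized=lambda w: True))
# -> employment authorized
```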
Despite the various tools used by SSA to aid in the proper crediting of worker earnings, the number of earnings reports in the ESF is substantial. Having effective policies and processes for verifying key SSN, identity, and work authorization information for the nation’s workforce is critical to SSA, which is tasked with accurately paying retirement, survivors, and disability benefits. Sound verification processes are also critical to the administration of tax and immigration laws. However, at present, employers face few requirements to accurately identify their workers and file accurate and complete earnings reports. In fact, millions of earnings reports are submitted each year with erroneous or missing SSN and name information, and the same employers often file substantial numbers of such reports year after year, creating administrative problems for SSA and IRS and the possibility that Social Security benefits to such workers will not be accurately calculated.

Under current IRS reporting requirements, employers who chronically and willfully file inaccurate earnings information will likely never be deemed noncompliant or penalized. We acknowledge IRS’s concern that more stringent employer reporting and verification requirements could have tax compliance implications and pose additional administrative burdens on the many employers who are already attempting to fulfill their reporting obligations. Although IRS’s regulations meet statutory requirements, we are concerned that its current requirements are so minimal that even employers with a long-standing history of chronically filing reports with critical errors are never sanctioned. In accordance with our prior recommendation, IRS is currently examining options for strengthening employer requirements for soliciting and verifying worker names and SSNs and developing a formal penalty program. As this effort progresses, SSA’s ESF data could be valuable to IRS in developing criteria as to what employer reporting patterns and activities constitute “intentional disregard” and in improving IRS’s ability to target and penalize problem employers.

At present, it is also unlikely that DHS will take enforcement action against employers and workers who submit inaccurate information to SSA to conceal unauthorized work activity. We recognize that in the post-September 11 environment, DHS enforcement resources have been needed in critical infrastructure industries and that data-sharing initiatives with SSA have thus received less priority in recent years. However, it is important that some level of coordination be reestablished to best leverage SSA’s data on potential unauthorized work activity and DHS staff resources to target the most egregious employers.

Finally, any effort to verify worker-supplied identification and work authorization information warrants a thorough and accurate process. SSA currently offers several options for employers who choose to verify worker-provided information and has continually sought to upgrade its services. However, for the predominant mode of verification, the electronic batch file, employers are not required to submit employee dates of birth for verification against SSA’s records. Thus, persons using the names and SSNs of persons much older or younger than themselves to seek employment would not be detected under current processes. This represents a critical flaw in SSA’s service. DHS’s Basic Pilot Program offers another option for addressing an important element affecting ESF postings: individuals who are not authorized to work in the United States.
However, DHS officials believe they will likely experience capacity problems in the future if significantly more employers begin using the service, in part because of the number of cases requiring manual intervention to verify employment status. Accordingly, it is crucial that any deliberations pertaining to strengthening employer verification requirements include an informed discussion among the affected federal agencies as to the systems requirements and safeguards necessary to ensure the integrity, timeliness, and efficiency of the verification service.

To better ensure that workers are accurately identified on the Form W-2s necessary for the efficient administration of Social Security and tax laws, we recommend that the Commissioner of the Internal Revenue Service:

- Coordinate IRS’s ongoing effort to reassess employer requirements for soliciting and verifying worker names and SSNs with SSA. This could include utilizing SSA’s ESF data to identify employer reporting patterns and activities that could constitute intentional disregard and using such data to develop criteria to better target and penalize only those employers who chronically submit inaccurate earnings reports, or requiring such employers to verify worker identity information with SSA.

- Ensure that the development of any new reasonable cause requirements occurs in consultation with SSA and DHS, which operate employee verification services. Such consultation could facilitate systems improvements to ensure the integrity, timeliness, and efficiency of existing verification services.

We recommend that the Commissioner of the Social Security Administration require employers seeking verifications via SSA’s electronic batch process to submit workers’ dates of birth for matching against SSA’s records.

We recommend that the Secretary of the Department of Homeland Security take steps to determine how DHS can best use SSA-supplied data on potential illegal work activity, and on the specific industries associated most frequently with such activity, to support its worksite enforcement efforts.

We obtained written comments on a draft of this report from the Commissioners of SSA and IRS and from the Department of Homeland Security. The comments are reproduced in appendixes II, III, and IV. Each agency also provided additional technical comments, which have been incorporated in the report as appropriate.

SSA noted that it is continuing to assist the employer community in verifying worker information and welcomed the opportunity to work with IRS and DHS to improve verification operations. The agency also reiterated its commitment to continued outreach to employers and other federal agencies, as well as to facilitating accurate reporting via its Employer 800 number service and the Employer Service Liaison Officers (ESLOs) located in each region. SSA agreed to investigate further our recommendation that it require employers who use the electronic batch process to submit workers’ dates of birth for matching against its records. However, SSA noted that such a requirement could create additional burdens for the employer community and workload increases for SSA staff responsible for investigating mismatches. We acknowledge SSA’s concerns in this area. However, given the volume of verification requests processed through SSA’s batch process each year, and the potential vulnerabilities associated with not matching against workers’ dates of birth, we continue to believe prompt action is needed.
Addressing our recommendation would also make SSA’s batch verification process consistent with the agency’s other modes of verification, in which workers’ dates of birth are a required element for matching against SSA’s records.

In its comments, IRS acknowledged that employers’ submission of earnings reports with inaccurate SSNs increases the quantity of suspense file postings, although it expressed the view that these mismatches stem from inaccurate or incomplete information provided by workers. The agency also noted that it is currently conducting compliance checks, in coordination with SSA, on those employers with the most egregious cases of reporting incomplete or inaccurate worker SSNs to SSA. Accordingly, IRS agreed with our recommendation that it work closely with SSA in its ongoing efforts to reassess current employer requirements for soliciting and verifying worker names and SSNs. IRS also concurred with our recommendation that the agency ensure that the development of any new reasonable cause requirements occurs in consultation with SSA and DHS, which operate worker verification services.

DHS noted that while its general worksite enforcement program has had decreasing resources recently, since September 11 DHS has refocused its enforcement activities on removing unauthorized workers employed in critical infrastructure facilities. Such enforcement activities have resulted in the removal of over 5,000 unauthorized workers who were employed in critical infrastructure facilities but who worked in industry categories that have been the historical targets of traditional worksite enforcement operations (food service, janitorial, agriculture, and construction, among others). Therefore, DHS contends that although there has been a decrease in the number of criminal cases and civil fines, a significant effort is still under way to remove unauthorized workers. DHS also explained that although it will take the necessary steps to determine the best use of the annual SSA-provided listing of persons with earnings who lack work authorization, there are various impediments to accomplishing this task. Regarding DHS’s comments, we have summarized both the reasons for DHS’s switch in enforcement priorities and the impediments to using the SSA-provided listing. We acknowledge DHS’s current efforts to determine how it can best use the listing, and we support cost-effective ways by which the listing might identify illegal work activity, and the specific industries associated most frequently with such activity, to further worksite enforcement efforts.

As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Commissioner of the Social Security Administration, the Commissioner of the Internal Revenue Service, the Secretary of the Department of Homeland Security, the Director of the Office of Management and Budget, appropriate congressional committees, and other interested parties. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov/. If you have any questions concerning this report, please contact me at (202) 512-7215 or Dan Bertoni at (202) 512-5988. Other major contributors are listed in appendix V.
To obtain information describing the various electronic processes that the Social Security Administration (SSA) uses to post earnings reports to worker records and to resolve errors in reported worker names and Social Security numbers (SSNs), we reviewed numerous SSA Office of the Inspector General, GAO, and contractor reports on the Earnings Suspense File (ESF). We met with SSA officials who manage the earnings posting process and examined SSA’s Program Operations Manual System to identify and document processes and procedures for posting earnings to and reinstating earnings from the ESF. We reviewed information from SSA that described the various validation routines it uses in attempting to find valid matches of names and SSNs for earnings reports that do not initially match its records. We obtained management data on the number of earnings reports that these routines either posted to its records or reinstated from the ESF. We also visited eight SSA field offices located in New York, New Jersey, Virginia, and California that processed significant numbers of earnings reinstatements in 2003 to discuss their reinstatement activities and to document procedures for reviewing and validating evidence submitted by individuals seeking to have earnings reinstated from the ESF.

To determine the characteristics of earnings posted in the ESF, we obtained and analyzed an electronic copy of the ESF for tax years 1985 to 2000. We selected these years because they covered (1) a substantial period of time and (2) postings to and reinstatements from the ESF that occurred after legislation enacted in 1986 granted amnesty to unauthorized immigrants. Further, during this period, SSA also enhanced its earnings records in a way that provided more detailed information about reinstated earnings. The earnings records that we examined covered only reports on current wages, including reports that were filed late. The ESF contained 84.6 million records that met our criteria at the time we obtained the file in January 2003; these records were submitted by a total of 4.3 million different employers. The file we obtained contained the employer’s identification number, the worker’s name and SSN as reported on the invalid earnings report, the amount of unposted earnings, and the tax year of the report. From SSA, we were able to obtain Standard Industrial Classification codes for 1.8 million of the employers who had earnings in the ESF, which allowed us to identify the types of employers who had filed the earnings reports that we analyzed. (Because of confidentiality requirements, we were unable to arrange for timely access to similar codes for about 2.5 million employers in the file from the Census Bureau, which controls this information.)

To analyze the reinstatement of earnings reported under repeatedly used SSNs, we first examined the ESF to identify the frequency with which each SSN appeared for tax years 1985 to 2000. On the basis of our examination, we selected the SSNs that appeared most frequently in the ESF for our reinstatement analysis. Specifically, we selected 295 SSNs that had 1,000 or more reports in the ESF for the tax years analyzed. Overall, the 295 SSNs had about 9.6 million earnings reports, representing about $14.5 billion in unposted earnings, still in the ESF at the time that we received a copy of the file.
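A condensed sketch of this kind of file analysis appears below, using Python with pandas on a toy stand-in for the ESF extract; the column names, thresholds, and data are illustrative assumptions only, not the actual file layout.

```python
import pandas as pd

# Toy stand-in for the ESF extract; the real file layout and volumes
# differ, and the column names here are illustrative.
esf = pd.DataFrame({
    "ein":      ["11-111", "11-111", "11-111", "22-222", "22-222"],
    "ssn":      ["000000000", "987654320", "987654320",
                 "987654320", "111223333"],
    "surname":  ["", "DOE", "ROE", "POE", "SMITH"],
    "tax_year": [2000, 2000, 2000, 1999, 1998],
    "wages":    [500.0, 1200.0, 800.0, 950.0, 4400.0],
})

# Flag obvious data problems: all-zero SSNs and missing surnames.
problems = esf[esf["ssn"].eq("000000000") | esf["surname"].str.strip().eq("")]

# SSNs appearing most frequently in the file (GAO selected those with
# 1,000 or more reports; the threshold is lowered here for the toy data).
counts = esf.groupby("ssn").size()
frequent_ssns = counts[counts >= 3].index.tolist()

# Employers who used one SSN to report earnings for multiple workers in
# a single tax year (the table 4 analysis in the body of the report).
dup_use = esf.groupby(["ein", "ssn", "tax_year"]).size().loc[lambda s: s > 1]

print(frequent_ssns)  # ['987654320']
print(dup_use)        # ('11-111', '987654320', 2000) appears twice
```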
To identify the reinstatements made from these 295 reported SSNs, we then obtained a complete copy of SSA's earnings reinstatement file, which contained over 142 million records across all SSNs that had received reinstatements. By comparing the 295 SSNs with data in the reinstatement file, we found that, over the years, SSA had reinstated about 13.1 million earnings reports from these 295 SSNs to 11.7 million individuals. For the 11.7 million individuals, we obtained selected information from SSA's Numident, Master Earnings File, and Master Beneficiary Records. This information allowed us to identify the valid Social Security record that received each reinstated earnings report and to obtain demographic information about each valid record holder receiving the reinstatement, such as age, gender, the date when the person's SSN was issued, and place of birth.

To identify factors that contribute to ESF postings, we examined provisions of law that authorize (1) penalties for employers who file earnings reports with inaccurate SSNs and hire workers who are not authorized to work in the United States and (2) the disclosure of information on persons with nonwork SSNs to the Department of Homeland Security (DHS). We met with Internal Revenue Service (IRS) and DHS officials and obtained available enforcement data on the use of these penalties. We did not, however, evaluate their specific enforcement efforts. We analyzed information about the worker verification tools that SSA offers to help employers report their workers' earnings and that DHS offers to help employers identify their workers' eligibility status under immigration laws. We also reviewed a detailed contractor study covering DHS's implementation of the Basic Pilot Program. To assess the reliability of the databases used, we reviewed reports provided by SSA and its Office of the Inspector General, which contained recent assessments of these databases. We also interviewed knowledgeable agency officials to further document the reliability of these databases. In addition, we checked the data for internal logic, consistency, and reasonableness. We determined that all the databases were sufficiently reliable for purposes of our review. Our work was conducted between October 2002 and December 2004 in accordance with generally accepted government auditing standards.

In addition to those named above, the following team members made key contributions to this report throughout all aspects of its development: William Staab and Paul Wright. Also, Jean Cook, Gerard Grant, Luann Moy, Daniel Schwimer, Vanessa Taylor, and Wayne Turowski made contributions to this report.
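Conceptually, this reinstatement analysis is a filter-and-join: filter the reinstatement file down to records originating from the 295 selected SSNs, then join each receiving record holder to demographic data. The following rough sketch shows that logic under the same hypothetical file layout as above; the column names (reported_ssn, valid_ssn) and the numident lookup, a dictionary keyed by the valid SSN, are illustrative assumptions rather than SSA's actual schemas.

import csv

def match_reinstatements(reinstatement_path, selected_ssns, numident):
    # Stream the reinstatement file once, keeping only records whose
    # reported SSN is in the selected set, and attach demographics for
    # the valid record holder who received the earnings.
    matches = []
    with open(reinstatement_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["reported_ssn"] in selected_ssns:
                demo = numident.get(row["valid_ssn"], {})
                matches.append({
                    "reported_ssn": row["reported_ssn"],
                    "valid_ssn": row["valid_ssn"],
                    "tax_year": row["tax_year"],
                    "place_of_birth": demo.get("place_of_birth"),
                    "ssn_issue_date": demo.get("ssn_issue_date"),
                })
    return matches

Because selected_ssns holds only 295 values, the membership test is a constant-time set lookup, so a single pass over the 142 million reinstatement records suffices.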
Each year, the Social Security Administration (SSA) receives millions of employer-submitted earnings reports (Forms W-2) that it is unable to place in an individual Social Security record. If the Social Security number (SSN) and name on a W-2 do not match SSA's records, the W-2 is retained in the Earnings Suspense File (ESF). SSA's ability to match earnings reports is essential to calculating Social Security benefits. Because of concerns about the size of the ESF, GAO was asked to determine (1) how SSA processes workers' earnings reports, (2) the types of errors in ESF reports and the characteristics of employers whose reports are in the ESF, (3) how often earnings from repeatedly used SSNs have been reinstated and who receives the earnings from these reports, and (4) what key factors contribute to ESF postings. Upon receiving over 250 million earnings reports annually from employers, SSA uses various processes to post such reports to workers' Social Security records. For reports in which worker names and SSNs exactly match SSA's information, the earnings are credited to the appropriate Social Security record. When SSA encounters earnings reports that do not match its records, SSA attempts to make a match through various automated processes. Such processes have allowed SSA to identify valid records for an average of 15 million reports annually. However, about 4 percent of the reports still remain unmatched and are retained in the ESF. SSA uses additional automated and manual processes to continue to identify valid records. The most recent data show that, through such processes, SSA posted ("reinstated") over 2 million earnings reports in the ESF to valid records. Earnings reports in the ESF have serious data problems and are particularly likely to be submitted by certain categories of employers. Such problems include missing SSNs and employer use of the same SSN for more than one worker in the same tax year. Additional problems include missing surnames or names that include nonalphabetic characters. Forty-three percent of employers associated with earnings reports in the ESF are from only 5 of the 83 broad industry categories. Among these industry categories, a small portion of employers account for a disproportionate number of ESF reports. SSA has reinstated a substantial number of earnings reports with SSNs that appear repeatedly in the ESF. We analyzed the 295 most frequently occurring SSNs, which appeared in the ESF 1,000 times or more between tax years 1985 and 2000. Of the earnings reports associated with these SSNs, SSA reinstated 13.1 million to the records of about 11.7 million workers. Although most reinstatements were for U.S.-born workers, in recent years the percentage of reinstatements to foreign-born workers has markedly increased. Also increasing is the percentage of foreign-born workers who received reinstatements for earnings in years prior to receiving a valid SSN--a potential indicator of unauthorized employment. Three major factors contribute to ESF postings. Under IRS regulations, employers must ask new hires to provide their name and SSN but are not required to independently corroborate this information with SSA. DHS requires employers to visually inspect new workers' identity and work authorization documents, but employers do not have to verify these documents, and they can be easily counterfeited.
Further, IRS regulations are minimal; IRS has no record of assessing a penalty for filing inaccurate earnings reports; and DHS enforcement efforts against employers who knowingly hire unauthorized workers have been limited in recent years because of shifting priorities following the events of September 11, 2001. Last, although SSA and DHS offer employers verification services free of charge, these services are voluntary, have some limitations, and remain underutilized.
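SSA's validation routines are not published in detail, but the general pattern the report describes (an exact match pass followed by progressively looser automated passes, with unmatched reports retained in the ESF) can be sketched as follows. The specific heuristics shown here, surname-only agreement and adjacent-digit transpositions, are illustrative assumptions, not SSA's documented rules.

def surname(full_name):
    # Last token of a name, uppercased; empty string for blank names.
    parts = full_name.upper().split()
    return parts[-1] if parts else ""

def transpositions(ssn):
    # Yield SSN variants with one pair of adjacent digits swapped,
    # a common keying error (an illustrative heuristic only).
    for i in range(len(ssn) - 1):
        digits = list(ssn)
        digits[i], digits[i + 1] = digits[i + 1], digits[i]
        yield "".join(digits)

def match_report(report, master):
    # Tiered match of one earnings report against a master index mapping
    # SSN to name. Returns the matched SSN, or None if the report would
    # be retained in the suspense file.
    ssn, name = report["ssn"], report["name"].upper()
    # Pass 1: exact SSN and full-name agreement.
    if master.get(ssn, "").upper() == name:
        return ssn
    # Pass 2: exact SSN with surname-only agreement (name-order variants).
    if ssn in master and surname(master[ssn]) == surname(name) != "":
        return ssn
    # Pass 3: adjacent-digit transpositions with full-name agreement.
    for variant in transpositions(ssn):
        if master.get(variant, "").upper() == name:
            return variant
    return None  # unmatched: post to the ESF

Each added pass trades some risk of a false match for fewer suspense postings, which is one reason manual review remains the last resort in the process described above.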
In 1995, we reported on a study of how three agencies collected and reported evaluative information about their programs to this Committee. We found that the agencies collected a great deal of useful information about their programs, but much of it was not requested and thus did not reach the Committee, and much of what the Committee did receive was not as useful as it could have been. We also found that communication between the Committee and agency staff on information issues was limited and afforded little opportunity to build a shared understanding of the Committee's needs and how to meet them. At that time, we proposed a strategy for obtaining information to assist program oversight and reauthorization review: (1) select descriptive and evaluative questions to be asked about a program at reauthorization and in interim years, (2) explicitly arrange to obtain oversight information and results of evaluation studies at reauthorization, and (3) provide for increased communication with agency program and evaluation officials to ensure that information needs are understood and requests and reports are suitably framed. At the time, GPRA had recently been enacted, requiring agencies to develop multiyear strategic plans and annual performance plans and reports over a 7-year implementation period. In our 1995 report, we noted that annual reporting under GPRA was expected to fill some of the information gaps we described and that GPRA also emphasized the importance of consultation with Congress as evaluation strategies are planned, goals and objectives are identified, and indicators are selected. We suggested that our proposed process for identifying questions would be useful as agencies prepared to meet GPRA requirements and that consultation with Congress would help ensure that data collected to meet GPRA reporting requirements could also be used to meet the Committee's special needs (for example, to disaggregate performance data in ways important to the Committee). We also saw a need for a useful complement to GPRA reports (and their focus on progress toward goals) that would provide additional categories of information, such as program description, side effects, and comparative advantage relative to other programs. The Committee had found such information to be useful, especially in connection with major program reauthorizations and policy reviews. Since its enactment, we have been tracking federal agencies' progress in implementing GPRA by identifying promising practices in performance measurement and results-based management, as well as by evaluating agencies' strategic plans and the first two rounds of performance plans. We found that although agencies' fiscal year 2000 performance plans, on the whole, showed moderate improvements over the fiscal year 1999 plans, key weaknesses remained and important opportunities existed to improve future plans to make them more useful to Congress. Overall, the fiscal year 2000 plans provided general, rather than clear, pictures of intended performance, but they had increased their use of results-oriented goals and quantifiable measures. Although some agencies made useful linkages between their budget requests and performance goals, many needed to more directly explain how programs and initiatives would achieve their goals. Finally, many agencies offered only limited indications that their performance data would be credible, a source of major concern about the usefulness of the plans.
This report does not directly evaluate the three agencies’ performance plans but rather looks more broadly at the types of information that authorizing and appropriations committees need from the agencies and how their unmet needs could be met, either through performance plans or through other means. We included program performance information available from sources other than annual performance plans because agencies communicate with congressional committees using a variety of modes—reports, agency Internet sites, hearings, briefings, telephone consultations, e-Mail messages, and other means. We did not assume that annual GPRA performance plans or performance reports are the best or only vehicle for conveying all kinds of performance information to Congress. We conducted our work between May and November 1999 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Secretaries of Education, Labor, and Health and Human Services and the Director of the Office of Management and Budget. HHS and Labor provided written comments that are reprinted in appendixes II and III. The other agencies either had no comments or provided technical comments. The agencies’ comments are discussed at the end of this letter. We also requested comments from the congressional staff members we interviewed on our characterization of their concerns, and we incorporated the clarifying changes they suggested. Health Surveillance. The Centers for Disease Control and Prevention (CDC) in the Department of Health and Human Services (HHS) supports— through a number of programs—a system of health surveillance activities to monitor, and help prevent and control, infectious and chronic diseases. By working with the states and other partners, CDC—primarily the National Center for Infectious Diseases and the National Center for Chronic Disease Prevention and Health Promotion—provides leadership and funding through grants to state and local public health departments. Grants support research to develop diagnostic tests, prevention interventions, local and state public health laboratories, and information sharing and other infrastructure to facilitate a nationwide surveillance system. CDC centers support critical disease registries (such as the cancer registries) and surveillance tools (such as the Behavioral Risk Factor Survey) and disseminate public health surveillance data. Pensions Oversight. In the Department of Labor (DOL), the Pension and Welfare Benefits Administration (PWBA) oversees the integrity of private sector pensions (as well as health and other welfare benefits) and seeks to increase employer-sponsored pension coverage in the workforce. The Employee Retirement Income Security Act (ERISA) sets minimum standards to ensure that private employee pension plans are established and maintained in a fair and financially sound manner. Employers also have an obligation to provide promised benefits and to satisfy ERISA requirements for managing and administering private pension plans. PWBA tracks and collects annual reports by plan managers on the plan operations, funding, assets, and investments. It develops regulations and conducts enforcement investigations and compliance reviews to deter pension fund mismanagement. PWBA also provides information and customer assistance, such as brochures targeted to women, small businesses, and minorities with low participation rates in pension plans, to encourage the growth of employment-based benefits. 
Postsecondary Student Loans. The Department of Education's Office of Student Financial Assistance (OSFA), a newly created performance-based organization, manages operations of two major student financial assistance programs: the direct loan program (William D. Ford Federal Direct Student Loan Program) and the guaranteed loan program (Federal Family Education Loan Program). These and other programs under the Higher Education Act of 1965, as amended, aim to help undergraduate and graduate students meet the cost of their education. The agency provides loans to students (or families) either directly through the direct loan program or, under the guaranteed loan program, through private banks that lend the money at a federally subsidized rate. In the direct loan program, the student applies through the school to the agency, which transfers funds to the school. Later, a loan servicer (under agency contract) tracks and collects payments on the loan. In the guaranteed loan program, the student applies for the loan through a private lender that then tracks and collects the loan payments. The agency subsidizes the interest rate paid by the borrower. If a borrower defaults, a local guaranty agency reimburses the bank for the defaulted loan, and the department pays the guaranty agency.

Congressional staff identified a great diversity of information they wanted in order to address key questions about program performance—either on a regular basis, to answer recurring questions, or in response to ad hoc inquiries as issues arose. Agencies met some, but not all, of these information needs through a variety of formal and informal means, such as formal reports and hearings and informal consultations. Congressional staff identified a number of recurring information needs, some of which were met through annual documents, such as agencies' budget justification materials, GPRA annual performance plans, or other annual reports. The recurring information needs fell into four broad categories: allocation of program personnel and expenditures across activities; data on the quantity, quality, and efficiency of operations or services; characteristics of the populations or entities served or regulated; and indicators of progress in meeting objectives and side effects. Both authorizing and appropriations staff wanted regular information on how personnel and expenditures were allocated across activities, both to learn what was actually spent on a program or activity and to understand priorities within a program. This information was typically provided to the appropriations committees in the detailed budget justification documents that agencies submit each year with their budget requests. An appropriations staff member indicated that the routine data he wanted on PWBA's program staffing and expenditures were provided by the agency's budget justification documents, and that the agency was forthcoming in responding to requests for additional information. Congressional staff also described wanting information on the quantity, quality, and efficiency of the activities or services provided. This information was needed to inform them of the nature and scope of a program's activities, as well as to address questions about how well a program was being implemented or administered. They said they found this kind of information in both agency budget justification documents and performance plans.
For example, both authorizing and appropriations staff members noted that the Department of Education's budget justification documents and its departmental performance plan met their needs for basic information on trends in program expenditures and the volume and size of student loans and grants-in-aid over time. These data provided information about the change over time in the use of different financing options, revealing the potential for an increase in student debt burden. In addition, the department's performance plan included performance indicators and targets for OSFA's response times in processing loan applications, an issue of concern to congressional staff because backlogs in loans being consolidated under the direct loan program had been identified and targeted for increased attention. In this case, Education officials said that a committee report required a biweekly report for 18 months on its loan processing so that the committee could monitor the department's progress in resolving the backlog. Officials said that this report was provided to a total of six committees—the authorizing, appropriations, and budget committees—in both the Senate and House. All three agencies also described their major programs (with some information on program activities and services provided) on their agency Internet sites. Similarly, congressional staff also wanted regular information on the characteristics of the persons or entities the programs serve or regulate. In addition to providing a picture of who benefits from the program, such information can help answer questions about how well program services are targeted to the population most in need of service and how well those targeted populations are reached. The congressional staff described PWBA as good at providing statistics on the private pension plans and participants covered by ERISA in an annual report issued separately from the GPRA requirements. This report, the Private Pension Plan Bulletin, provides the most recent as well as historical data on plans and participants and detailed data on employee coverage and other characteristics by employer size. Finally, the congressional staff also wanted regular information on the program's progress in meeting its objectives and any important side effects that the program might have. The Department of Labor's fiscal year 2000 performance plan supplied information on one of PWBA's goals—to increase the number of employees covered by private pension plans—derived from a survey conducted by the Bureau of the Census (Census). Congressional staff noted their satisfaction with the inclusion of program data on the student loan default rate and default recovery rate as performance measures in the Department of Education's performance plan. The plan also provided data on whether low- and middle-income students' access to postsecondary education was improving over time relative to high-income students' access. These and other measures in the plan, covering unmet need for student financial aid, college enrollment rates, and size of debt repayments, were derived from special surveys conducted by the Department of Education or by Census.

Congressional staff identified a number of ad hoc information needs that arose periodically as "hot issues" came up for congressional consideration. Some of the needs were met through existing documents, and many others through informal consultations in response to a request from congressional staff, while still other needs were not met.
The ad hoc information needs were similar to, but somewhat different from, the recurring information needs and fell into five broad categories: details about a program's activities and authority, news of impending change in the program, assessments of emerging issues, projected effects of proposed program changes, and effects and side effects of existing programs. Congressional staff often wanted details about the scope of a program's activities and authority that were not readily available from the general documents they had. Questions might have been raised by a constituent request or a legislative proposal, in which case the staff member wanted a fairly rapid response to a targeted question. In such cases, congressional staff said they often called the agency's congressional liaison office, which either handled the request itself or forwarded it to knowledgeable program officials who, in turn, either returned the call to the requester or forwarded the information through the liaison. CDC officials also described referring requesters to the brief program descriptions they maintain on their Internet site. Congressional staff noted that they wanted the agency to inform them in advance when there was news of significant impending change in their Member's district or to a program in which they had been involved. In one case, they wanted to have an opportunity to influence the policy discussions; in another case, they wanted to be prepared when the news appeared in the press. An authorizing committee staff member found that CDC's targeted distribution of "alerts" provided a very useful "heads up" before the agency issued a press release about a public health concern. The alerts were distributed by e-Mail or fax to the interested committee staff member or congressional members. During the recent appearance of a rare form of encephalitis in New York City, for example, CDC said that it informed congressional members and interested staff members from that region (as well as their authorizing and appropriations committees) about its findings regarding the source of the disease and explained what CDC was doing about it. Another type of ad hoc information request was for assessments of an issue's potential threat. Congressional staff described several occasions when a negative incident—such as a disease outbreak—occurred that raised questions about how frequently such incidents occur, how well the public is protected against them, and whether a congressional or legislative response was warranted. Because of the highly specific nature of such requests, the staff said they were usually made by telephone to the agency's congressional liaison and responded to with a brief, informal consultation or a formal briefing. On one occasion, CDC officials testified at a congressional hearing, summarizing their research into antimicrobial-resistant diseases and how CDC's surveillance programs track and respond to the problem. In another example, in response to a proposed merger of two large private corporations, a staff member wanted to know what the new owner's obligations were to its holdover employees and how the merger would affect those employees' pension benefits. In addition, in order to ensure the protection of those employees' rights, the staff member wanted to know what enforcement options were available to the agency. The staff member indicated that PWBA officials provided this technical assessment and consultation in a timely manner.
As either the legislative or executive branch proposed changes to a program, congressional staff wanted projections of the effects of those proposed changes, not only whether (and how) the change would fix the problem identified but also whether it would have undesired side effects. As committee staff discussed proposals, they said they often asked agency officials for informal consultations. If hearings or other more formal deliberations were planned, some kind of formal document might be requested. When an agency proposed a regulation or amended regulation, the agency prepared a formal document for public comment that provided a justification for the change. For example, to reduce the cost of loans to student borrowers, a congressional committee considered reducing the interest rate. However, some lenders expressed concern that a rate reduction would cut into their profit margins, forcing some to drop out of the program. To assess the likelihood of this projected result, the committee staff turned to the estimates of lenders' profit margins produced by the Office of Management and Budget (OMB) and the Treasury Department. Similarly, as new provisions are implemented, congressional staff might have questions about whether the provisions are operating as planned and having the effects hoped for or the side effects feared. In December 1998, OSFA was designated a performance-based organization (PBO), given increased administrative flexibility, and charged with modernizing the Department's information systems and improving day-to-day operations. OSFA has provided authorizing and appropriations committee staff with regular reports on its Interim Performance Objectives (also available on its Internet site) that provide measures of efficiency in processing loan and loan consolidation applications and measures of borrower and institutional satisfaction. OSFA has also initiated cost accounting improvements to obtain better data on loans made, serviced, and collected under both the direct and guaranteed loan programs in order to provide baseline data against which to measure its progress in improving operational efficiency.

Information needs that congressional staff reported as unmet were similar in content to, but often more specific or detailed than, those that were met. The information needs that congressional staff described as having been met tended to be general, descriptive information about a program's activities and expenditures (such as information that might support the agency's budget request) or descriptive information about the agency's activities in response to a specific, often emerging, issue. This information was often provided in a formal report or presentation (such as a briefing or hearing). The information needs that congressional staff described as typically unmet were detailed information on the allocation of funds for activities, descriptive information about the program's strategies and the issues they addressed, and analyses showing the program's effects on its objectives. The key factors accounting for the gaps in meeting congressional information needs were the following: the presentations of information were not clear or sufficiently detailed; the information was not readily available to congressional staff; or the information was not available to the agency. In some cases, information on the topics was available or provided, but its presentation was not as useful as it could have been.
Congressional staff members noted that neither the budget submission nor the departmental strategic plan demonstrated the link between a CDC cancer screening program, the dollars appropriated for it in the budget, and how this program contributed to meeting the department's strategic objectives. A CDC official noted that, in combination, CDC's performance plan and budget submission did link the strategic objectives with the budget. The official explained that this was in part because CDC's budget is structured differently from its organization of centers and institutes. A CDC budget work group, formed in early 1999 in response to similar concerns, met with its congressional stakeholders and program partners and is developing a revised budget display that the group hopes will make this information more understandable in CDC's next budget submission. In another situation, congressional staff looked to the performance plan for a clear presentation of PWBA's regulatory strategy that showed how the agency planned to balance its various activities—litigation, enforcement, guidelines, regulations, assistance, and employee education—and how those activities would meet PWBA's strategic goals. The congressional staff wanted to know what PWBA's regulatory priorities were, as well as how PWBA expected the different activities to achieve its goals. However, the departmental plan did not provide a comprehensive picture of PWBA and described only isolated PWBA activities to the extent that they supported departmental goals. Some agency reports did not provide enough detail on issues of concern to the committee. Congressional staff members concerned about PWBA's enforcement efforts wanted detailed information on the patterns of violations to show how many were serious threats to plans and their financial assets, rather than paperwork filing problems. A PWBA official indicated that PWBA could disaggregate its data on violations to show the distribution of various types of violations, but that there would need to be some discussion with the committee staff about what constituted a "paperwork" rather than a "serious" violation. In another case, a congressional staff member was concerned that some patients were experiencing significant delays in obtaining cancer treatment after being screened under the National Breast and Cervical Cancer Early Detection Program. The program focuses on screening and diagnosis, while participating health agencies are to identify and secure other resources to obtain treatment for women in need. Staff wanted to see the distribution of the number of days between screening and the beginning of treatment, in addition to the median period, in order to assess how many women experienced significant delays. When this issue was raised in a hearing, CDC officials provided the median periods as well as surveillance data showing that 92 percent of the women diagnosed with breast cancer or invasive cervical cancer had initiated treatment. Some responses to congressional inquiries were not adequately tailored to meet congressional staff's concerns. For example, in preparing legislation, a congressional staff member needed very specific information immediately about the scope and authority of a program in order to assess whether a proposed legislative remedy was needed. However, he said he received documents containing general descriptive information on the issue instead, which he did not consider relevant to his question.
An agency official indicated that this response suggested that the congressional query may not have been specific enough, or that the responding agency official did not have the answer and hoped that those documents would satisfy the requester. In other cases, staff indicated that they obtained this type of information succinctly through a telephone call to the agency's congressional affairs office, which might direct them to a brief description of the program's authority, scope, and activities on the agency's Internet site or refer them to a knowledgeable agency official. One authorizing committee staff person noted that, although the committee staff assigned to an issue develops background on these programs over time, there is rapid turnover in Members' staff representatives to a committee. Moreover, because these staff are expected to cover a broad range of topics, she thought that they would find brief documents articulating the program's authority, scope, and major issues particularly useful to draw on as needed. Some congressional information needs were unmet because the information was not readily available, either because it was not requested or reported or because staff were not informed that it was available. In one instance, concerned about the safety of multiemployer pension plans, congressional staff wanted disaggregated data on the results of enforcement reviews for that type of plan. PWBA officials explained that the ERISA Annual Report to Congress does not highlight enforcement results for particular types of plans. However, they said that they could provide this information if congressional staff specifically requested it. In several cases, the agencies thought that they had made information available by placing a document on the agency's Internet site, but they had not informed all interested committee staff of the existence or specific location of those documents. For instance, an authorizing committee staff member had heard of long delays in PWBA's responses to requests for assistance and wanted to know how frequently these delays occurred. In its own agency performance and strategic plans, PWBA included performance measures of its response times to customers requesting assistance and interpretations. But because those measures were not adopted as part of the departmental performance plan and PWBA did not provide its own performance plan to the authorizing committee staff, this information was not available to those staff. Agency officials said that this information was available because they had posted their strategic plan on the agency's Internet site. However, the committee staff person did not know the document was on the site and thus was unaware that such a measure existed. In some instances, the desired information was not available to the agency. This was because special data collection was required, it was too early to get the information, the data were controlled by another agency, or some forms of information were difficult to obtain. Where congressional questions extend across program or agency boundaries, special studies, coordinated at the department level, might be required to obtain the answers. For example, to address a policy question about how well prenatal services were directed to pockets of need, congressional staff wanted a comparison of the geographic distribution of the incidence of low birth-weight babies with areas served by prenatal programs and with the availability of ultrasound testing.
HHS officials explained that although CDC and the National Center for Health Statistics had information on the regional incidence of low birth-weight babies through birth certificate data, these agencies did not have information on the availability of prenatal services. The Health Resources and Services Administration (another HHS agency), which is concerned with such services, does not have information on the location of all prenatal programs or the availability of ultrasound equipment to link with the birth certificate data on low birth weight. HHS officials indicated that, if this analysis were requested, the department would need to initiate a special study to collect data on the availability of services to match with existing vital statistics. Some congressional information needs extend beyond what a program collects as part of its operations and thus would require supplemental information or a special data collection effort to meet. For example, because a student's race is not collected as part of loan applications, the Department of Education supplements its own records on the use of different student finance options with periodic special studies of student borrowers that do collect racial information. Because the different student loan programs maintain their records in separate databases, the office relies on special studies, conducted every 3 years since school year 1986-1987, to examine the full package of financial options students and their families use to pay for postsecondary education. The congressional staff also wanted to obtain trend data on the extent to which all forms of student aid received (e.g., grants, loans, and tax credits) cover the cost of school attendance for low-income students. Education officials said that if published data from these special studies were not adequate, specialized data tabulations could be obtained. In the meantime, OSFA issued a 5-year performance plan in October 1999 that showed how it plans to improve the information systems for the student loan programs in order to improve operations and interconnectivity among the programs. As programs are revised, questions naturally arise about whether the new provisions are operating as planned and having the desired effects or unwanted side effects. Congressional staff identified several questions of this type for the student loan programs due to changes created by the 1998 reauthorization of the Higher Education Act and the separate enactment of a new tuition tax credit: How many students will select each of the new loan repayment options? Which students benefit more from the new tax credit, low- or middle-income? Will the need to verify a family's educational expenses create a new burden for schools' financial aid offices? In our discussions with OSFA, officials told us that they will report information on use of the new repayment options in their next annual budget submission, and that they believed the Internal Revenue Service (IRS) would include analyses of who used the tuition tax credit (similar to its analyses of other personal income tax credits) in its publication series, Statistics of Income. Because OSFA does not administer the tax credit, OSFA officials suggested to us that IRS would be responsible for estimates of any reporting burden for schools related to the tax credit. Lastly, some information was not available because it was difficult to obtain.
There has been congressional interest in whether a provision that cancels loan obligations for those who enter public school teaching or other public service leads more student borrowers to choose public service careers. Education officials said that a design for a special evaluation had been prepared, but that they had discovered that, because only a small number of student borrowers benefited from this provision, they were unable to obtain a statistically valid sample of these borrowers through national surveys. Determining the effectiveness of federally funded state and local projects in achieving federal goals can be challenging for federal agencies. A CDC official told us that CDC conducts many studies evaluating whether a specific health prevention or promotion practice is effective, but that it expects it will take a combination of such practices to produce populationwide health effects. However, it is much more difficult to measure the effects of a combination of practices, especially when such practices are carried out in the context of other state and local health initiatives, than to test the efficacy of one specific health practice at a time. In addition, measuring the effectiveness of health promotion and disease prevention programs related to chronic disease can be difficult in the short term, given the nature of chronic diseases.

Ensuring that congressional stakeholders obtain the information they want requires communication and planning—to understand the form and content of the desired information as well as what can feasibly be obtained, and to arrange to obtain the information. Our analysis uncovered a range of options that agency and congressional staffs could choose from—depending on the circumstances—to improve the usefulness of agency performance information to these congressional staffs. Improved communication might help increase congressional access to existing information, improve the quality and usefulness of existing reports, and plan for obtaining supplemental data in the future. Agency officials said that increased communication between agency and congressional staff could have prevented some of the unmet information needs because they believed that, if requested, they could have provided most of the information congressional staff said they wanted or arranged for the special analysis required. Increased two-way communication might also make clear what information is and is not available. Each agency has protocols for communication between congressional staff and agency officials, typically requiring the involvement of congressional liaison offices to ensure departmental review and coordination of policy. Agency congressional liaisons and other officials said that they answered some ad hoc inquiries directly or referred congressional staff to existing documents or program specialists. Congressional staff said that they were generally able to get responses to their formal and informal inquiries through these channels, but several noted that communication was often very formal and controlled in these settings. Some congressional staff and agency officials found that the informal discussions they had had were very helpful. In one case, agency officials were asked to discuss their program informally with appropriations committee staff; in another case, the incoming agency director scheduled a visit with a subcommittee chair and his staff to describe his plans and learn of their interests.
In our opinion, when key agency or committee staff changes occur, introductory briefings or discussions might help ensure continuity of understanding and open lines of communication that could help smooth the process of obtaining information on both a recurring and an ad hoc basis. Discussion of the most appropriate distribution options for different types of documents might help ensure that the information agencies make available is actually found. For example, authorizing committees might want to routinely receive agencies' annual budget justification documents, which contain detailed information on allocations of resources. Also, although the three agencies aimed to increase the volume of material that was publicly available by posting it on their Internet sites, the information was often not available to congressional staff unless they knew that it existed and where to look for it. For relatively brief and broadly applicable material, like CDC's summary of cost-effective health promotion practices, an agency may decide, as CDC did, to send copies to all congressional offices. Alternatively, to avoid overwhelming congressional staffs with publications, CDC officials sent e-Mail or fax alerts to contacts at relevant committees about newly released publications and other recent or upcoming events of potential interest. Our analysis of the types of information the congressional staffs said they wanted on a recurring basis suggests ways the agencies might improve the usefulness of their performance plans and other reports to these committees. In addition, increased communication about the specifics of congressional information needs might help ensure that those needs are understood and addressed. The congressional staff said that they wanted a clear depiction at the program level of the linkages between program resources, strategies, and the objectives they aim to achieve. Of our three case studies, congressional staff indicated that only the Education Department's performance plan provided adequate detail at the program level—the level that they were interested in. As we previously reported, most federal agencies' fiscal year 2000 plans do not consistently show how the program activity funding in their budget accounts would be allocated to agencies' performance goals. And, although most agencies attempted to relate strategies and program goals, few agencies indicated how the strategies would contribute to accomplishing the expected level of performance. One option would be for agencies to consider developing performance plans for their major bureaus or programs and incorporating this information in their department's plan. For example, the HHS Fiscal Year 2000 Performance Plan consisted of a departmentwide summary as well as the annual performance plans developed by its component agencies and submitted as part of the agencies' budget justifications. Alternatively, departments that prefer to submit a consolidated plan keyed to departmentwide goals could refer readers to where more specific data could be found in supplementary documents. OMB's Circular No. A-11 guidance asks agencies to develop a single plan covering an entire agency but notes that, for some agencies, the plan will describe performance on a macro scale by summarizing more detailed information available at different levels in the agency.
In these instances, OMB instructs agencies to have ready their more detailed plans specific to a program or component to respond to inquiries for more refined levels of performance information. The congressional staff also said that they wanted, on a recurring basis, data on the quantity, quality, and efficiency of a program’s activities; the characteristics of the population served; and indicators of a program’s progress in meeting its objectives. These categories are consistent with those identified in our 1995 report as the information Congress wants on a routine basis. (Appendix I contains the categories of information and the list of core questions that we proposed committees select from and adapt to meet their needs when requesting information.) Although all three agencies consulted with congressional committees on their strategic plans as required by GPRA, only one consulted with our congressional interviewees on the development of its performance plan and choice of indicators. As we previously reported, agency consultation with both authorizing and appropriations committees as performance measures are selected is likely to make the agencies’ performance plans more useful to those committees. The three agencies’ planned and ongoing efforts in data collection and analysis improvements may improve the quality and responsiveness of their reported information. However, without feedback from the congressional staffs on where presentations were unclear, or where additional detail or content is desired, the reports may still not meet congressional needs. Discussing information needs could also help identify which needs could be addressed in an annual or other recurring report and which could be addressed more feasibly through some other means. In addition to performance plans and reports, the congressional staff also described a need for readily accessible background information on individual programs’ authority, scope, and major issues. Committee staff noted that rapid turnover in Members’ staff representatives to a committee results in some of their colleagues needing a quick introduction to complex programs and their issues. Some of the program and agency descriptions on agency Internet sites were designed for the general public and were not detailed enough to meet the congressional staffs’ needs. To obtain new information about special subpopulations or emerging issues, congressional staff would have to make direct requests of the agency. Agency officials told us that they welcomed these requests and would do what they could to meet them. However, depending on the information requested and the time period in which a response is needed, it might not be possible for the agency to obtain it in time. Therefore, discussion between congressional staff and agency officials concerning the information needed is important to clarify what is desired and what is feasible to obtain, as well as to arrange for obtaining the information. In some cases, the agencies said that they were able to conduct special tabulations to obtain the desired information. In other cases, they said that more data collection or analysis efforts might be required and that they would need some initial planning to determine how much time and resources it would take to obtain the requested information. Because it can be costly to obtain some information, advance agreement on the information content and format might avoid some frustration on both sides by clarifying expectations. 
In a couple of cases, when congressional staff members learned that the information was not readily available and would be costly to obtain, they were satisfied to accept a less precise or less detailed response. Where congressional staff expect certain information will be important in future congressional considerations, advance planning for its collection would help ensure its availability in the desired format when it is needed. In some cases, agencies may be able to alter their information systems to track some new provision; in others, they may have to plan new data collection efforts. As stated in our 1995 report, communication is critical at two points in obtaining special studies: when a Committee frames a request for information, to ensure that the agency understands what is wanted and thus can alert the Committee to issues of content or feasibility that need resolution; and as report drafting begins, to assist the agency in understanding the issues that will be before the Committee and what kind of presentation format is thus likely to be most useful. The Departments of Health and Human Services and Labor provided written comments on a draft of this report, which are reprinted in appendixes II and III. Both HHS and Labor stated that, in general, the report is balanced and contains useful ideas for improving communications between federal agencies and congressional committees. HHS also expressed two concerns. One concern was that the report suggested that the Department did not provide performance information at the program level. It said its component agencies provided this information in their own performance plans, which are presented as part of their congressional budget justifications. We have changed the text to clarify that the HHS Fiscal Year 2000 Performance Plan consisted of a departmentwide summary as well as the performance plans submitted as part of its component agencies’ congressional budget justifications. However, because we understand that these budget justifications were not widely distributed beyond the appropriations committees, we remain concerned that this performance information was not made readily available to authorizing committee staff. HHS’ other concern was that the opening paragraphs of the report implied that it would emphasize GPRA as the primary medium for disseminating agency performance information although, it noted, the scope of the report is appropriately much broader. The Committee’s expectations for and concerns about agencies’ performance plans prepared under GPRA were the impetus for this report. However, the Committee also recognized that these plans and reports are only one mechanism to provide performance information to Congress and thus broadened the focus of our work. Officials at the Department of Education suggested no changes and said that they appreciated recognition of their efforts to work collaboratively with Congress and provide good management for the department’s programs. OMB, HHS, and PWBA provided technical comments that we incorporated where appropriate throughout the text. To explore how agencies might improve the usefulness of the performance information they provide Congress, we conducted case studies of the extent to which the relevant authorizing and appropriations committee staffs obtained the information they wanted about three program areas. 
These cases were selected in consultation with the requesting committee's staff to represent programs whose performance information they felt could be improved and to represent a range of program structures and departments under the Committee's jurisdiction. For example, one selection (pension oversight) is a regulatory program in the Department of Labor; the other two (student loans and health surveillance) represent service programs in the Departments of Education and Health and Human Services. Pension oversight represents the direct operations of a federal agency, while the other cases operate through state and local agencies or the private sector. Each case represents a program or cluster of programs administered by an agency within these departments. To identify congressional information needs and the extent to which they were met, we interviewed staff members recommended by the minority and majority staff directors of the authorizing and appropriations committees for the selected agencies. We asked the staffs to identify what information they needed to address the key policy questions or decisions they faced over the preceding 2 years, and whether their information needs were met. To identify the reasons for the information gaps and how in practice the agencies might better meet those congressional information needs, we interviewed both agency officials and congressional staff; reviewed agency materials; and drew upon our experience with various data collection, analysis, and reporting strategies. We are sending copies of this report to Senator Edward Kennedy, Ranking Minority Member of your committee; Senator Ted Stevens, Chairman, and Senator Robert Byrd, Ranking Minority Member, Senate Committee on Appropriations; Representative William Goodling, Chairman, and Representative William Clay, Ranking Minority Member, House Committee on Education and the Workforce; Representative Tom Bliley, Chairman, and Representative John Dingell, Ranking Minority Member, House Committee on Commerce; and Representative Bill Young, Chairman, and Representative David Obey, Ranking Minority Member, House Committee on Appropriations. We are also sending copies of this report to the Honorable Alexis Herman, Secretary of Labor; the Honorable Donna Shalala, Secretary of Health and Human Services; the Honorable Richard Riley, Secretary of Education; and the Honorable Jacob Lew, Director, Office of Management and Budget. We will also make copies available to others on request. If you have any questions concerning this report, please call me or Stephanie Shipman at (202) 512-7997. Another major contributor to this report was Elaine Vaurio, Project Manager. Overall, what activities are conducted? By whom? How extensive and costly are the activities, and whom do they reach? If conditions, activities, and purposes are not uniform throughout the program, in what significant respects do they vary across program components, providers, or subgroups of clients? What progress has been made in implementing new provisions? Have feasibility or management problems become evident? If activities and products are expected to conform to professional standards or to program specifications, have they done so? Have program activities or products focused on appropriate issues or problems? To what extent have they reached the appropriate people or organizations? Do current targeting practices leave significant needs unmet (problems not addressed, clients not reached)?
Overall, has the program led to improvements consistent with its purpose? If impact has not been uniform, how has it varied across program components, approaches, providers, or client subgroups? Are there components or providers that consistently have failed to show an impact? Have program activities had important positive or negative side effects, either for program participants or outside the program? Is this program's strategy more effective in relation to its costs than others that serve the same purpose?

Performance Budgeting: Fiscal Year 2000 Progress in Linking Plans With Budgets (GAO/AIMD-99-239R, July 30, 1999).
Performance Plans: Selected Approaches for Verification and Validation of Agency Performance Information (GAO/GGD-99-139, July 30, 1999).
Managing for Results: Opportunities for Continued Improvements in Agencies' Performance Plans (GAO/GGD/AIMD-99-215, July 20, 1999).
Regulatory Accounting: Analysis of OMB's Reports on the Costs and Benefits of Federal Regulation (GAO/GGD-99-59, Apr. 20, 1999).
Performance Budgeting: Initial Experiences Under the Results Act in Linking Plans With Budgets (GAO/AIMD-99-67, Apr. 12, 1999).
Emerging Infectious Diseases: Consensus on Needed Laboratory Capacity Could Strengthen Surveillance (GAO/HEHS-99-26, Feb. 5, 1999).
Managing for Results: Measuring Program Results That Are Under Limited Federal Control (GAO/GGD-99-16, Dec. 11, 1998).
Pension Benefit Guaranty Corporation: Financial Condition Improving, but Long-Term Risks Remain (GAO/HEHS-99-5, Oct. 16, 1998).
Managing for Results: An Agenda to Improve the Usefulness of Agencies' Annual Performance Plans (GAO/GGD/AIMD-98-228, Sept. 8, 1998).
Student Loans: Characteristics of Students and Default Rates at Historically Black Colleges and Universities (GAO/HEHS-98-90, Apr. 9, 1998).
Credit Reform: Greater Effort Needed to Overcome Persistent Cost Estimation Problems (GAO/AIMD-98-14, Mar. 30, 1998).
Managing for Results: Critical Issues for Improving Federal Agencies' Strategic Plans (GAO/GGD-97-180, Sept. 16, 1997).
Direct Student Loans: Analyses of the Income Contingent Repayment Option (GAO/HEHS-97-155, Aug. 21, 1997).
Student Financial Aid Information: Systems Architecture Needed to Improve Programs' Efficiency (GAO/AIMD-97-122, July 29, 1997).
Managing for Results: Analytic Challenges in Measuring Performance (GAO/HEHS/GGD-97-138, May 30, 1997).
High-Risk Series: Student Financial Aid (GAO/HR-97-11, Feb. 1997).
Executive Guide: Effectively Implementing the Government Performance and Results Act (GAO/GGD-96-118, June 1996).
Program Evaluation: Improving the Flow of Information to the Congress (GAO/PEMD-95-1, Jan. 30, 1995).
A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed three agencies' annual performance plans to determine whether the plans met congressional requirements, focusing on: (1) which aspects of congressional information needs were met by the agency's annual performance plan or some other source; (2) where those needs were not met, what accounted for the discrepancies or gaps in the information provided; and (3) what options agencies could use to practically and efficiently provide the desired performance information. GAO noted that: (1) the congressional staff GAO interviewed identified a great diversity of information they would like to have to address key questions about program performance; (2) the agencies GAO studied met some, but not all, of these recurring and ad hoc congressional information needs through both formal and informal means; (3) the congressional staffs were looking for recurring information on spending priorities within programs; the quality, quantity, and efficiency of program operations; the populations served or regulated; and the program's progress in meeting its objectives; (4) some of these recurring needs were met through formal agency documents, such as annual budget request justification materials, annual performance plans, or other recurring reports; (5) other congressional information needs were ad hoc, requiring more detailed information or analysis as issues arose for congressional consideration; (6) information needs that the congressional staffs reported as unmet were similar in content to, but often more specific or detailed than, those that were met; (7) several factors accounted for the gaps in meeting congressional information needs; (8) some information the agencies provided did not fully meet the congressional staffs' needs because the presentation was not clear, directly relevant, or sufficiently detailed; (9) other information was not readily available to the congressional staffs; (10) in some cases, the agencies said they did not have the information because it was either too soon or too difficult to obtain it; (11) improved communication between congressional staff and agency officials might help ensure that congressional information needs are understood and that arrangements are made to meet them; (12) greater consultation on how best to distribute agency documents might improve congressional access to existing reports; (13) posting publications on Internet sites can increase congressional staffs' access to agency information without their having to specifically request it, but staff still need to learn that the information exists and where to look for it; and (14) agencies' annual Government Performance and Results Act performance plans and other reports might be more useful to congressional committees if they addressed the issues congressional staff said they wanted addressed on a recurring basis, and if agency staff consulted with the committees on their choice of performance measures.
After 15 years of negotiations to join the WTO, on December 11, 2001, China bound itself to open and liberalize its economy and offer a more predictable environment for trade and foreign investment in accordance with WTO rules. U.S. investment and trade with China are of substantial interest to U.S. companies and have increased during the past 10 years. Our 2002 survey of U.S. company views revealed that companies expected a positive impact from China's implementation of its WTO commitments but also anticipated difficulties during implementation. The results of China's negotiations to join the WTO are described and documented in China's final accession agreement, the Protocol on the Accession of the People's Republic of China, which includes the accompanying Report of the Working Party on the Accession of China, the consolidated market access schedules for goods and services, and other annexes. China's WTO commitments are complex and broad in scope. Some commitments related to reforming China's trade regime require a specific action from China, such as reporting particular information to the WTO, while others are more general in nature, such as those that affirm China's adherence to WTO principles. The accession agreement includes market access commitments regarding goods and services. These include commitments that will reduce tariffs on products, as well as commitments to reduce or eliminate many other trade barriers such as quotas or licensing requirements on some of these products. Further, China made commitments to allow greater market access in 9 of 12 general service sectors. In the banking sector, for example, China has agreed to reduce licensing requirements and has removed restrictions on foreign currency services. To improve its trade regime, China has generally agreed to make numerous rule of law-related reforms such as publishing and translating trade-related laws and regulations and applying them uniformly at all levels of government and throughout China. China committed to adhere to internationally accepted norms to protect intellectual property rights and to enforce relevant laws and regulations related to patents, trademarks, and copyrights. Moreover, China made a substantial number of other rule of law-related commitments regarding transparency of law, judicial review, and nondiscriminatory treatment of businesses. In the past 10 years, U.S. investment and trade with China have increased significantly. At the end of 2002, U.S. companies had total direct investments of $10.3 billion in China, largely in the manufacturing sector. This amount represents more than 10 times the approximately $900 million invested a decade earlier, in 1993. In addition, U.S. exports of goods and services to China grew at an average annual rate of 12 percent since 1993, totaling $27 billion in 2002, according to the Department of Commerce (the compound-growth arithmetic behind such figures is sketched below). While the United States holds a large bilateral trade deficit with China, it has a bilateral goods surplus in areas such as transportation equipment and agricultural products. Appendix III provides additional details regarding U.S. investment and trade with China. In 2002, we conducted a study of U.S. companies' views about the importance of, the anticipated effects of, and the prospects for China's implementing its WTO commitments. Our analysis of responses from 191 of 551 surveyed companies revealed that most of China's WTO commitments were important to the companies, with rule of law-related reforms the most important.
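As an aside on the growth figures just cited, an "average annual rate" is a compound rate, so modest yearly percentages produce large decade-scale multiples. The short Python sketch below illustrates the arithmetic using the endpoint values quoted above; the function is the standard compound-growth formula, offered as an illustration rather than a reconstruction of the Department of Commerce's methodology.

    def avg_annual_growth(start_value, end_value, years):
        """Compound average annual growth rate between two endpoint values."""
        return (end_value / start_value) ** (1.0 / years) - 1.0

    # U.S. direct investment stock in China, in billions of dollars:
    # roughly $0.9 (1993) to $10.3 (2002), a span of 9 years.
    print(f"investment stock growth: {avg_annual_growth(0.9, 10.3, 9):.1%} per year")

    # Exports growing at 12 percent per year compound to about 2.8x over 9 years.
    print(f"nine years at 12 percent: {1.12 ** 9:.2f}x")

Run as written, the sketch shows why a stock can grow "more than 10 times" over a decade at roughly 31 percent per year, while the 12 percent export growth rate implies nearly a tripling.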
In that survey, at least three quarters of the respondents selected intellectual property rights; consistent application of laws, regulations, and practices; and transparency of laws, regulations, and practices as the most important commitment areas for their companies. Other than those related to rule of law, respondents most frequently selected trading rights; tariffs, fees, and charges; and scope of business restrictions as important commitments. We also found that most companies expected that China's implementation of its WTO commitments would have a positive impact on their business operations, although many anticipated impediments to implementation of China's WTO reforms. More than three quarters of the companies reported that they expected China's implementation of its WTO commitments would lead to an increase in their companies' activities in China, including their export volume to China, market share in China, and distribution of products there. However, many respondents also expected that many WTO commitments, particularly rule of law-related commitment areas such as consistent application of laws, regulations, and practices and intellectual property rights, would be difficult for Chinese officials to implement. (See table 1 for 2002 survey respondents' views on the expected level of difficulty of China's implementation of the commitment areas that were most important to them.) Overall, in 2003, respondents thought that China had implemented most of the 26 specific WTO commitment areas to at least some extent when asked to characterize China's reform efforts along a four-point scale ranging from no extent to great extent. Responses were mixed when company representatives assessed the commitment areas that we found to be of greatest importance to their businesses. In addition, the importance placed on specific commitment areas differed among respondents of the four industry groups (agriculture, banking, machinery, and pharmaceuticals). It is also important to note that many respondents reported they had no basis to judge the extent to which China had made reforms related to some WTO commitment areas, for reasons that varied depending on each company's experience and operations in China. Respondents' assessments of each area varied widely, but they generally reported low and moderate ratings of China's implementation (one way such ratings can be summarized is sketched after this paragraph). See figure 1 for respondents' views on the extent of China's implementation of the 26 commitment areas, excluding those with no basis to judge. Many respondents had no basis to judge the extent of China's WTO reforms in certain commitment areas. This indicates that few companies have an in-depth knowledge of Chinese reforms across all 26 areas, as discussed in further detail later. Consequently, the number of company representatives evaluating each individual commitment area varied from 14 to 67. On average, respondents assigned lower marks when assessing the implementation of 19 of the 26 listed commitment areas. For example, company representatives thought that China had made reforms to only some or little extent when assessing China's trading rights reforms (right to import or export products) and price controls. Company representatives said they eagerly awaited the implementation of China's trading rights commitments in late 2004.
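Neither the coding of the four-point extent scale nor the weighting behind the importance ranking in table 2 is spelled out in the text, so the following Python sketch rests on stated assumptions: extent responses are coded 1 ("no extent") through 4 ("great extent") with "no basis to judge" excluded, and first, second, and third most-important choices receive weights of 3, 2, and 1. The data values are invented placeholders, not survey results.

    from collections import defaultdict

    # Extent ratings per commitment area; None marks "no basis to judge".
    extent_responses = {
        "tariffs, fees, & charges": [4, 3, 3, None, 3, 2],
        "trading rights":           [2, None, 2, 1, 2, None],
    }

    def mean_extent(ratings):
        """Average the 1-4 extent codes, ignoring 'no basis to judge' responses."""
        rated = [r for r in ratings if r is not None]
        return sum(rated) / len(rated) if rated else None

    # Assumed weights for first/second/third most-important selections.
    RANK_WEIGHTS = {1: 3, 2: 2, 3: 1}

    def importance_scores(selections):
        """selections: (area, rank) pairs; returns areas ordered by weighted score."""
        scores = defaultdict(int)
        for area, rank in selections:
            scores[area] += RANK_WEIGHTS[rank]
        return sorted(scores.items(), key=lambda item: item[1], reverse=True)

    if __name__ == "__main__":
        for area, ratings in extent_responses.items():
            print(f"{area}: mean extent {mean_extent(ratings):.2f}")
        picks = [("intellectual property rights", 1),
                 ("tariffs, fees, & charges", 2),
                 ("intellectual property rights", 3)]
        print(importance_scores(picks))

Note how the denominator shrinks as "no basis to judge" responses are excluded, which is why the number of evaluators per area could range as widely as the 14 to 67 reported above.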
Several company representatives noted that implementation of the trading rights commitments would provide more control over their business relationships in China and reduce or eliminate the need to rely on third parties such as distributors and trading companies. Although China had agreed to stop using price controls to restrict the level of imports, one company representative derided China's price control reforms, and others noted their concern regarding the Chinese government's continuing control of prices on specific products. Respondents on average assigned higher marks to the remaining 7 commitment areas. For example, respondents thought that China had made reforms to a moderate extent when assessing China's reforms to tariffs, fees, and charges; requirements stipulating a minimum amount of production that must be exported; and restrictions on partnerships and joint ventures. Several respondents described China's efforts in lowering tariffs, fees, and charges, including one who noted that China had reduced tariffs for an agricultural product from 35 to 15 percent since joining the WTO. Company representatives also discussed China's allowance of greater market access for services; some stated that WTO membership has allowed companies to provide after-sales service, and one said that "China is doing a good job" in addressing this area. Respondents' assessments of the most important commitment areas provide further detail regarding companies' views on China's progress. See table 2 for the ranking of the commitment areas by importance to respondents, which we calculated using weighted responses. More than half of respondents reported that China had made reforms to a moderate or great extent when asked to assess China's overall progress in implementing reforms that were important to their companies. However, when asked to assess China's implementation of specific commitment areas, responses for four of the five most important areas fell in the "some or little extent" category. Company representatives from the four industries assigned varied levels of importance to specific commitment areas. The five specific commitment areas ranked as most important to respondents overall were (1) standards, certifications, registration, and testing requirements; (2) customs procedures and inspection practices; (3) intellectual property rights; (4) tariffs, fees, and charges; and (5) consistent application of laws, regulations, and practices. Among these five areas, tariffs, fees, and charges received higher marks, with respondents reporting on average that China had made reforms to a moderate extent. Respondents noted that it was relatively easy to assess China's implementation of tariffs, fees, and charges because China had set time schedules for tariff reductions on various products. Many respondents told us that China's efforts to achieve tariff reductions were on schedule, allowing companies to pay lower tariffs on imported products. Respondents provided lower ratings on average for the four other important commitment areas and indicated that China had only implemented these reforms to some or little extent. In the area of standards, certifications, registration, and testing requirements, for example, some respondents discussed continuing requirements such as product registrations that require approval from multiple Chinese government agencies, delays and bureaucratic bottlenecks in processing product registrations, and the use of product standards to protect Chinese agricultural producers.
When evaluating China’s reforms made in customs procedures and inspection practices, company representatives discussed the “hassles” and inefficiencies created by an unpredictable and slow customs system characterized by inconsistent application of standards and duties. In addition, numerous company representatives discussed the limitations of China’s efforts to address intellectual property rights. Respondents cited specific experiences with generic copies of pharmaceutical products, products illegally copied to look like those of U.S. companies, and false labeling of Chinese products. Some respondents even commented on the Chinese government’s inadequate enforcement of intellectual property rights. Furthermore, some respondents noted inconsistency in China’s application of laws, regulations, and practices within and among national, provincial, and local levels of government. For example, one banking representative said that different local governments each have different explanations of China’s laws and regulations. This issue illustrates a larger rule of law-related problem discussed by company representatives: the Chinese national government’s commitment to WTO implementation did not always coincide with local governments’ interpretation and implementation of China’s commitments. Respondents among the four selected industries (agriculture, banking, machinery, and pharmaceuticals) had different views on the commitment areas most important to their companies. For example, representatives of agricultural companies and organizations noted the significance of quota reductions while representatives of banking firms emphasized commitments related to market access for services and foreign exchange restrictions. Moreover, machinery company representatives identified customs procedures and inspection practices as an important area for their transport of goods. For pharmaceutical companies, intellectual property and trading rights stood out as among the most important commitments. Table 3 shows respondents’ views on the most important commitment areas by industry. The relative importance that respondents from the four industries assigned to each of the 26 commitment areas reflected the nature of their businesses. Company representatives also described the importance of these commitment areas in terms of their experiences with China’s reform efforts. First, for example, agricultural companies identified the tariff-rate quota system as well as China’s application of sanitary and phytosanitary measures and inspection requirements as important. Similarly, other agriculture respondents emphasized the importance of tariffs and one company representative noted that some agricultural tariffs applicable to his company had declined as much as 40 percent. A representative of an agricultural organization also noted that although China had increased trading rights, continued quota restrictions undermined this effort. Second, key issues for banking firms (a service industry) included China’s market access commitments to fully open the industry to foreign banks 5 years after China’s accession to the WTO. Banking industry representatives also identified scope of business restrictions, which can limit the types of services offered to clients, as important. However, company representatives also told us that market access obstacles, such as branch licensing that limits the ability of foreign banks to offer additional products and to expand geographically, continue to exist. 
Next, machinery companies identified the importance of China’s tariff rates and product certification system that sometimes involves on-site inspection of manufacturing facilities outside of China. Some machinery company representatives discussed the importance of timely product certification at the ports, the importance of an efficient product registration process for new products imported into China, and the need for testing procedures at customs that allow products to enter the country without damage caused by product testing. Finally, representatives from pharmaceutical companies identified protection of intellectual property rights as important and said that they continue to face challenges in this area. Specifically, several pharmaceutical company representatives discussed the continued need for patent protection to prevent counterfeiting of drugs sold at a fraction of the price charged for the genuine product. A few representatives of pharmaceutical firms noted that the Chinese government had allowed counterfeit generic drugs to be sold and believed that China displayed discrimination favoring Chinese products rather than complying with the principle of national treatment, under which imported foreign products and services are treated no less favorably than domestic products or services. As described by one company representative, although protection of intellectual property rights is getting better, the situation is still bad. Another respondent said simply that “piracy is everywhere” in China. Another notable finding of our questionnaire is that many respondents were unable to assess certain commitment areas listed in our questionnaire. Company representatives provided a number of explanations for their limited ability to evaluate China’s progress in implementing specific WTO commitment areas. Specifically, for 13 of the 26 specific commitment areas we asked about, more than half of the respondents said they had no basis to judge the extent to which China had made reforms in these commitment areas. Most notably, for four commitment areas, at least three quarters of the respondents selected “no basis to judge” when asked to assess the extent to which China had actually made reforms in these commitment areas. These areas included export restrictions, such as eliminating taxes and charges on exports; China’s application of safeguards against U.S. exports, which includes antidumping measures and other legal actions against import surges; local content requirements; and government requirements stipulating a minimum amount of production that must be exported. See figure 2 for the number of respondents who indicated they had no basis to judge China’s reforms and those who assessed China’s implementation of its WTO commitment areas. The reasons for a “no basis to judge” response could result from any number of factors, including the irrelevance of specific commitment areas to particular companies, lack of experience with commitment areas, and lack of knowledge about China’s WTO commitments. Some company representatives told us that they could not assess commitment areas that simply did not apply to their companies. For example, representatives of machinery companies had no basis to judge “scope of business restrictions for services” because they did not provide services. 
Other respondents stated that their companies did not have experience with particular commitment areas, such as one respondent’s inability to comment on “independence of judicial bodies” because the respondent’s company had not accessed the Chinese judicial system. Moreover, some respondents noted that they did not have sufficient awareness and understanding of the exact terms of China’s WTO commitments and/or did not actively track specific Chinese implementation efforts. Several respondents told us that they often could not distinguish between China’s broad economic reforms and its actions taken to implement specific WTO commitments. Other respondents said that the WTO did not apply to their company’s business model, did not really matter to their business, or did not have relevance to current market conditions that affected their business. Most respondents reported that China’s implementation of its WTO commitments had had a positive impact on their companies, even though some company representatives indicated that China’s reform efforts would continue to present challenges for their company operations in China. For example, one respondent noted the success of his company’s overall operations in China but stated that implementation of China’s WTO commitments remained slow and problematic. Another company representative noted that although actual changes are happening very slowly, the overall pressure to reform is having a positive effect. Companies also provided information on whether various business activities had increased, stayed about the same, or decreased since China joined the WTO in December 2001. The majority of respondents reported that most of the 13 business activities such as revenue stream and volume of production in China had increased. Company representatives described a broad range of increased company activities including new lines of business and new products, expansion of existing business to meet growing demand, and the opening of new branch offices and factories. Some respondents discussed the broad range of factors that influence company business activities. Respondents cited other factors, such as the business environment in China and general market and economic conditions, as more direct influences on company activities than China’s WTO membership. Overall, company representatives reported a generally positive impact from implementation. More than two thirds of the 80 company representatives responding to this question reported that China’s implementation of its WTO commitments had had a positive impact on their companies, as shown in figure 3. Some respondents noted that China’s accession to the WTO had increased business opportunities for their companies through changes such as decreased tariffs and increased transparency of laws and regulations. Respondents also noted that the lower tariffs helped to improve business in China and had had an immediate impact on their bottom line because of reduced costs, ultimately helping their companies increase profits. One company representative told us that the prevalence of government officials with a pro-business attitude and the ability to speak English proficiently had contributed to the positive impact on company operations in China. Responses regarding the impact of China’s WTO implementation on their companies varied when analyzed by company size and industry. 
First, when analyzed by company size, a majority of representatives of small- and medium-sized enterprises reported little, no, or a negative impact from WTO implementation. Large company representatives responded more positively, with nearly three quarters selecting either "very positive" or "generally positive" when asked what impact China's WTO implementation had on their company's ability to do business in China. Second, when analyzed by industry, a majority of respondents in three of the four industries reported a positive impact on company operations; agriculture was the exception, with most representatives of agricultural companies reporting little, no, or a negative impact from China's implementation efforts. Agriculture respondents discussed negative consequences resulting from WTO-inspired testing requirements that ultimately resulted in the rejection of U.S. shipments to China. In contrast to agriculture, almost all of the banking industry respondents reported either a very positive or generally positive impact on their companies' ability to do business in China. One representative from a banking company stated that his company has a positive view of market development in China: the rules seem much clearer for banks, and there is an increased sense of assurance that the company can be successful as a result of China's WTO implementation. A majority of the machinery and pharmaceutical company respondents also reported more positive than negative responses regarding the impact of China's WTO implementation on their companies. A number of company representatives reported a positive outlook for their future in China when asked about the likely impact that China's WTO implementation would have in 2 years' time, but they also noted the challenges they expect to continue. Some respondents said they expected the overall business environment in China would improve significantly. Others specifically discussed WTO commitments that would have an impact on their ability to do business in China. For example, some respondents stated that additional tariff cuts would reduce product costs and result in increased profits. However, other respondents discussed obstacles hindering reform efforts. One company representative noted that China's regulatory reforms may be fine on paper but speculated that actual implementation could invalidate the intent of the reforms. Some company representatives noted that different interpretations of laws and regulations as well as varied approaches to implementation between provinces and levels of government create challenges for foreign companies in China. Several company representatives discussed ongoing delays to business operations resulting from Chinese requirements for product registration and testing. Generally, company representatives said that progress is continuing, that their parent companies would continue investing in business operations in China, and that they expected the overall business climate to improve, but reforms could take time. Overall, our questionnaire respondents reported that their company activities have increased since China joined the WTO. Respondents indicated whether their companies' business activities in 13 areas had increased, stayed about the same, or decreased since China joined the WTO in December 2001. Specifically, at least 70 percent of respondents reported that their companies' business activities had increased for 9 of the 13 listed activities, as shown in figure 4 (the tallying behind such figures is sketched below).
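A minimal Python sketch of how per-activity tallies of this kind can be computed; the counts below are invented placeholders rather than the survey's actual responses.

    from collections import Counter

    # Hypothetical responses per activity: "increased", "same", or "decreased";
    # nonrespondents are simply omitted from each list.
    activity_responses = {
        "value of exports to China": ["increased"] * 58 + ["same"] * 14 + ["decreased"] * 4,
        "number of U.S. employees":  ["increased"] * 12 + ["same"] * 40 + ["decreased"] * 24,
    }

    for activity, answers in activity_responses.items():
        tally = Counter(answers)
        share_increased = 100 * tally["increased"] / len(answers)
        print(f"{activity}: {share_increased:.0f}% reported an increase "
              f"({tally['increased']} of {len(answers)})")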
Some company representatives told us, for example, that China's WTO membership helps to attract foreign investment, which in turn helps their businesses. In contrast, most respondents reported that the other four activities had stayed the same or decreased. Activities that had stayed the same or decreased included the number and value of their ventures with Chinese partners. None of the respondents to our questionnaire reported a decrease in the number of products distributed in China, the scope of product distribution in China, or the number of services provided in China. But almost one third of respondents indicated that the number of company employees in the United States had decreased, while about one sixth reported an increase. Respondents told us that the number of employees in the United States depends on factors other than China's WTO accession, such as current economic conditions, corporate restructuring, changes in the company's industry, or a change in company strategy. Some respondents also discussed the difficulty of identifying a link between other company activities and China's WTO membership. Company representatives cited a number of possible influences on changes in company activity levels, such as China's ongoing economic reforms, an improving business environment in China, and market development opportunities in China. Our analysis of U.S. companies' responses to our questionnaire provides findings and lessons that have important implications for policymakers who rely on private sector input to judge China's progress in opening its market. As noted in our March 2003 report, the private sector plays an important role in monitoring and enforcement activities. Our results indicate areas where China has made progress in carrying out WTO-related reforms and areas that might need more attention. Our results also show that despite the problems U.S. companies are facing in China's implementation of specific commitment areas, more than two thirds of respondents indicated that China's WTO implementation had a positive impact on their companies' ability to do business in China. Our work also provides a number of lessons regarding the use of private sector input that could help shape best practices for U.S. government efforts to monitor and enforce China's compliance with its commitments. First, because company experiences and assessments varied, both overall and among companies in the same industry, policymakers are well advised to seek input from a number of companies with interests in an area of concern, not just a few. Doing so increases the representativeness of the information gathered for monitoring purposes, because views are often company-specific and one company in an industry cannot be assumed to speak for all. Second, we found that the number of company representatives who report they have a basis to judge China's implementation of specific WTO commitment areas varies greatly. Because relatively few individual companies believed they had a basis to judge all 26 commitment areas, broad input from a wide range of companies helps ensure that monitoring is authoritative and complete. Furthermore, in some cases, such as Chinese export restrictions, application of safeguards, and subsidies, very few U.S. companies reported they had a basis to judge implementation. This observation raises the question of whether U.S. government officials can rely on private sector input to identify the full range of China's compliance problems.
Instead, for some commitment areas, alternative strategies, such as reaching out to specific companies or relying on economic or legal information to identify problems, may be needed to monitor China's implementation. Finally, we report that respondents cited a number of factors that influence company activities in addition to China's efforts to implement specific WTO commitments. These results reaffirm the importance of ongoing private sector education about China's WTO obligations and the market access opportunities that the private sector should expect. They also indicate that any monitoring strategy benefits from collecting and reviewing information about what companies may consider solely "commercial problems" but that may actually involve WTO-related issues, where the U.S. government can clearly take action. Nevertheless, knowledge of U.S. company views remains fundamental for policymakers to judge the degree to which the benefits of China's WTO membership are being realized. We will consider the implications of this work as we conduct our current review of U.S. government monitoring and enforcement activities. We are sending copies of this report to interested congressional committees. We will make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4128. Other GAO contacts and staff acknowledgments are listed in appendix IV.

The Chairman and the Ranking Minority Member of the Senate Finance Committee and the Chairman and the Ranking Minority Member of the House Committee on Ways and Means asked us to undertake a long-term body of work relating to China's membership in the World Trade Organization (WTO). This work began in 2001 and includes examining, through annual surveys, the experience of U.S. firms doing business in China. Our objectives for this report were to assess the views and experiences of selected U.S. companies with a presence in China regarding (1) the extent to which China has implemented its WTO commitments in key industries and (2) the impact of China's implementation of its WTO commitments on these U.S. companies' business operations. To respond to our objectives, we collected the views of 82 U.S. companies and 11 representatives of U.S. agricultural associations with offices in China. To answer our two objectives, we gathered company views primarily via in-person interviews in China; we also conducted some interviews by telephone in instances when it was logistically impossible to schedule in-person meetings. We used this approach because the work we conducted for our 2002 report on U.S. company views indicated that this method would yield better response rates than mail or Web surveys and would allow us to contact the corporate representatives who were most knowledgeable about WTO implementation issues. We selected the participants from a commercial database listing U.S. companies that were identified as being in China as of 2003. We purchased the database, Foreign Companies in China 2003, from Commercial Intelligence Service, a division of Business Monitor International. Our research indicated that this database best met our need for identifying U.S.-nationality companies and their respective contact information in China by industry. However, the database likely does not include all U.S.
companies in China, because foreign investors are a constantly changing population, because some companies in China may not wish to publicize their presence, and because it is not always clear who is the ultimate parent of corporate subsidiaries. We selected industries that had encountered implementation issues, that had key commitments scheduled for implementation during China's first year of WTO membership, and that had concentrations of U.S. foreign investment in China. The four industries were agriculture, banking, machinery, and pharmaceuticals. Although fewer U.S. agriculture-related companies have a physical presence in China than in the other three selected industries, agriculture emerged as a key implementation issue in 2002, and its importance continued in 2003. In addition to interviewing the agricultural firms listed in the purchased database, we also conducted structured interviews with a judgmental selection of 11 representatives of nonprofit agricultural associations in China. The representatives of these associations promote U.S. agricultural exports to China for various commodities. We interviewed them to gain a more complete understanding of U.S. agricultural interests in China and do not generalize their responses to the full universe of nonprofit agricultural associations. Funding for representative offices of these associations in China is provided in part by the U.S. government through the Department of Agriculture's Foreign Market Development Cooperator Program. We administered the same questionnaire we used for the private sector firms, with slight modifications to acknowledge the nature of these cooperators as nonprofit associations rather than companies. Data for these associations are presented separately from company representative responses in the report. Next, the banking industry provided an opportunity to explore the experiences of firms that provide services in China. Banking issues also emerged as a concern for U.S. government officials during the first year of China's WTO membership. Third, machinery is an industry with representation from a broad range of U.S. companies. Finally, for the pharmaceutical industry, numerous commitments were scheduled for implementation during China's first 2 years of WTO membership. In addition, this industry provided an opportunity for us to explore the experience of companies with an interest in intellectual property rights, a key issue during the first year of China's WTO membership. Companies from the four selected industries were identified in the aforementioned electronic database, Foreign Companies in China 2003, purchased from the Commercial Intelligence Service. The database contained a total of 243 contacts for companies in the four selected industries. We reviewed the list of contacts in order to judgmentally identify primary, secondary, and tertiary contacts for each company. In addition, we confirmed U.S. incorporation for each company, leaving a total of 149 companies in our study population. During the scheduling process, it became apparent that positive responses from companies on the invited list of 149 companies might not fill our itinerary for the planned 2-week data-gathering trip to China in October 2003. Consequently, we supplemented the list of 149 companies with the list of 48 companies that had completed interviews with us in China in 2002. Three of these 48 companies accepted the interview invitation and completed structured interview questionnaires (the resulting sample arithmetic is sketched below).
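The sample arithmetic described above can be made concrete with a brief Python sketch. Note that the raw completed-to-invited ratio is lower than the 60 percent response rate reported below; the reported rate presumably reflects adjustments such as companies found ineligible, and table 4 gives the actual disposition of the requests.

    # A sketch of how the analysis sample described above is assembled.
    invited_from_database = 149    # U.S.-incorporated companies in the study population
    completed_from_database = 79   # questionnaires received from that population
    added_from_2002_list = 3       # 2002 interviewees who also completed questionnaires

    analyzed = completed_from_database + added_from_2002_list
    print(f"questionnaires analyzed: {analyzed}")  # 82, matching the report

    # The report states an overall response rate of 60 percent; the unadjusted
    # ratio below is lower, so the reported rate presumably excludes companies
    # found ineligible during scheduling (see table 4).
    print(f"unadjusted ratio: {completed_from_database / invited_from_database:.0%}")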
These three companies are included in our data analysis of 82 questionnaire responses. See table 4 for an explanation of the results of requests for interviews with the 149 companies from the four selected industries. We conducted structured interviews with representatives of U.S. firms and agricultural organizations in the United States and in Beijing, Shanghai, and Hong Kong, China, in October 2003. Structured interviews provided an opportunity to discuss questionnaire responses in greater detail as well as to gain an understanding of the context of these responses. We discussed topics during the interviews that included the importance of China's WTO commitment areas, the extent to which China had implemented reforms in WTO commitment areas, and the impact of China's reforms on respondents' companies. We restricted our analysis to the subset of firms that responded to our questionnaire, and we did not make estimates about the larger population of all U.S. companies with a presence in China. From the study population of 149 U.S. companies with a presence in China, we received 79 questionnaires, for an overall response rate of 60 percent. Because the response rate was 60 percent and some key questions had a high frequency of "no basis to judge" responses, we did not calculate sampling errors, and we present questionnaire results in this report in unweighted form. The unweighted responses represent the responses received and are not projected to the population of U.S. companies with a presence in China or to the four selected industries. Respondents to our questionnaire from the study population represented four industries: agriculture, banking, machinery, and pharmaceuticals. The largest number of respondents represented machinery companies, followed by pharmaceuticals, banking, and agriculture. Figure 5 displays the number of respondents from each of the four industries. Questionnaire respondents reported that they carry out their business activities in facilities and offices across all of China. Shanghai, Beijing, and Guangzhou were, in order of frequency, the most common responses to the question of where companies had a facility or other presence among the Chinese locations listed in our questionnaire. In fact, only a few respondents (fewer than five) did not have a facility or other presence in Shanghai, Beijing, or Guangzhou. About one third of respondents reported having facilities or some other presence in all three of these locations, while about two thirds of respondents reported having a presence in locations beyond Shanghai, Beijing, and Guangzhou. Figure 6 shows the number of companies that reported having a facility or other presence in each location in China listed in our questionnaire. Respondents reported that they engage in a range of business relationships in their many locations throughout China. More than two fifths of respondents reported having one type of business relationship; about one third of the respondents had two types of relationships; and about one quarter of the respondents reported three or more types of business relationships there. Wholly owned foreign enterprises, joint ventures, and representative offices were, respectively, the most frequently reported types of business relationships. Figure 7 displays the number of respondents that reported each type of business relationship. Respondents also reported the approximate number of permanent, full-time employees their companies have in the United States and in China.
The number of employees in the United States varied from none (such as a company incorporated in the United States but with all employees in China) to 90,000. Most of the companies that completed our questionnaire, however, were large companies. Fewer than 15 percent of respondents reported that they had 500 or fewer employees in the United States. The number of employees that companies reported having in China ranged from zero to 8,000. Almost two thirds of respondents reported that they had 500 or fewer employees in China. All firms that responded to our questionnaire were assured that their responses would remain confidential. In spite of this, due to the sensitive and/or proprietary nature of the topics discussed, it is possible that the data presented in this report reflect the views of respondents only to the extent to which they felt comfortable sharing them with an agency of the U.S. Congress. In addition, respondents reported varied knowledge of China's WTO commitments and their application to their companies. Other potential sources of error associated with the questionnaire, such as question misinterpretation and question nonresponse, may be present. We included steps in the development of the questionnaire, the data collection, and the data analysis to reduce possible nonsampling errors. We developed this questionnaire based on the experience we gained administering the instrument for our 2002 survey of U.S. companies with a presence in China. In addition, we solicited feedback from internal and external parties on a draft of this year's questionnaire. We pretested the questionnaire with eligible representatives of U.S. companies with a presence in China to help ensure that our questions were interpreted correctly and that the respondents were willing to provide the information required. We addressed possible interviewer bias, including the fact that we conducted some interviews by telephone, by ensuring that all respondents had copies of the instrument in front of them when we conducted our interviews. We compared the results of our questionnaire to those of recent surveys of U.S. companies in China conducted by the U.S.-China Business Council and the American Chambers of Commerce in China and in Shanghai. While these surveys targeted different populations of U.S. companies in China and had low response rates, we noted that both had a few questions similar to ones we used and that both obtained results broadly similar to ours. We did our work in the Washington, D.C., area and in Beijing, Hong Kong, and Shanghai, China. We performed our work from October 2002 to January 2004 in accordance with generally accepted government auditing standards.

2003 Questionnaire of U.S. Companies

We'd like to start off with a few background questions that will help us learn about your company's business operations (organization's operations) in China in the ____________ industry.

Q1) First of all, could you please provide some background information about your company's business operations (organization's operations) in the ____________ industry in China?
N (Total company representatives and nonprofit agricultural organizations) = 93
N (Total company representatives without nonprofit agricultural organizations) = 82
N (Agricultural companies) = 5
N (Nonprofit agricultural organizations) = 11
N (Banking companies) = 10
N (Machinery companies) = 53
N (Pharmaceutical companies) = 12
N (Other) = 2

*Q1a) (for agricultural organizations only) Do all U.S. sales of your main commodity commonly go through your organization?

Q2) Currently, what forms of business investment and operations does your company have in China? N = 82
1. [ ] Agent/Distributor in China
2. [ ] Representative Office
3. [ ] Minority Equity Joint Venture
4. [ ] Majority Equity Joint Venture
5. [ ] Contractual Joint Venture
6. [ ] Foreign-invested Stock Companies
7. [ ] Wholly Owned Foreign Enterprise
8. [ ] Other (Please describe.)

Q3) Where in China does your company have facilities or any other presence? N = 81
t) Liaoning (except Shenyang)
i) Guizhou & Yunnan
ab) Western province (Any) (Shaanxi, Gansu, Qinghai, Ningxia, Xinjiang, & Tibet)

Q4) Approximately how many permanent, full-time employees does your company (organization) have in the United States and in China? N = 82
a) Approximate number of permanent full-time employees in the United States ________
b) Approximate number of permanent full-time employees in China ________

Now we'd like to ask you some questions about your company's (organization's) experiences in your industry since China joined the World Trade Organization in December 2001.

Q5a) Please look at the list of China's WTO reform commitment areas. Which are important to your company (organization) and why?

Tariff & nontariff trade restrictions (increased market access)
1. Tariffs, fees, & charges N=79 (Ag Org N=10)
2. Quotas and other quantitative import restrictions (licensing & tendering requirements) N=78 (Ag Org N=11)
3. Standards, certifications, registration, & testing requirements (product safety, animal, plant, & health standards, etc.) N=78 (Ag Org N=11)
4. Customs procedures & inspection practices N=78 (Ag Org N=11)
5. Export restrictions N=77 (Ag Org N=9)
6. Market access for services N=79 (Ag Org N=9)

Investment-related measures (liberalized foreign investment)
7. Government requirements stipulating minimum amount of production that must be exported N=78 (Ag Org N=10)
8. Foreign exchange restrictions (including balancing & repatriation of profits) N=79 (Ag Org N=10)
9. Technology transfer requirements N=77 (Ag Org N=10)
10. Local content requirements N=76 (Ag Org N=10)
11. Scope of business restrictions for goods (types you can provide, customers you can do business with, number of transactions you can conduct, & where you can conduct business geographically) N=76 (Ag Org N=11)
12. Scope of business restrictions for services (types you can provide, customers you can do business with, number of transactions you can conduct, & where you can conduct business geographically) N=75 (Ag Org N=10)
13. Restrictions on partnerships & joint ventures (choice of partner & equity limits) N=77 (Ag Org N=10)
14. Establishment & employment requirements (capital, deposit, years in practice, threshold sales, forced investment, & nationality/residency requirements) N=77 (Ag Org N=10)
15. Trading rights (ability to import & export) N=77 (Ag Org N=10)
16. Distribution rights (retail, wholesale and courier) N=78 (Ag Org N=10)
17. Subsidies (for Chinese firms or for export) N=78 (Ag Org N=11)
18. Operation of state-owned enterprises N=76 (Ag Org N=10)
19. Price controls including dual and discriminatory pricing N=76 (Ag Org N=10)
20. Equal treatment (in taxation, access to funding, and under Chinese law) N=76 (Ag Org N=10)
21. Consistent application of laws, regulations, & practices (within & among national, provincial & local levels) N=78 (Ag Org N=10)
22. Transparency of laws, regulations, & practices (publishing and making publicly available) N=78 (Ag Org N=11)
23. Enforcement of contracts & judgments/Settlement of disputes in Chinese court system N=77 (Ag Org N=11)
24. Independence of judicial bodies N=77 (Ag Org N=10)
25. Intellectual Property Rights N=76 (Ag Org N=10)
26. China's application of safeguards against U.S. exports (antidumping and other legal actions against import surges) N=73 (Ag Org N=10)

Q5b) Now that you've thought about the commitments, could you please tell us which three are most important to your company (organization), in order of importance? (Please review the list of commitments when answering this question.)
a) Which is the most important? ________
b) Which is the second most important? ________
c) Which is the third most important? ________

Q6) Overall, based on your company's (organization's) experience, to what extent - if any - has China actually made reforms in the commitment areas that are important to your company (organization)? Have they done so to a... N=77 (Ag Org N=11)
1. [ (3) ] Great extent
2. [ (2) ] Moderate extent
3. [ (4) ] Some or little extent
4. [ (1) ] No extent
5. [ ] Don't know/No basis to judge
Follow-up: Please explain your response.

Q7) Please look at the list of WTO-related reform commitments again. Based on your company's (organization's) experience in your industry, to what extent has China actually made reforms in these commitment areas since joining the WTO? If you are not familiar with any of the reform commitments, please indicate that you have no basis to judge. (Please respond according to the extent scale.) Columns: (i) (ii) (iii) (iv) (v)

Tariff & nontariff trade restrictions (increased market access)
1. Tariffs, fees, & charges N=79 (Ag Org N=11): (1) (0) (1) (4) (5)
2. Quotas and other quantitative import restrictions (licensing & tendering requirements) N=80 (Ag Org N=11): (5) (2) (2) (2) (0)
3. Standards, certifications, registration, & testing requirements (product safety, animal, plant, & health standards, etc.) N=79 (Ag Org N=10): (0) (2) (4) (2) (2)
4. Customs procedures & inspection practices N=80 (Ag Org N=11): (1) (1) (5) (3) (1)
5. Export restrictions N=80 (Ag Org N=11): (9) (0) (0) (0) (2)
6. Market access for services N=80 (Ag Org N=11): (11) (0) (0) (0) (0)

Investment-related measures (liberalized foreign investment)
7. Government requirements stipulating minimum amount of production that must be exported N=80 (Ag Org N=10): (9) (1) (0) (0) (0)
8. Foreign exchange restrictions (including balancing & repatriation of profits) N=79 (Ag Org N=10): (9) (0) (0) (1) (0)
9. Technology transfer requirements N=78 (Ag Org N=10): (10) (0) (0) (0) (0)
10. Local content requirements N=77 (Ag Org N=10): (10) (0) (0) (0) (0)
11. Scope of business restrictions for goods (types you can provide, customers you can do business with, number of transactions you can conduct, & where you can conduct business geographically) N=78 (Ag Org N=10): (9) (0) (1) (0) (0)
12. Scope of business restrictions for services (types you can provide, customers you can do business with, number of transactions you can conduct, & where you can conduct business geographically) N=77 (Ag Org N=10): (10) (0) (0) (0) (0)
13. Restrictions on partnerships & joint ventures (choice of partner & equity limits) N=77 (Ag Org N=10): (9) (0) (1) (0) (0)
14. Establishment & employment requirements (capital, deposit, years in practice, threshold sales, forced investment, & nationality/residency requirements) N=78 (Ag Org N=10): (9) (1) (0) (0) (0)
15. Trading rights (ability to import & export) N=79 (Ag Org N=11): (2) (2) (4) (1) (2)
16. Distribution rights (retail, wholesale and courier) N=79 (Ag Org N=11): (5) (1) (1) (3) (1)
17. Subsidies (for Chinese firms or for export) N=79 (Ag Org N=11): (5) (3) (3) (0) (0)
18. Operation of state-owned enterprises N=77 (Ag Org N=11): (6) (2) (1) (2) (0)
19. Price controls including dual and discriminatory pricing N=78 (Ag Org N=11): (9) (2) (0) (0) (0)
20. Equal treatment (in taxation, access to funding, and under Chinese law) N=79 (Ag Org N=11): (5) (3) (1) (1) (1)
21. Consistent application of laws, regulations, & practices (within & among national, provincial & local levels) N=79 (Ag Org N=11): (1) (2) (5) (3) (0)
22. Transparency of laws, regulations, & practices (publishing and making publicly available) N=79 (Ag Org N=11): (1) (2) (5) (2) (1)
23. Enforcement of contracts & judgments/Settlement of disputes in Chinese court system N=78 (Ag Org N=10): (5) (2) (1) (2) (0)
24. Independence of judicial bodies N=79 (Ag Org N=10): (6) (2) (2) (0) (0)
25. Intellectual Property Rights N=80 (Ag Org N=10): (7) (2) (1) (0) (0)
26. China's application of safeguards against U.S. exports (antidumping and other legal actions against import surges) N=79 (Ag Org N=9): (8) (1) (0) (0) (0)

Q8) Overall, what impact has China's implementation of its WTO commitments had on your company's (organization's members') ability to do business in China? N=80 (Ag Org N=11) (Summarize the answers with the following prompt: Overall, would you say that the impact has been...)
1. [ (4) ] Very positive
2. [ (4) ] Generally positive
3. [ (1) ] Little or no impact
4. [ (2) ] Generally negative
5. [ ] Very negative
6. [ ] Don't know/No basis to judge
Follow-up: Does your company (organization) have any expectations about the likely impact in two years' time?

Q9a) Is the United States Government doing anything on your behalf to ensure that China's WTO commitments are implemented? N=55 (Ag Org N=11)
Follow-up: Is the USG doing anything on your behalf about IPR commitments? N=57 (Ag Org N=9)

Q9b) How satisfied or dissatisfied are you with the United States Government's efforts to ensure that China's WTO commitments are implemented? N=80 (Ag Org N=11)
1. [ (4) ] Very satisfied
2. [ (6) ] Generally satisfied
3. [ (1) ] As satisfied as dissatisfied
4. [ ] Generally dissatisfied
5. [ ] Very dissatisfied
6. [ ] Don't know/No basis to judge
Follow-up (If applicable): How satisfied are you with the U.S. Government's efforts to ensure that IPR commitments are implemented?

Q10) Has your company (Have your organization's members) contacted any professional associations or government agencies in China or the United States about any WTO issues?
1. [ ] Yes
2. [ ] No GO TO QUESTION 11.
3. [ ] Not sure GO TO QUESTION 11.
Follow-up if the answer is "Yes": Which ones? (Check off the boxes that correspond to the organizations.)
a. China's Ministry of Commerce
b. Other Chinese government agencies or officials (Please specify)
e. U.S. trade associations representing your company's interests
h. U.S. Department of Agriculture
i. U.S. Department of Commerce
j. U.S. Department of State
l. Other (Please specify)
Follow-up: Whom did you contact on which issues? What happened after you contacted these organizations and/or agencies?

Q11) Some reform commitments have to be made by different levels of government, such as the central government or the provincial or city governments. Has your company (organization) experienced any differences in how reforms have been implemented within and among national, provincial, and local levels of government?

Q12) Please tell me whether your company's (organization members') activities in each of the following areas have increased, stayed about the same, or decreased since China joined the WTO in December 2001. (Please respond using the scale provided.) Columns: (i) (ii) (iii) (iv)
1. Number of facilities in China N=82 (Ag Org N=9): (2) (1) (0) (6)
2. Value of total investments in China N=82 (Ag Org N=9): (0) (1) (0) (8)
3. Number of employees in China N=82 (Ag Org N=10): (4) (1) (0) (5)
4. Number of employees in the U.S. N=80 (Ag Org N=10): (0) (3) (1) (6)
5. Scope of product distribution in China N=81 (Ag Org N=10): (8) (0) (0) (2)
6. Number of products distributed in China N=82 (Ag Org N=11): (7) (1) (0) (3)
7. Number of services provided in China N=82 (Ag Org N=11): (0) (0) (0) (11)
8. Number of ventures with Chinese partners N=82 (Ag Org N=11): (0) (1) (0) (10)
9. Value of ventures with Chinese partners N=82 (Ag Org N=10): (0) (1) (0) (9)
10. Value of exports to China N=82 (Ag Org N=10): (8) (2) (0) (0)
11. Value of exports from China N=82 (Ag Org N=11): (2) (0) (0) (9)
12. Volume of production in China N=82 (Ag Org N=11): (0) (1) (0) (10)
13. Company revenue stream N=82 (Ag Org N=10): (4) (2) (0) (4)

Q13) Is there anything else you would like to tell us regarding China's joining the WTO and its implementation of its WTO commitments?

Thank you for your participation and help!

U.S. investment and trade with China have grown significantly over the past decade. As a result of significant new investments from Hong Kong, the United States, Japan, and Taiwan, China surpassed the United States as the world's largest recipient of foreign direct investment flows in 2002. However, China still represents a very small share of the total stock of U.S. investments worldwide. In terms of trade in goods and services, China is the United States' fourth largest trading partner, after Canada, Mexico, and Japan. Both U.S. exports to China and imports from China have risen rapidly over the past decade. However, the United States imports significantly more from China than it exports to China, resulting in a U.S. bilateral trade (goods and services) deficit with China of $102 billion in 2002, according to U.S. trade statistics. As of the end of 2002, U.S. companies had a total stock of direct investments in China of $10.3 billion, more than 10 times the approximately $900 million invested a decade earlier, in 1993. Yet compared with the $1.5 trillion of accumulated U.S. direct investments worldwide, China accounts for less than 1 percent of total U.S. investment. In terms of new investment inflows, though, China receives significant investments from several countries in addition to the United States. In 2002, according to Chinese statistics, nearly $53 billion in foreign direct investment flowed into China, making it the world's top investment destination (rather than the United States) for the first time. Hong Kong was by far the largest supplier of foreign direct investment to China, with about 34 percent of the total, followed by the United States (over 10 percent), Japan (8 percent), and Taiwan (about 8 percent). U.S. direct investment in China has largely focused on manufacturing sectors, particularly computer and electronic products and chemicals. Mining has also been a significant area of U.S. investment. Figure 8 shows the distribution of the stock of U.S.
direct investment in China as of 2002. The pattern of U.S. investment in China, however, differs from the worldwide pattern of U.S. investment. Figure 9 shows that manufacturing accounts for about 26 percent of U.S. investments worldwide, compared with about 60 percent of investments in China (fig. 8). Similarly, about one-third (33 percent) of the stock of U.S. investments worldwide is in other industries (e.g., agriculture, construction, retail trade, and transportation and warehousing), compared with about 10 percent of U.S. investments in China. Finance and depository institutions (except insurance) is the third largest area of U.S. global investments, accounting for about 20 percent in 2002, while in China only about 3 percent of U.S. investments are in this area. This difference between the pattern of U.S. investment in China and the global pattern is not surprising, because China is a developing country, has an abundant supply of relatively low-cost labor, and is a growing producer of manufactured goods worldwide. In contrast, the European Union, Canada, and Japan, which accounted for well over half of the stock of U.S. direct investment abroad, are developed countries with economies similar to that of the United States. U.S. trade in goods and services with China has also grown significantly during the past decade. From 1993 to 2002, U.S. exports to China grew at an average annual rate of 12 percent, compared with 5 percent for U.S. exports worldwide during the same time period. Similarly, U.S. imports from China grew at an average annual rate of 17 percent, while overall U.S. imports grew at 8 percent annually. Consequently, China was the United States’ fourth largest trading partner in 2002 (after Canada, Mexico, and Japan), with about $27 billion in U.S. exports to China (goods and services) and about $129 billion in U.S. imports from China (goods and services). In 2003, goods trade data show that this rapid growth continued, with U.S. goods exports to China at about $27 billion, an increase of 30 percent from 2002. Similarly, U.S. imports from China in 2003 were about $152 billion, an increase of 21 percent from 2002. Services trade data for 2003 were not available as of the date of this report. As a result of the difference between U.S. exports and imports, the United States has had a growing bilateral trade deficit with China. In 2002, the U.S. bilateral trade deficit with China reached $102 billion, the largest with any country. The deficit stemmed from a $104 billion gap in goods trade, partially offset by about a $2 billion U.S. surplus in services trade with China in 2002. In 2003, the U.S. bilateral goods trade deficit with China expanded further, to about $125 billion, an increase of about $21 billion. The increase was primarily due to a larger trade deficit in computers, electrical equipment, and appliances, as well as smaller increases in textiles, apparel, and leather; metal and machinery (except electrical) products; and miscellaneous manufactured components, including furniture. However, the U.S. bilateral trade surplus with China in agriculture, food and tobacco products, and minerals rose from about $260 million in 2002 to about $2.8 billion in 2003. Figure 10 shows U.S. exports and imports of goods by broad industry category in 2003. In terms of U.S. goods exports, China ranks sixth after Canada, Mexico, Japan, the United Kingdom, and Germany. Top U.S.
goods exports to China in 2003 included computers, electrical equipment, and appliances (23 percent); agriculture, food and tobacco products, and minerals (20 percent); metal and machinery (except electrical) products (16 percent); petroleum and chemical products (16 percent); and transportation equipment (12 percent). U.S. goods exports to China increased by 30 percent from 2002 to 2003, compared to an increase of 3 percent for U.S. goods exports worldwide during the same period. Figure 11 shows the relative share of broad industry groups in U.S. exports to China in 2003. In terms of U.S. imports of goods, China is the second largest foreign supplier to the U.S. market, after Canada and ahead of Mexico. Top U.S. goods imports from China in 2003 also included computers, electrical equipment, and appliances (36 percent). As previously noted, U.S. direct investment in China was relatively large in the computer and electronic products area. A share of trade in this area between the United States and China is likely to be intracompany trade, in which components are produced in one country and exported to the other. The components are then used to produce final goods that are ultimately sold in each market as well as in other countries. Other important imports included miscellaneous manufactured components, including furniture (21 percent), and textiles, apparel, and leather (19 percent). Figure 12 shows the relative share of broad industry groups in U.S. imports from China in 2003. Services trade with China is less significant to the United States than services trade with other partners. Unlike in goods trade, China is not among the top 10 partners for U.S. services imports or exports. In 2002, the United States exported about $6 billion worth of services to China, compared with about $280 billion in services exports worldwide. Other private services (such as education, insurance, telecommunications, and business, professional, and technical services) generated $2.6 billion in sales, followed by other transportation ($1.4 billion), such as freight charges for transporting goods by ocean, air, or land, and port charges. The United States imported about $4 billion in services from China in 2002, compared with about $205 billion worldwide. Top U.S. imports of services from China included other transportation ($2.3 billion) and travel services ($1.1 billion). Figure 13 shows the value of U.S. services exports and imports with China in 2002 by category. In order to provide background on U.S. investment and trade with China, we collected and analyzed the most recently available direct investment abroad and cross-border private services trade data from the Bureau of Economic Analysis (BEA), as well as goods trade data from the Bureau of the Census. We collected the most recently available Census trade data through the U.S. International Trade Commission’s Dataweb. We also collected information on worldwide investment in China from reports by the Organization for Economic Cooperation and Development and the Congressional Research Service (based on Chinese government data sources). Because these data are used solely in this appendix as background information, we did not assess their reliability. For more information on BEA’s methodology for collecting U.S. direct investment abroad data, see “U.S. Direct Investment Abroad” in the September 2003 issue of the Survey of Current Business and “U.S.
Direct Investment Abroad: 1994 Benchmark Survey, Final Results” located at BEA’s Web site at www.bea.gov. For more information on BEA’s methodology for collecting U.S. international services data, see “U.S. International Services” in the October 2003 issue of the Survey of Current Business, also available at BEA’s Web site. The industry categories that BEA and Census use do not correspond to the industry classifications used in our questionnaire. Because of this, and because the firms that responded to our questionnaire are not representative of all companies in China, nor of all U.S. companies in China from these industries, these responses are not representative of the industry groups used in this profile of U.S. investment and trade with China. In addition, the industry categories used by BEA and Census have changed since our 2002 report. Therefore, the figures in this report and our prior GAO report are not comparable. In order to present broader industry groups, we combined some Census data categories. These groupings are presented in table 5. Census goods trade categories are based on the North American Industry Classification System (NAICS). We collected these data at the three-digit level of aggregation and combined product categories into broader groups; the NAICS codes in each group are listed in table 5. For services trade data from BEA, we separated the category “financial services” from the broader category of “other services.”

In addition to those named above, Martin de Alteriis, Shakira Edwards, Victoria Lin, Rona Mendelsohn, Beverly Ross, and Timothy Wedding made key contributions to this report.

World Trade Organization: Ensuring China’s Compliance Requires a Sustained and Multifaceted Approach. Washington, D.C.: GAO-04-172T, October 30, 2003.
GAO’s Electronic Database of China’s World Trade Organization Commitments. Washington, D.C.: GAO-03-797R, June 13, 2003.
World Trade Organization: First-Year U.S. Efforts to Monitor China’s Compliance. Washington, D.C.: GAO-03-461, March 31, 2003.
World Trade Organization: Analysis of China’s Commitments to Other Members. Washington, D.C.: GAO-03-4, October 3, 2002.
World Trade Organization: Selected U.S. Company Views about China’s Membership. Washington, D.C.: GAO-02-1056, September 23, 2002.
World Trade Organization: Observations on China’s Rule of Law Reforms. Washington, D.C.: GAO-02-812T, June 6, 2002.
The United States is the second largest source of foreign direct investment in China, and U.S. companies maintain a keen interest in China's implementation of its World Trade Organization (WTO) commitments. China's 2001 WTO commitments include specific pledges to increase market access, liberalize foreign investment, continue fundamental market reforms, and improve the rule of law. In 2002, GAO reported on selected U.S. companies' views, finding that many commitment areas, particularly those related to rule of law, were important to U.S. companies. GAO also found that company representatives expected China's reforms to have a positive impact on their business operations but anticipated some difficulties during implementation. In 2003, GAO continued to analyze companies' views about (1) the extent to which China has implemented its WTO commitments and (2) the impact of China's implementation of its WTO commitments on U.S. companies' business operations. GAO collected the views of representatives from 82 U.S. companies with a presence in China, focusing on companies in the agriculture, banking, machinery, and pharmaceutical industries. Results reflect a response rate of 60 percent of the study population; these responses may not reflect the views of all U.S. companies with activities in China. U.S. company representatives who completed GAO's 2003 questionnaire thought that China had implemented most of the 26 listed WTO commitment areas on average only to some or little extent. When respondents assessed five areas found to be of greatest importance to their companies overall--(1) standards, certifications, registration, and testing requirements; (2) customs procedures and inspection practices; (3) intellectual property rights; (4) tariffs, fees, and charges; and (5) consistent application of laws, regulations, and practices--responses were mixed, but they reported that China had taken at least some steps to implement these commitment areas. GAO's analysis showed that the importance placed on specific areas differed among the agriculture, banking, machinery, and pharmaceutical industries. For example, agricultural respondents identified tariffs as important, while banking respondents identified scope-of-business restrictions for services as important. Few respondents were able to assess all of China's commitment areas, for reasons that varied depending on each company's experience and operations in China. More than two-thirds of respondents reported that China's implementation of its WTO commitments had a positive impact on their companies' ability to do business in China. However, some respondents indicated that China's reform efforts had created difficulties for their company operations in China. Overall, company representatives reported that company activities, such as volume of production in China and company revenue stream, have increased since China joined the WTO. However, respondents noted that changes in business activities cannot be directly attributed to China's WTO accession.
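As a cross-check on the bilateral figures in the investment and trade profile above, the following is a minimal Python sketch of the balance arithmetic; all values are the approximate, rounded billions cited in the text, so small discrepancies reflect rounding.

# Cross-checking the rounded bilateral trade figures cited in the profile
# above (values in billions of U.S. dollars; all are approximations).

def deficit(imports: float, exports: float) -> float:
    """Bilateral deficit is imports less exports."""
    return imports - exports

# 2002 totals (goods and services): ~$129B imports vs. ~$27B exports.
print(deficit(129, 27))    # ~102 -> the reported $102B total deficit

# The same deficit decomposed: ~$104B goods deficit less ~$2B services surplus.
print(104 - 2)             # ~102

# 2003 goods trade: ~$152B imports vs. ~$27B exports.
print(deficit(152, 27))    # ~125 -> the reported ~$125B goods deficit
print(125 - 104)           # ~21  -> the reported ~$21B increase

# Implied 2002 goods imports from the reported 21 percent growth:
print(152 / 1.21)          # ~125.6, consistent with $129B total less ~$4B services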
DOD defines “readiness” as the ability of the U.S. military forces to fight and meet the demands of the National Military Strategy. DOD uses a variety of automated systems, review processes, and reports to collect and disseminate information about the readiness of its forces to execute their tasks and missions. Two of the primary means of communicating readiness information are the Quarterly Readiness Report to Congress—which is a classified product prepared by the Office of the Under Secretary of Defense for Personnel and Readiness with input from the services, combatant commands, and Joint Staff and details military readiness on a quarterly basis—and the Joint Force Readiness Review, which is a classified product prepared by the Chairman of the Joint Chiefs of Staff and assesses the armed forces’ capability to execute their wartime missions under the National Military Strategy on a quarterly basis. The Joint Staff assesses the department’s overall ability to resource and execute the missions called for in the National Military Strategy. The overall assessments, which are classified, are based on joint and force readiness. Joint readiness focuses on the ability of the combatant commands to provide, integrate, and synchronize forces assigned to missions, while force readiness focuses on the ability of the force providers to provide forces and support capabilities. The military services organize their forces into units for training and equipping purposes. Joint guidelines require that commanders assess their units’ abilities to perform their core competencies, or their ability to undertake the wartime or primary missions for which they are organized or designed. These classified assessments are based on four distinct resource indicators—personnel, equipment availability, equipment readiness, and how well the unit is trained to conduct its missions. Joint guidelines also require joint and service unit commands to evaluate, in near real-time, the readiness of forces to accomplish assigned and potential tasks through the Defense Readiness Reporting System (DRRS). The system provides the means to monitor the readiness of DOD components to provide capabilities to support the National Military Strategy consistent with DOD priorities and planning direction. Through DRRS, commanders, military service chiefs, and agency directors assess the ability of their organizations to accomplish a task to standard, based on their capabilities, under conditions specified in their joint mission-essential task list or agency mission-essential task list. In 2005, faced with a situation where its process for providing forces was not responsive enough to meet operational needs, and where the department was not able to provide funding to maintain the readiness of all its forces to perform their full range of assigned missions, DOD established a centralized Global Force Management process. According to the department, establishment of the process enabled the Secretary of Defense to make proactive, risk-informed decisions on how to employ the force. The goal of Global Force Management is to allow officials to identify the global availability of forces and/or capabilities needed to support plans and operations. The department relies on Global Force Management to distribute the operational forces that belong to the military services among competing combatant commander requirements.
Each combatant command documents its need for forces and/or capabilities, and then DOD uses the Global Force Management process in the following ways to meet identified needs. A portion of DOD’s operational forces are assigned to the combatant commands and positioned in the geographic combatant commander’s theater of operations to provide shorter response times. Combatant commanders have authority over forces assigned to them until the Secretary of Defense reassigns the forces. The combatant commanders receive additional forces to supplement their assigned forces through the allocation process. These forces are temporarily transferred to a combatant command to meet operational demands for both steady state rotational requirements that are planned in advance and emergent needs that arise after the initial allocation plan has been approved. They supplement a combatant commander’s assigned forces in order to mitigate near-term risk. The Global Force Management process also includes a process to apportion forces. Apportioned forces provide an estimate of the services’ capacity to generate capabilities along general timelines for combatant commander planning purposes. These are the forces that a combatant commander can reasonably expect to be made available, but not necessarily an identification of the actual forces that will be allocated for use when a contingency plan or crisis response plan transitions to execution. After more than a decade of conflict, recent budget uncertainty, and decreases in force structure, U.S. forces are facing significant challenges in rebuilding readiness. DOD officials noted that it will take a significant amount of time to realize improvements in readiness as the department works to address identified challenges. In addition, the individual military services, which train and equip forces used by the combatant commands, report persistently low readiness levels. The services attribute the low readiness levels to various factors. Specifically, the Army attributes its persistently low readiness level to emerging demands, lack of proficiency in core competencies, and end strength reductions. Even as the Army has brought forces back from Afghanistan, it faces increasing emergent demands that strain existing capacity, such as the deployment of the 101st Airborne Division to Africa to respond to the Ebola crisis. In addition, other factors contribute to readiness challenges, including leaders’ and units’ lack of familiarity with conducting collective training on core competencies, because the Army focused on counterinsurgency for many years. Finally, the Army is downsizing to an end strength of 980,000—about a 12 percent reduction in size. Army leadership testified in March 2015 that any end strength reductions below this level would reduce the Army’s capability to support missions identified in defense guidance. The Navy attributes its persistently low readiness level to increased lengths of deployments for aircraft carriers, cruisers, destroyers, and amphibious ships, which have created significant maintenance challenges. The Navy currently has 272 ships, down from 333 ships in 1998—an 18 percent decrease. Even as the number of Navy ships has decreased, the number of ships deployed overseas has remained roughly constant at about 100 ships. Consequently, each ship is being deployed more to maintain the same level of presence.
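To make the presence arithmetic above concrete, and to anticipate the deployment-to-dwell ratios discussed below, here is a minimal Python sketch; the ship counts are the rounded figures cited above, while the dwell example uses hypothetical month values rather than Navy data.

# Illustrative arithmetic only: ship counts are the rounded figures from the
# text; the deployment-to-dwell example uses hypothetical month values.

def share_deployed(deployed: int, fleet: int) -> float:
    """Fraction of the fleet deployed at a given time."""
    return deployed / fleet

print(f"{share_deployed(100, 333):.0%} of 333 ships deployed in 1998")  # ~30%
print(f"{share_deployed(100, 272):.0%} of 272 ships deployed today")    # ~37%

def deployment_to_dwell(months_deployed: float, months_home: float) -> float:
    """Express deployment-to-dwell as 1:x (months at home per month deployed)."""
    return months_home / months_deployed

# Hypothetical unit: 9 months deployed, then 18 months at home -> 1:2.0,
# which would meet a 1:2 goal; 9 deployed and 9 home -> 1:1.0, which would not.
print(f"1:{deployment_to_dwell(9, 18):.1f}")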
In addition, the Navy has had to shorten, eliminate, or defer training and maintenance periods to support high deployment rates. The Air Force attributes its decline in readiness to continued demands and a reduced force structure. For example, in 1991 the Air Force had 154 fighter and bomber squadrons, and as of December 2015 the Air Force had 64 fighter and bomber squadrons—a 58 percent decrease from 1991 levels. Further, its readiness level has declined because of persistent demand for forces, a decline in equipment availability and in experienced maintenance personnel, and the impact of high deployment rates on units’ ability to conduct needed training. The Marine Corps attributes its low readiness levels to an increased frequency of deployments to support sustained high demand for the force; gaps in the number of unit leaders with the right grade, experience, and technical and leadership qualifications; and training shortfalls resulting from over a decade of war, including a lack of available aircraft for training to standards. While the services have reported readiness shortfalls across the force, there have been some readiness gains in select areas, such as Army Brigade Combat Teams and Marine Corps Infantry Battalions. For example, beginning in fiscal year 2014, reported readiness levels for Army Brigade Combat Teams generally improved, but plateaued in fiscal year 2015. In addition, readiness levels for infantry battalions have improved over the past 5 years as infantry units resumed training to core mission-essential tasks after the end of Operations Enduring Freedom and Iraqi Freedom. Though DOD officials indicated that overall demand has been decreasing since 2013—primarily because of the drawdown of forces in Iraq and Afghanistan—DOD has reported that the ability of the military force to rebuild capacity and capability is hindered by continued, and in some cases increased, demand for some types of forces. Additionally, DOD is responding to these global demands with a reduced force structure, which further impacts reported readiness. For example, from fiscal year 2013 through fiscal year 2016, active component end strength decreased by about 7 percent and reserve component end strength decreased by about 4 percent across the force. Combatant command demand has consistently exceeded what the services are able to supply. DOD has spent most of the last decade responding to near-term combatant command demands, primarily in Iraq and Afghanistan. Combatant command officials we spoke with acknowledged that even though demand in support of U.S. Central Command operations in Iraq and Afghanistan had been decreasing, overall demand remains high and is likely to remain high in order to support global needs. For example, U.S. European Command officials noted that the command’s assigned forces are now staying in Europe and being used to meet the growing needs of the command, such as the response to Russian aggression, which officials noted has been the most significant driver of changes to the command’s needs since February 2014. Moreover, U.S. Pacific Command officials noted that their operational requirements have steadily increased to ensure adequate capability exists to address the increasingly unpredictable and provocative actions of North Korea and China. Global demands for select force elements, such as the Air Force’s personnel recovery units, the Army’s division headquarters, and the Navy’s carrier strike groups, have been persistently high.
These high-demand force elements already face challenges in meeting service-established deployment-to-dwell ratios. For example: Units within the Air Force’s personnel recovery service core function have experienced challenges maintaining a deployment-to-dwell ratio within the Air Force’s and the Office of the Secretary of Defense’s stated goals of 1:2 for active component units and 1:5 for reserve component units. Specifically, the HC-130 fixed wing aircraft had a deployment-to-dwell ratio of approximately 1:1 for the active duty component and 1:4 for the reserve component as of January 2016. The Army has experienced challenges in meeting the demand for division headquarters during fiscal years 2010 through 2015 and reports that it will continue to experience readiness challenges at the active component division headquarters level for the next few years. As of August 2015, division headquarters had a deployment-to-dwell ratio of less than 1:1, which requires Secretary of Defense approval and exceeds both the Army’s and the Office of the Secretary of Defense’s goal of 1:2. Because of increased demand over the past several years, many Navy ships have been deployed for 9 to 10 months or longer, compared with the 7 months the Navy reports as a sustainable deployment length. Moreover, combatant commander demand for carrier strike groups has grown, and the Navy is unable to meet current demand. Some portions of the force have experienced reduced demand and improved readiness. Our analysis shows that some of the decline in overall force demand can be attributed to the decline in demand for Army Brigade Combat Teams, which have experienced improved readiness. For example, as we found in May 2016, Brigade Combat Team demand decreased by more than two-thirds since fiscal year 2011 and was mostly met from fiscal year 2010 through fiscal year 2015. In addition, beginning in fiscal year 2014, reported readiness trends for Brigade Combat Teams generally improved, but plateaued in fiscal year 2015. DOD has undertaken efforts to better manage the demands placed on the force. Specifically, in 2014 the Joint Staff introduced plans to reform the Global Force Management process in an effort to address declines in readiness and capacity across the force. However, at the time of our report, the department was still working to complete implementation of Global Force Management reform initiatives, and thus it is too soon to tell what impact these initiatives will have on DOD’s readiness recovery efforts. The department focused its Global Force Management reform on an effort to transition to a resource-informed process, instead of a process driven primarily by combatant command demand. The intent is to better balance the distribution of forces for high-priority missions with the need to rebuild the readiness of the force. Through Global Force Management reform, the department expects to be better positioned to reduce the burden on the force and allow the services time to rebuild readiness. Global Force Management reform efforts include the following changes.

Revising combatant command plans: DOD officials noted that in 2015 the department began efforts to revise several major plans in an attempt to better reflect what the current and planned force is expected to achieve. This effort to revise major plans, which the combatant commands were undergoing at the time of our review, has already resulted in some changes.
Implementing the “ceiling and floor” concept: This effort is intended to balance the availability of forces against combatant commander requirements. The “ceiling” is the maximum number of forces a force provider can generate under current funding levels while still achieving readiness recovery goals, and the “floor” is the minimum force level needed in each combatant commander’s theater of operations for initial response needs. Forces included in the floor would be considered for reallocation only if a major operational plan were being executed in another geographic area of responsibility (a simplified sketch of the checks this concept implies follows this list). According to U.S. European Command officials, in an effort to rebuild service readiness, the services are not allowed to deploy forces above the identified ceiling without Secretary of Defense approval, which has made sourcing combatant command requirements more difficult. DOD has reported that the results of implementing the “ceiling” and “floor” concept would not be fully realized until fiscal year 2017.

Realigning the force assignment and allocation processes: This effort, which the department implemented in late 2014, is intended to realign the Global Force Management assignment and allocation processes so that forces are assigned to the combatant commands before additional forces are allocated in support of demands throughout the year. DOD uses the assignment of forces to provide the combatant commanders a base set of forces in support of both enduring and emergent requirements, thereby potentially mitigating risk. Realigning these Global Force Management processes should allow commanders to better understand the assigned forces they will have access to before requesting additional forces through the allocation process, and should mitigate risks inherent in declining force size and readiness challenges.

Updating apportionment tables: DOD produces force-apportionment tables to (1) help leaders assess plans based on projected force inventory and availability, (2) inform risk estimates, and (3) inform mitigations. The overarching goal of the force-apportionment tables is to provide improved assumptions to assess risk and produce better, executable plans. DOD previously required that the tables be produced annually, but through Global Force Management reform, and beginning in late 2014, the department began requiring quarterly updates to the tables. More frequent updates should provide the combatant commanders with a better representation of the forces available during planning. According to U.S. Southern Command officials, while updating the apportionment tables on a quarterly basis does not provide a sense of unit readiness, it is a helpful tool for planning purposes.

Establishing a Readiness and Availability Priorities framework: The Readiness and Availability Priorities framework is intended to inform risk decisions and Global Force Management policy recommendations. Through the framework, the Joint Staff, in coordination with the services and combatant commands, assesses the department’s ability to meet prioritized mission requirements and evaluates the associated risk based on force employment decisions that have already been approved.

We found that the full impact of DOD’s Global Force Management reform is not known because some elements are in the early stages of implementation.
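The following is a minimal Python sketch of the checks the “ceiling and floor” concept implies; the function names and numbers are hypothetical illustrations, not DOD systems or data.

# Hypothetical illustration of the "ceiling and floor" concept: deployments
# above a service's ceiling require Secretary of Defense approval, and only
# forces above a theater's floor are candidates for reallocation.

def requires_secdef_approval(deployed: int, requested: int, ceiling: int) -> bool:
    """True when a request would push deployments above the ceiling."""
    return deployed + requested > ceiling

def reallocatable(in_theater: int, floor: int) -> int:
    """Forces above the theater floor that could be considered for reallocation."""
    return max(0, in_theater - floor)

# Illustrative numbers only.
print(requires_secdef_approval(deployed=45, requested=10, ceiling=50))  # True
print(reallocatable(in_theater=12, floor=8))                            # 4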
In the time since portions of the reform have been put in place, officials have cited limited progress in better managing the services’ ability to meet combatant command demand. For example, our analysis of Global Force Management data showed that between fiscal years 2013 and 2015, the number of combatant command requirements that the Secretary of Defense considered for sourcing decreased by about one-third. However, in fiscal year 2015, the department was still sourcing most combatant commander-identified requirements rather than making decisions that would have benefited the services’ readiness recovery efforts. Specifically, the Secretary of Defense provided full or partial sourcing to more than 90 percent of combatant command requirements. DOD has stated that readiness rebuilding is a priority, but implementation and oversight of department-wide readiness rebuilding efforts have not fully included key elements of sound planning, which could place readiness recovery efforts at risk given the continued high pace of operations and many competing priorities. Leading practices we identified in our prior work show that sound strategic planning can enhance an agency’s efforts to identify and achieve long-range goals and objectives, and that such planning entails consideration of key elements. Key elements include (1) a mission statement; (2) long-term goals; (3) strategies to achieve goals; (4) external factors that could affect goals; (5) metrics to gauge progress; and (6) evaluations of the plan to monitor goals and objectives. As summarized in table 1, however, our analysis of readiness recovery plans shows that DOD and the services have only partially incorporated these key elements of sound planning into their readiness rebuilding efforts. DOD strategic guidance makes it clear that readiness rebuilding is a priority that supports the department’s mission of deterring war, and each service has promulgated guidance highlighting readiness as a mission priority. Sound planning requires a mission statement that concisely summarizes what the organization does, presenting the main purposes for all its major functions and operations. In its strategic guidance, DOD states that its overarching mission is to provide military forces needed to deter war and to protect the security of the United States. Further, it has emphasized that rebuilding the readiness of the force supports its ability to accomplish these missions and to continue to meet the demands outlined in the National Military Strategy. Consequently, DOD’s emphasis on rebuilding readiness is outlined in key strategic guidance. For example, the Guidance for the Employment of the Force states that it is an overarching priority to recover readiness in each service while minimizing deployments. In addition, the Defense Planning Guidance states that the components are to continue their efforts to return to desired readiness levels by the end of the Future Years Defense Program. Alongside its emphasis on recovering readiness, however, DOD has stated that finding the proper balance between recovering readiness, force structure sizing, modernization, and future threats is an important component of the mission and the highest priority of its leadership. Thus, each of these priorities must be considered within the context of the risk each places on both the force and the mission. While each service has promulgated guidance highlighting the need to rebuild readiness, the services have not consistently prioritized these efforts.
For example: The Army has identified readiness as its highest priority. The Chief of Staff of the Army published specific readiness guidance with the overarching objective of maximizing the readiness of the total force. In the memorandum, the Chief of Staff noted that readiness was the service’s number one priority and that there was “no other number one” priority.

Both the Navy and the Marine Corps emphasize the importance of rebuilding readiness. Specifically, the Vice Chief of Naval Operations testified that the Navy’s priority was implementation of the Optimized Fleet Response Plan, which is designed to support the Navy’s overall readiness recovery goals. In addition, the Assistant Commandant of the Marine Corps testified in support of the Marine Corps Posture Statement that, given the current fiscal environment, the service was working to maintain a balance between current readiness and projected future readiness, but that current readiness remains its main focus.

Air Force leaders have stated that striking a balance between today’s readiness and future modernization is important, but exceptionally difficult. Recognizing the impact that combatant commander demand and uncertain funding, among other things, can have on readiness, the Air Force does not expect to recover readiness prior to 2020. However, according to Air Force and DOD strategic guidance, the Air Force must be prepared to operate in highly contested battle spaces in the future. Therefore, the Air Force is focusing on recapitalization and modernization of its aircraft to ensure it is able to meet combatant commanders’ capability and capacity requirements in the future.

DOD has linked readiness recovery to its ability to accomplish its missions. However, the military services have not developed complete goals or comprehensive strategies for rebuilding readiness that have been validated to ensure they reflect the department’s priorities. Two interconnected key elements of sound planning are to establish comprehensive and specific goals and to establish a strategy to achieve those goals. At the department level, the Office of the Under Secretary of Defense for Personnel and Readiness is responsible for developing plans, programs, and policies for readiness to ensure forces can execute the National Military Strategy, as well as for oversight of military training and its enablers. The military services have the authority and responsibility to man, train, and equip forces for employment, and are also responsible for identifying critical readiness deficiencies and developing strategies for addressing the deficiencies. In line with these responsibilities, the Deputy Secretary of Defense established the Readiness Deputy’s Management Action Group (Readiness DMAG) in late 2011. The Office of the Under Secretary of Defense for Personnel and Readiness then charged the Readiness DMAG with synchronizing and coordinating actions and overseeing the military services’ readiness recovery efforts. Through the Readiness DMAG, DOD required the services to develop and implement readiness rebuilding plans that describe each service’s readiness goals and the time frames within which the goals could be met, with a focus on improved readiness for the full range of assigned missions. Each service has established some readiness recovery goals, but the goals capture only portions of the force, and their time frames have been extended over time.
Each service has also established readiness recovery strategies, but these strategies have been incomplete or not comprehensive and, in many cases, have not fully identified the resources required to achieve the goals the strategies support. In 2015, the services reported their readiness rebuilding plans to DOD, which included some readiness goals, strategies for achieving the identified goals, and time frames for when the rebuilding efforts would be complete. Tasked by the Office of the Under Secretary of Defense for Personnel and Readiness to establish these plans, the services selected a representation of critical force elements that would allow them to highlight progress toward identified goals. The services chose force elements that were experiencing a high pace of deployments, facing challenges in achieving readiness recovery, or key to their respective readiness recovery efforts. For example, the Navy included ballistic missile submarines, carrier strike groups, amphibious ready groups, large surface combatants, attack submarines, and patrol aircraft. As part of their initial effort, the services set goals and time frames for achieving readiness recovery. However, by the time of our review, many of the goals had been changed and the time frames extended. Table 2 outlines the key force elements that the services’ readiness recovery plans are based on and the goals and time frames for the plans. Inconsistencies exist in the individual service readiness recovery goals and in the time frames for achieving them because DOD directed the services to develop their own respective readiness recovery plans without validating the plans to ensure that they are complete and comprehensive and that they reflect the department’s priorities. For example, the services established readiness recovery goals, but these goals cover only portions of the force in each service. For instance, the Army established specific readiness recovery goals for five force elements (Brigade Combat Teams, Combat Aviation Brigades, Division Headquarters, Patriot Battalions, and Terminal High Altitude Area Defense Batteries). Also, the Army set readiness recovery goals for a large portion of its overall active component non-Brigade Combat Team force and segments of its active-duty and Army National Guard Brigade Combat Teams, but these goals do not capture the entirety of the force. Moreover, some services have extended the time frames for achieving their readiness recovery goals since the plans were initially established in 2015, primarily because they were unable to achieve the initial goals with the strategies they outlined. Additionally, each service has either established or is working to establish strategies for helping achieve readiness recovery goals, but we found that some strategies are not comprehensive or complete. For example:

Readiness recovery for the Navy is premised on successful implementation of the Optimized Fleet Response Plan. This plan seeks to provide a more sustainable force-generation model for Navy ships, as it reduces deployment lengths and injects more predictability for maintenance and training into ship schedules.
According to Navy policy, this framework establishes a readiness-generation cycle that operationally and administratively aligns forces while aligning and stabilizing manning, maintenance and modernization, logistics, inspections and evaluations, and training. As of April 2016, the Navy had established optimized schedules for five of the six elements of the fleet and had plans to complete the remaining schedule, for Amphibious Ready Groups, before the end of May 2016.

The Army’s strategy to achieve readiness goals is evolving but, as yet, incomplete. A key aspect of this strategy is to develop and implement a new force generation model called “sustainable readiness,” which the Army expects to implement in fiscal year 2017. The Army expects this model to provide increased predictability and visibility to optimize unit rotations and sustain readiness when units are not deployed. Additionally, the Army expects the model to generate more combat power and enabling capability given available resources, as well as to help define readiness goals.

The Air Force strategy to rebuild readiness is predicated on conditions of consistent funding and decreasing operational demand. Without these two conditions being met, the Air Force has stated that readiness will not improve significantly. For example, the Air Force identified five influencers of future readiness: (1) operational tempo, as reflected in the ratio of deployment-to-dwell; (2) the flying hour program; (3) critical skills availability, or having the right personnel for each position; (4) weapons system sustainment; and (5) training resource availability. Each of these influencers is affected by operational demand or consistent funding. The Air Force regularly measures its ability to increase its readiness using the five influencers. The Air Force found that while problems with any one area could lead to serious readiness problems, improvement required balanced efforts across all five areas.

The Marine Corps does not yet have a measurable readiness goal with an analytical basis or a specific strategy to meet its current overall readiness goal. The Marine Corps focuses on five institutional pillars of readiness: high quality people, unit readiness, capacity to meet combatant commander needs, infrastructure sustainment, and equipment modernization. In addition, the Marine Corps has established specific strategies to achieve goals developed for certain communities, such as aviation. For example, the Marine Corps issued the Ready Basic Aircraft Recovery Plan and the 2016 Marine Aviation Plan in an effort to mitigate current readiness challenges and recover future readiness for the aviation community.

In overseeing readiness rebuilding efforts, neither the Office of the Under Secretary of Defense for Personnel and Readiness nor the Readiness DMAG has required that the services fully identify the resources required to support achievement of service-identified goals. A viable readiness recovery effort will require both DOD and the services to develop and agree on goals that can guide the efforts of the joint force, and to clearly establish strategies that will result in the achievement of those goals. DOD has acknowledged challenges with funding, accepting that it is a constrained resource. However, the department has not identified the resources needed to fully implement readiness rebuilding efforts, and thus does not know what achieving readiness recovery will cost.
In addition, adding funding may not help the services recover the readiness of their forces in some cases. For example, the Air Force’s Guardian Angel Weapon System, within the personnel recovery service core function area, lacks experienced active-duty pararescue jumpers to meet combatant commander demand. To mitigate this problem, the Air Force continues to recruit new pararescue jumpers; currently, however, positions for the least experienced personnel are filled at well over authorized levels, while positions for mid-career personnel, who are required on all deployments and are needed to mentor and train new personnel, are filled at about half of authorized levels. Additional funding is not going to help the rebuilding efforts in the near term, as the units need time—about 7 years—to develop the least experienced personnel into mid-career and most experienced personnel, according to Air Force officials. In addition, the Chairman of the Joint Chiefs of Staff, the Vice Chief of Naval Operations, and the Assistant Commandant of the Marine Corps have all testified that DOD will not be able to address readiness problems with money alone, but that factors such as operational requirements and time must also be considered. For each service, the resource requirements needed to fully implement readiness recovery will vary by force element. For some force elements, the services understand the barriers to rebuilding readiness and in some cases have estimated portions of the expected costs. For example, we found that the Navy has estimated that total ordnance shortfalls across its aircraft carrier, cruiser, and destroyer force amount to at least $3.3 billion for items such as torpedoes and guided missiles, which are needed to fully achieve readiness recovery. In some cases, however, the services have not quantified or budgeted for the full costs of achieving identified readiness goals for specific segments of the force because readiness recovery goals have not been established. For example, lacking clearly established readiness recovery goals for the non-Brigade Combat Team portion of the National Guard and for the entirety of the Army Reserve, the Army is unable to identify the resources that would be required to achieve such goals. In addition, the Marine Corps has not articulated a measurable readiness recovery goal with an analytical basis, and as such is not able to identify the resources needed to achieve that goal. Another key element of sound planning is understanding key factors that are external to and beyond agency control and that could significantly affect achievement of long-term goals. DOD and the services have identified potential risks to achieving their readiness recovery goals—such as budget uncertainty—but they have not fully considered how to account for these risks, including how the risks will influence the assumptions on which the plans are based. Based on our work, we found that the plans rest on questionable assumptions in three areas: (1) availability of funding, (2) ability to complete maintenance on time, and (3) whether operational tempo and other factors will allow sufficient time for training. DOD has reported that time and sufficient, consistent, and predictable resourcing are needed to allow the services to rebuild readiness. In an effort to help the services improve their readiness, Congress appropriated $1 billion in overseas contingency operations funds in 2016 designated for readiness improvement efforts.
Like the rest of the federal government, however, DOD faces across-the-board spending reductions through sequestration. We previously examined the effects of sequestration on DOD, noting that the department placed an emphasis on preserving readiness when implementing spending reductions but still expected sequestration to affect plans to improve military readiness by either delaying or cancelling activities. For example, we found that the Air Force cancelled or reduced participation in most of its planned large-scale fiscal year 2013 training events. Moreover, like much of the government, DOD has been funded through continuing resolutions, which create uncertainty about both when the department will receive its final appropriation and what level of funding will ultimately be available. Recognizing these challenges, the Air Force cited funding levels for modernization and recapitalization as a risk to achieving readiness recovery within identified time frames; the Army noted that if sequestration were to return without a commensurate change to DOD’s strategy, the impact would be devastating to Army readiness; and the Navy noted that stable and consistent funding is key to implementing the Optimized Fleet Response Plan, its plan to rebuild readiness. Readiness recovery is premised on the services being able to meet maintenance time frames, but most of the services expect continued challenges in doing so. For example, over the last 5 years, shipyards have not completed the majority of required maintenance on time, primarily because high deployment rates have led to shortened, eliminated, or deferred maintenance periods and a growth in maintenance backlogs. In May 2016, we found that from fiscal years 2011 through 2014, 89 percent of aircraft carrier maintenance periods took more time than scheduled, which also increased costs. Recognizing these challenges, the Navy implemented the Optimized Fleet Response Plan in 2014 in order to provide a more sustainable schedule for ships, introducing more predictability for maintenance and training. The Navy’s readiness recovery goal of 2020 assumes successful implementation of the Optimized Fleet Response Plan. With only a portion of the fleet having entered this optimized cycle, it is too early to assess its effectiveness; but as we previously found, the first three aircraft carriers have not completed maintenance tasks on time, and of the 83 cruisers and destroyers, only 15 have completed a Chief of Naval Operations maintenance availability under the Optimized Fleet Response Plan. Extended deployments to meet global demands have resulted in greater and more costly maintenance requirements. In addition, the Marine Corps is facing significant challenges in its aviation maintenance, and the Air Force has significant shortages of maintenance personnel. The services’ readiness recovery plans are further premised on the notion that units will have the time and resources to train to meet the full range of missions assigned to them. However, the high pace of deployments, reduced time at home station, and reduced funding for conducting full-spectrum training have had an effect on individual units’ ability to train and fully recover readiness. For example, the Army has stated that one of its greatest challenges inhibiting readiness recovery is difficulty maintaining collective training proficiency in its core competencies due to a lack of personnel depth and experience.
Because the Army converted almost all Combat Training Center rotations between 2003 and 2012 to focus on counterinsurgency, opportunities to train thousands of company commanders, field grade officers, and battalion commanders on their units’ core competency missions were lost. A key part of the Army’s plan is to ensure that these soldiers have repeated full-spectrum training experience at combat training centers over the next several years. However, the Army projects increasing emergent demand that may jeopardize its ability to achieve this. In addition, high deployment rates for Air Force units have resulted in less time for units to complete their full training requirements. According to Air Force officials, high deployment rates mean there are fewer aircraft available to train on at home stations, and often the most experienced personnel are disproportionately deployed, leaving fewer experienced personnel available to train less experienced personnel at home stations. Moreover, the Air Force reported that the availability of training ranges, munitions for training, and training simulators, among other resources, were key factors for readiness rebuilding. The service has reported that while training resource availability is relatively healthy in terms of operation and maintenance funding, substantial funding is required to address long-term investment shortfalls. An element of sound planning is developing a set of metrics that will be applied to gauge progress toward attainment of the plan’s long-term goals. These metrics are then used to evaluate the plan through objective measurement and systematic analysis to determine the manner and extent to which programs associated with the plan achieve their intended goals. For example, evaluations can be a potentially critical source of information in assessing (1) the appropriateness and reasonableness of goals; (2) the effectiveness of strategies, by supplementing metrics with impact evaluation studies; and (3) the implementation of programs, such as identifying the need for corrective action. The Office of the Secretary of Defense, the Joint Chiefs of Staff, the combatant commands, and the military services assess and report, through various means and using various criteria, the readiness of forces to execute their tasks and missions. Some key reporting mechanisms include the Defense Readiness Reporting System, the Joint Force Readiness Review, and the Quarterly Readiness Report to Congress. These processes provide snapshots of how ready the force is at a given point in time. However, most of the services have not fully established metrics to track progress toward achieving readiness recovery goals. Using metrics to gauge progress toward the attainment of a plan’s long-term goals would provide the services with an objective measurement to use at specific points in identifying the extent of progress in attaining readiness recovery, and would afford the department the opportunity to know whether the efforts are achieving their intended goals. Specifically, while most of the services continue to monitor overall operational readiness through the Defense Readiness Reporting System, they have not fully developed metrics to measure progress toward achieving their readiness recovery goals. For example:

The Navy’s readiness recovery plan—the Optimized Fleet Response Plan—is based on maximizing ship operational availability. Operational availability measures the amount of time a ship can get under way and execute a mission.
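Operational availability is conventionally computed as the share of time a unit is mission-capable. The sketch below is a generic formulation with illustrative numbers, not the Navy's official model.

# Generic operational availability (Ao): uptime as a share of total time.
# Formulation and numbers are illustrative, not the Navy's official model.

def operational_availability(uptime_days: float, downtime_days: float) -> float:
    """Ao = uptime / (uptime + downtime)."""
    return uptime_days / (uptime_days + downtime_days)

# Example: a ship mission-capable 240 days of a 365-day year (125 days down).
print(f"Ao = {operational_availability(240, 125):.0%}")  # ~66%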
The Navy has developed long-range ship schedules that project operational availability output for various force types, such as carrier strike groups, over the next 9 years. While the Navy’s projections show some progress toward its operational availability readiness recovery goals, the Navy has not set specific benchmarks, interim goals, or milestones that it expects to achieve on an annual or other basis to evaluate the effectiveness of readiness recovery efforts. Navy officials said that they have projections for readiness recovery and some measures in place to keep leadership informed of recovery efforts, but that they have not set specific benchmarks, interim goals, or milestones for tracking progress.

The Army established thresholds for various metrics that affect readiness—such as sustainable deployment rates—for the select force elements that form the foundation of its readiness recovery plan. However, Army officials told us that these thresholds and metrics were not intended to be used to track the Army’s readiness progress. Rather, officials told us that the Army planned to use its process for regularly tracking, reporting, and projecting readiness to measure progress toward achieving readiness recovery, which includes periodic reports on readiness. Part of the process includes regularly monitoring the percentage of Brigade Combat Team and non-Brigade Combat Team units reporting the highest levels of readiness. However, the Army’s process does not set interim benchmarks for readiness recovery. Additionally, the Army does not track, report, or project readiness against the thresholds and metrics it has established for specific active component force elements or against its broader readiness goals for Brigade Combat Team and active component non-Brigade Combat Team forces.

In early 2016, Air Force officials described operational tempo and other conditions that are necessary to begin to recover readiness and stated that until those conditions are met, readiness will not improve significantly. Once those conditions are met, readiness is expected to improve over an 8- to 10-year period. The Air Force will continue to use current readiness metrics, including operational tempo as reflected in the ratio of deployment-to-dwell and critical skills availability—having the right personnel for each position—to chart progress toward meeting its readiness recovery goal. However, the Air Force has stated that it will be at least 2020 before its starting conditions are met.

The Marine Corps does not yet have a specific strategy or metrics to track its progress in achieving its overall readiness goal. The Marine Corps has established specific strategies and accompanying metrics to achieve goals developed for certain force communities, such as aviation, one of its most stressed communities. For example, the Marine Corps’ primary metric for assessing aviation readiness recovery is having sufficient aircraft available to fully train a squadron. While Marine Corps officials state that they regularly monitor readiness through multiple forums, the Marine Corps has not set specific benchmarks, interim goals, or milestones to evaluate the effectiveness of overall or force-community-specific readiness recovery efforts. Marine Corps officials explained that they have not been required to do so.
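To illustrate the kind of interim-milestone tracking the report finds missing, here is a minimal Python sketch; the targets and reported values are entirely hypothetical.

# Hypothetical sketch of tracking interim milestones against a readiness
# recovery goal; every value below is invented for illustration.

from dataclasses import dataclass

@dataclass
class Milestone:
    fiscal_year: int
    target: float    # e.g., share of units reporting the highest readiness levels
    reported: float  # readiness actually reported that year

def on_track(plan: list) -> bool:
    """True only if every interim milestone has met its target."""
    return all(m.reported >= m.target for m in plan)

plan = [
    Milestone(2017, target=0.40, reported=0.42),
    Milestone(2018, target=0.50, reported=0.46),  # missed: flags slippage early
]
for m in plan:
    status = "met" if m.reported >= m.target else "missed"
    print(f"FY{m.fiscal_year}: target {m.target:.0%}, reported {m.reported:.0%} ({status})")
print("On track:", on_track(plan))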
Moreover, according to officials, because the services lack fully developed metrics for measuring progress toward their intended goals, DOD has not developed a method to evaluate readiness recovery efforts. Without metrics and a method for evaluating the effectiveness of overall readiness recovery efforts through objective measurement and systematic analysis, DOD may not be able to ensure that the department is achieving its intended goals. With decreased commitments to Afghanistan and Iraq, DOD has seen improvements in the readiness of certain key force elements in recent years, such as Army Brigade Combat Teams and Marine Corps Infantry Battalions. DOD still faces low overall readiness rates, however, which the services expect to persist into the next decade. The department recognizes the importance of recovering the readiness of the force and has been taking steps, such as the establishment of service readiness recovery plans and changes to its force management process, but there are other areas where the department could refine its approach in ways that might bring meaningful improvements to the readiness recovery effort. With the challenges posed by ongoing demand for forces around the world and the consequent high pace of operations for portions of the force, decreased time for maintenance and training, and budget uncertainty, it is important that DOD incorporate sound planning into its readiness recovery efforts. The effort to recover readiness supports the department's mission of providing military forces needed to deter war and to protect the security of the United States. However, we found some fundamental challenges with the overall readiness recovery effort. Specifically, the services' readiness recovery plans do not include comprehensive goals, strategies for achieving the goals, metrics with which to measure progress against identified goals, or a full consideration of external factors, including how those factors will influence the underlying assumptions of readiness recovery. In addition, DOD has not validated the service-established readiness rebuilding goals, nor does it have metrics with which it can evaluate readiness recovery efforts to determine the extent to which they reflect the department's priorities and are achieving intended goals. Without metrics against which to measure the services' progress toward agreed-upon, achievable readiness recovery goals, DOD will be unable to determine the effectiveness of readiness recovery efforts or assess its ability to meet the demands of the National Military Strategy, which may be at risk. To ensure that the department can implement readiness rebuilding efforts, we recommend that the Secretary of Defense direct the Secretaries of the Departments of the Army, the Navy, and the Air Force to take the following three actions: (1) establish comprehensive readiness rebuilding goals to guide readiness rebuilding efforts and a strategy for implementing identified goals, to include the resources needed to implement the strategy; (2) develop metrics for measuring interim progress at specific milestones against identified goals for all services; and (3) identify external factors that may impact readiness recovery plans, including how they influence the underlying assumptions, to ensure that readiness rebuilding goals are achievable within established time frames. This last action should include, but not be limited to, an evaluation of the impact of assumptions about budget, maintenance time frames, and training that underpin the services' readiness recovery plans.
To ensure that the department has adequate oversight of service readiness rebuilding efforts and that these efforts reflect the department's priorities, we recommend that the Secretary of Defense take the following two actions: (1) validate the service-established readiness rebuilding goals, strategies for achieving the goals, and metrics for measuring progress, and revise them as appropriate; and (2) develop a method to evaluate the department's readiness recovery efforts against the agreed-upon goals through objective measurement and systematic analysis. In commenting on the classified version of this report, DOD partially concurred with three recommendations and concurred with two recommendations. The June 2016 classified report and this unclassified version have the same recommendations. DOD's comments are reprinted in their entirety in appendix II. DOD also provided technical comments, which we incorporated into the report as appropriate. DOD partially concurred with our three recommendations that the secretaries of the Military Departments (1) establish comprehensive readiness rebuilding goals and a strategy for implementing identified goals, (2) develop metrics for measuring interim progress at specific milestones against identified goals, and (3) identify external factors that may impact readiness recovery plans. DOD noted that the department was currently working to define for the services the "ready for what," which will provide the target for their readiness recovery goals. DOD further noted that the department would continue to work with the military services to refine their goals and the requisite resources, as well as the metrics and milestones required to implement and track their recovery strategies. The department raised concerns with our addressing the recommendation to both the Secretary of the Navy and the Commandant of the Marine Corps in our draft, stating that the Marine Corps is part of the Department of the Navy. We have revised the recommendation to reflect this comment. DOD concurred with our two recommendations that the Secretary of Defense (1) validate service-established readiness rebuilding goals, strategies for achieving the goals, and metrics for measuring progress, revising as appropriate, and (2) develop a method to evaluate the department's readiness recovery efforts against the agreed-upon goals through objective measurement and systematic analysis. The department stated that it would continue to work with the military services to validate and evaluate their readiness recovery goals and the metrics for measuring their progress. We are sending copies of this report to appropriate congressional committees; the Secretary of Defense; the Chairman of the Joint Chiefs of Staff; and the Secretaries of the Air Force, Army, and Navy. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3489 or pendletonj@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. This report is a public version of our June 2016 classified report. DOD deemed some of the information in that report SECRET, which must be protected from public disclosure.
Therefore, this report omits SECRET information and data such as readiness trend data, deployment data, and select details of the services' readiness recovery plans. Although the information provided in this report is limited in scope, it addresses the same objectives as the classified report (except that the discussion of readiness levels was removed from the first objective) and includes the same recommendations. Also, the overall methodology used for both reports is the same. To describe the factors that affect reported readiness levels and to identify the steps the department is taking to manage the impact of continued deployments on readiness, we reviewed and analyzed readiness data and information from the Office of the Secretary of Defense, the Joint Staff, the combatant commands, and each of the military services. Our analysis covered data from fiscal year 2008 through fiscal year 2015 to maximize the amount of available and reliable data for us to determine meaningful trends. We also analyzed data from the Joint Staff on the global demand for forces to document trends in demand from fiscal year 2012 through fiscal year 2016. We identified the trend in overall demand as identified by the combatant commands, as well as the trend in the portion of this overall demand that DOD provided forces to support. We evaluated the department's overall strategic-level readiness assessment (RA) and the RA of each of the military services to document trends in reported readiness. To determine historical and current readiness levels and key factors that contributed to those levels, we analyzed Quarterly Readiness Reports to Congress, Joint Forces Readiness Review documents, and the services' readiness assessments. We also conducted interviews with Office of the Secretary of Defense, Joint Staff, and combatant command officials to discuss global demand trends, and obtained documentation, such as departmental guidance and related briefings, and reviewed these documents to understand DOD's efforts to reform the departmental process used to source global demands. We interviewed Joint Staff officials to discuss these reform efforts and the subsequent impact the efforts had on overall readiness recovery. In addition, we assessed the reliability of the readiness data and global demand data through standardized questionnaires, reviews of documentation, and discussions with officials about data-collection processes. We concluded that both sets of data were sufficiently reliable for our purposes of reporting current and historical readiness trends and of documenting instances where the Secretary of Defense provided forces in support of combatant command requirements. To assess DOD's implementation and oversight of department-wide readiness rebuilding efforts, we reviewed DOD's plans for managing readiness rebuilding efforts as outlined in Readiness Deputy's Management Action Group meeting documentation and summaries and a variety of readiness reporting documents and briefings submitted by the services, the Joint Staff, and combatant commanders. We reviewed DOD strategic-level documents and guidance, such as the Guidance for the Employment of the Force and the Global Force Management Implementation Guidance, to understand DOD's investment in readiness recovery.
We then analyzed the service plans for rebuilding readiness, including reviewing relevant documents and interviewing officials to identify and understand (1) the underlying assumptions and analysis behind those plans, (2) the long-term goals and time frames for achieving these goals, and (3) the interim goals and means to assess progress. We evaluated the extent to which the service readiness recovery goals face significant challenges within the time frames identified by analyzing service documents, including internal readiness recovery projections, milestones, and risks associated with readiness recovery, and interviewing service officials and operational units. By reviewing the service readiness recovery plans and obtaining service officials' views on force elements that are key to rebuilding readiness, we selected several key force elements from each service to complete a more detailed, though non-generalizable, case study assessment of plans for rebuilding readiness of specific force elements, including historical reported readiness and demand and sourcing trends; readiness recovery strategies; and specific risks to readiness recovery for these force elements. We analyzed these documents and reviewed DOD's efforts to oversee department-wide readiness rebuilding to determine whether they included the key elements of sound strategic planning that GAO has identified in the course of prior work. Specifically, we focused on six key elements that should be incorporated into sound strategic planning to facilitate a comprehensive, results-oriented framework. We selected key elements that the department would benefit from considering in its effort to achieve readiness recovery and meet the intent outlined in strategic guidance. Key elements include (1) a mission statement; (2) long-term goals; (3) strategies to achieve goals; (4) external factors that could affect goals; (5) metrics to gauge progress; and (6) evaluations of the plan to monitor goals and objectives. We determined that these key elements of sound planning were the most relevant for evaluating DOD's oversight of department-wide readiness rebuilding efforts. We compared DOD's efforts to rebuild readiness with these key elements of sound planning practices to identify any gaps that may impact DOD's ability to recover the readiness of the force. We interviewed Office of the Secretary of Defense and Joint Staff officials to discuss DOD's role in the readiness rebuilding effort, changes being implemented to allow the services to better focus on rebuilding their readiness, and steps being taken to address challenges in achieving readiness recovery. We also interviewed officials at select combatant commands to discuss their coordination with DOD for readiness recovery, as well as any impacts resulting from service readiness recovery efforts. We conducted this performance audit from June 2015 to September 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
In addition to the contact named above, individuals who made key contributions to this report include Patty Lentini and Kristy Williams, Assistant Directors; Paul Seely; Mike Silver; Sabrina Streagle; Nicole Volchko; Erik Wilkins-McKee; and Richard Winsor.
For over a decade, DOD deployed forces to support operations in Iraq and Afghanistan, and is now supporting increased presence in the Pacific and emerging crises in the Middle East and Eastern Europe. These deployments have significantly stressed the force. The House Report accompanying the National Defense Authorization Act for Fiscal Year 2016 included a provision that GAO review DOD's efforts to rebuild military readiness. This report (1) describes the factors that affect reported readiness levels and DOD's efforts to manage the impact of deployments on readiness, and (2) assesses DOD's implementation and oversight of department-wide readiness rebuilding efforts. This report is a public version of a previously issued classified product and omits information DOD identified as SECRET, which must be protected from public disclosure. GAO analyzed and reviewed data on reported readiness rates and departmental readiness rebuilding efforts. GAO interviewed DOD, Joint Staff, and combatant command officials regarding current demand and readiness rates and challenges with rebuilding military readiness. GAO also conducted separate reviews of the readiness of the military services. The Department of Defense (DOD) recognizes that more than a decade of conflict, budget uncertainty, and force structure reductions have degraded military readiness, and the department has efforts under way to manage the impact of deployments on readiness. The military services have reported persistently low readiness levels, which they have attributed to emerging and continued demands on their forces, reduced force structure, and increased frequency and length of deployments. For example, the Air Force experienced a 58 percent decrease in the number of fighter and bomber squadrons from 1991 to 2015 while maintaining a persistent level of demand from the combatant commands for the use of its forces. In addition, the Navy has experienced an 18 percent decrease in its fleet of ships since 1998 and an increase in demand, resulting in the deployment lengths for many ships increasing from 7 months to a less sustainable 9 months. DOD officials have indicated that overall demand has been decreasing since 2013, but the department has reported that the ability to rebuild capability and capacity is hindered by continued demand for some forces. To mitigate the impact of continued deployments on readiness, the Joint Staff has focused on balancing the distribution of forces for high-priority missions with the need to rebuild the readiness of the force. Efforts include revising major plans to better reflect what the current and planned force is expected to achieve and improving the management of DOD's process for sourcing global demands by, among other things, balancing the supply of forces with the minimum required to meet global demands. However, it is too soon to tell what impact implementation of these initiatives will have on DOD's readiness recovery efforts because the department is still working to complete implementation. DOD has stated that readiness rebuilding is a priority, but implementation and oversight of department-wide readiness rebuilding efforts have not fully included key elements of sound planning, putting the rebuilding efforts at risk. Key elements of sound planning for results-oriented outcomes include a mission statement supported by long-term goals, strategies for achieving the goals, metrics, and an evaluation plan to determine the appropriateness of the goals and effectiveness of implemented strategies. 
In 2014, DOD tasked the military services to develop plans for rebuilding readiness. Each service developed a plan based on the force elements that were experiencing a high pace of deployments or facing challenges in achieving readiness recovery. In 2015, the services reported to DOD their readiness rebuilding plans, which identified readiness goals and time frames for achieving them; however, these goals were incomplete, and some of the time frames have since been extended. GAO found that the services have also not defined comprehensive strategies that include the resources required to achieve the identified goals, nor have they fully assessed the effect of external factors such as maintenance and training on readiness rebuilding goals. Moreover, the services have not fully established metrics that the department can use to oversee readiness rebuilding efforts and evaluate progress toward achieving the identified goals. Without DOD incorporating key elements of sound planning into recovery efforts, and amid competing priorities that the department must balance, successful implementation of readiness recovery plans may be at risk. GAO is making five recommendations, including that DOD and the services establish comprehensive readiness goals and strategies for implementing them, as well as associated metrics that can be used to evaluate whether readiness recovery efforts are achieving intended outcomes. DOD generally concurred with GAO's recommendations.
Over the years, the Park Service’s basic mission of protecting the national park system for the enjoyment of current and future generations has not changed. Since the first park unit was created at Yellowstone over 100 years ago, the system has grown to encompass 369 units covering about 80 million acres and includes parks, monuments, and historic sites. The value of the infrastructure of buildings, roads, bridges, utility systems, and other facilities constructed to provide access to or to make use of natural resources on Park Service lands has grown to an estimated $35 billion. In addition, the Park Service’s duties and responsibilities have expanded to include protecting endangered and threatened species, maintaining or restoring environmental quality, identifying and assessing the effects of its own activities on the environment and natural resources, and developing long-range plans. Recognizing the continuing growth and popularity of the national park system and the Park Service’s increasing responsibilities, the Congress has increased appropriations to operate and maintain the Park Service by more than 30 percent (in constant dollars) over the past 10 fiscal years to about $1.1 billion in fiscal year 1994. However, despite these funding increases, we and others have shown that the health of the national park system is deteriorating. As recently as August 1995, we reported that the scope and quality of services for visitors within the park system have been declining and that many park units lack sufficient data to determine the overall condition or trend of their cultural and natural resources. The Congress and the Department of the Interior have taken measures to help deal with the parks’ deteriorating conditions. Among other things, additional funding sources have been made available to park units. Specifically, park units have been permitted to keep some of the funds that are generated from in-park activities without going through the annual congressional appropriations process. We are referring to these funds as special account funds. Park Service headquarters officials identified eight special accounts and provided us financial data for these accounts. The total value of these accounts was about $45 million in fiscal year 1994. Of the eight accounts, five are authorized to recover the costs of particular in-park activities. The other three accounts are not designed to recover costs, but provide park units with cash and noncash benefits to be used for a variety of purposes within the parks. In fiscal year 1994, according to data provided by the Park Service, the cost-recovery accounts totaled $6.5 million; the noncost-recovery accounts were valued at $38.5 million. Following are descriptions of each of these eight special accounts. The living history account is used by park units that offer special interpretive programs or sell merchandise that interprets the history of the park unit. For example, at Lowell National Historical Park, tours of a restored 19th century cotton mill are provided so that visitors can experience the actual workings of this historic industrial facility. Park units charge for these kinds of activities and use the funds collected to sustain the interpretive program and the production of merchandise. In fiscal year 1994, 30 park units used the living history account; the total amount collected was $2.2 million. 
Special-use permits are issued by park unit managers for activities within a park unit when a service or privilege is provided to a visitor beyond that received by the general public. When fees are charged, the parks are to recover and retain the costs of providing the necessary services associated with the permitted activity. At the park units we visited, the associated costs were usually personnel costs of the park unit employees needed to oversee the permitted activity. Weddings, receptions, television commercials, and filmmaking are types of activities that required permits and supervision by park unit personnel, and fees were charged accordingly. In fiscal year 1994, 98 park units had special-use fees. In total, $3.8 million in fees was collected in fiscal year 1994. A park unit uses the mess operations account for collections made for food provided by the Park Service to Park Service and state employees in the field. These employees include firefighters and trail maintenance crews. In fiscal year 1994, three park units used this special account, and $68,000 was collected. The Park Service also leases historic properties within park units to tenants who pay rent. The rental revenue is then used to maintain the property. In fiscal year 1994, seven park units used this special account, and $285,000 was collected. The damaged park resources account is used by park units to recover from the public the cost of restoring damaged resources. When a resource is accidentally damaged, the park unit recovers whatever costs it can from those who damaged the resource and applies those funds to restoring the resource. In fiscal year 1994, seven park units used this special account, and the total amount collected was $128,000. Donations to an individual park unit include cash from the general public that is put in the donation boxes at visitors’ centers as well as checks that are mailed by individuals, corporations, or other groups. If a donation is not marked for a specific purpose, the park unit manager has discretion in how to spend it. In fiscal year 1994, the donations at 273 park units totaled $8.2 million. Cooperating associations generally support a park unit by providing staff at bookstores, volunteers who assist in interpretive programs, and/or cash. Park unit managers work with cooperating association staff to determine the types of support to be provided. Nationwide, there are 65 cooperating associations serving almost every park unit. In fiscal year 1994, the services and cash these associations provided were worth $16.4 million. Over the past few years, the Park Service has been requiring that the concessioners in park units establish commercial bank accounts into which the concessioners deposit funds for improving, rehabilitating, and constructing the facilities that directly support their services. The use of these accounts has increased over the past few years as park unit managers look for ways to improve the facilities that serve visitors beyond what is normally provided through the annual appropriations. Expenditures from concessioners’ special accounts are to be made only for improvements authorized by park unit managers. For example, replacing a roof on a lodge could be paid for from a concessioner’s special account. According to data from the Park Service’s headquarters, in fiscal year 1994, 21 park units had this type of account; headquarters officials estimated that the deposits totaled $13.9 million. 
However, Park Service officials acknowledged that the data were not complete because the Park Service did not have a system in place for fiscal year 1994 to routinely or systematically collect information on concessioners' special accounts. In March 1995, the Park Service introduced a computerized tracking system for concessioners' special accounts to address this situation. Although the system is still being implemented, Park Service officials stated that it should provide more complete data on the number of concessioners' special accounts and the amounts in them for fiscal year 1995. Table 1 provides information about the eight special accounts for which the Park Service provided us information. It shows the number of park units that used each account, the total amount in each account, and the legislative authority for each account. To provide a perspective on the amount of special account funds available at individual park units, we gathered information on a judgmental sample of 27 park units that included 20 of the largest parks in the national park system. The details on the amount of funds in special accounts at these park units are included in appendix I. In collecting this information, we found that, except for concessioners' special accounts, there were no significant differences between the amount of special account funds reported by Park Service headquarters and the amounts reported to us by the parks. While Park Service headquarters reported $13.9 million in concessioners' special accounts nationwide, the 27 park units in the sample reported to us a total of $19.4 million—a difference of $5.5 million. Park Service officials attributed most of the discrepancies to park unit managers' differing interpretations of what is to be included in concessioners' special accounts. Table 2 compares the funds available in concessioners' special accounts as reported by Park Service headquarters with the amounts reported by the 14 park units in our sample that had such accounts. We discussed the $5.5 million difference between the Park Service headquarters' total for concessioners' special accounts and the individual figures we obtained at the 14 park units with concession officials at Park Service headquarters. On the basis of these discussions, we found that the discrepancies were due to differing interpretations among Park Service concessions officials—both at headquarters and at the individual park units—as to what should be counted as concessioners' special accounts. For example, the $7.909 million in concessioners' special accounts reported to us by a Yellowstone National Park concession official includes funds for a cyclic maintenance program that had $3.6 million in deposits for fiscal year 1994. According to Park Service headquarters officials, these deposits should not be considered a concessioners' special account because the $3.6 million is money that is used to fulfill normal contractual maintenance requirements. However, a Yellowstone National Park concession official told us that this fund was established so that the concessioner could repair and maintain the historic structures that had fallen into disrepair due to neglect by the previous concessioner. According to the Yellowstone National Park concession official, maintenance provided by these funds is not routine; it is more extensive than that required of other concessioners, such as the preservation of historic log structures, because of the poor condition of the structures.
We have included the $3.6 million in our totals because, as indicated by a Yellowstone National Park concession official, the expenditures from this account are over and above normal maintenance and similar to expenditures made from concessioners' special accounts at the parks we visited. According to Park Service headquarters officials, other differences between what the park units and headquarters reported could be due to different reporting cutoff dates between the concessioners and the Park Service, so that some deposits appeared in a preceding or succeeding year. For example, Statue of Liberty National Monument officials reported to us $1.425 million in concessioners' special accounts for fiscal year 1994, while Park Service headquarters included this amount as a fiscal year 1993 deposit. Park Service management has been aware of the problem with tracking concessioners' special accounts since at least May 1992. At that time, we reported on the Park Service's inability to track accounts set aside by concessioners to improve government-owned facilities that they used (forerunners to concessioners' special accounts). In response to our recommendation that the Park Service develop procedures to track these accounts, the Park Service introduced its "Special Account Tracking System" in March 1995. Although the Park Service is still implementing this system, concession officials at headquarters thought it would improve the accuracy and consistency of the data maintained by Park Service headquarters on concessioners' special accounts for fiscal year 1995. Once this system is fully implemented, we plan to review whether it is providing more accurate information. To determine whether special accounts were being used for authorized purposes, we conducted detailed, on-site reviews of expenditures at six park units. The six park units used six of the eight types of special accounts. These park units did not receive any income from leasing historic properties or have any damaged resources that were subject to reimbursement. Our review showed that the special account expenditures at the six park units were consistent with the purposes for which the accounts were established. However, we noted that a concessioners' special account with deposits of $299,500 had been improperly established at one park unit. No expenditures had been made from the account, however, and park unit officials are in the process of taking action to correct the situation. The six park units we visited had three special accounts that were essentially for recouping the costs of specific in-park activities. These cost-recovery accounts were for living history demonstrations, special-use permits, and mess operations. Two of the six park units we visited had living history accounts, and they used most of the fee revenues to defray the salary expenses of the park unit employees who provided special tours or educational experiences. The remaining funds were used to purchase supplies to support these activities. At Lowell National Historical Park, staff provided tours of the locks and floodgates of the canal surrounding the town and provided interpretation for the Boott Cotton Mills Museum, including the weaving room and interactive exhibits about the industrial revolution. In addition, under a cooperative agreement with the University of Massachusetts, teachers provided educational experiences to students, including such hands-on activities as working on an assembly line, weaving, and role-playing as immigrants and inventors.
In fiscal year 1994, the park unit collected $164,000 from the Boott Cotton Mills exhibits and the educational program with the University of Massachusetts. As authorized, the fees from these activities were used to pay the salaries of the teachers and the park unit employees who provided the services and to purchase supplies to support the activities. Carlsbad Caverns National Park also used the living history account. At Carlsbad, visitors are charged a fee to tour portions of the caverns with fragile or sensitive resources that need protection or when the number of tour participants must be limited due to physical conditions or some other reason. For example, because touring the Hall of the White Giant cavern requires crawling through tight passageways and some free climbing, knee pads and gloves are recommended, and the number of participants in each tour group is limited to eight. In fiscal year 1994, the park unit collected $170,000 in fees from these tours, which were deposited into its living history account. The fees were used to pay the salaries of the park rangers who provided the tours. Five of the park units we visited had issued special-use permits during fiscal year 1994 and were collecting fees for the expenses incurred by the park unit as a result of the activity for which the special-use permit was issued. In fiscal year 1994, the Jefferson National Expansion Memorial collected $76,000; Mesa Verde National Park collected $1,000; Grand Canyon National Park collected $71,000; Lowell National Historical Park collected $7,000; and Sequoia and Kings Canyon National Parks collected $11,000. The specific types of special-use activities at these park units varied considerably. For example, the activities at Sequoia and Kings Canyon National Parks included weddings and cabin rental management, the activities at Grand Canyon National Park included commercial filming and whitewater rafting, and the activities at Mesa Verde National Park included photography workshops. The five park units that charged for special-use permits generated about $166,000 for fiscal year 1994 from these activities. In most instances, the expenses incurred were the salary costs of park unit staff who provided the special services. Other costs were for support expenses, such as supplies. The documentation we reviewed indicated that the expenditures supported only special-use activities. Sequoia and Kings Canyon National Parks used the mess operations account for collections made for meals furnished by the Park Service to Park Service and state employees—firefighters and trail maintenance crews—in the field. In fiscal year 1994, Sequoia and Kings Canyon National Parks collected $39,000 that was used to provide meals and purchase food preparation equipment. In fiscal year 1994, the National Park Service valued the amount of funds in noncost-recovery accounts at $38.5 million. Funds from these accounts, which are used to provide benefits for a variety of purposes, come from three sources—donations, cooperating associations' donations, and concessioners' special accounts. Table 3 shows the amount of funds available during fiscal year 1994 for the three noncost-recovery accounts at the six park units we visited. The Secretary of the Interior can accept donations and use them for the purposes of the national park system.
In fiscal year 1994, donations at the six park units we visited totaled $205,000 and included cash from donation boxes in visitors' centers or other locations as well as donations sent directly to a park unit. The expenditures of donated funds varied at the six park units we visited, but all were used to further the purposes of the park system. For example, at Carlsbad Caverns National Park, donations were used to pay overtime salaries and purchase photography supplies. At Grand Canyon National Park, donations were used to purchase a computer and for search and rescue operations. The Jefferson National Expansion Memorial used its donated funds to purchase supplies, pay for training and travel for interpretive staff, and provide educational displays. At Mesa Verde National Park, a computer and printer for the park's Interpretive Division were purchased with donated funds. Sequoia and Kings Canyon National Parks used donated funds for a trail reconstruction project. Lowell National Historical Park received donations in fiscal year 1994 but reported no expenditures, choosing to spend the donations at a later date when a particular need arises. Park units are authorized to do this. The park cooperating associations were created to aid the Park Service in its mission of education and service. They provide noncash benefits to park units in the form of salaries for the nonpark personnel working in bookstores in visitors' centers, compensation for the volunteers who help with interpretive and other educational programs, and the publication of park unit newspapers. The cooperating associations may also provide cash donations to the park units. In fiscal year 1994, the benefits from the cooperating associations at the six park units we visited were valued at $2.8 million. With one exception, the benefits the cooperating associations provided supported education and service to the park units and their visitors as authorized. For example, at the Jefferson National Expansion Memorial, the cooperating association provided the park with 44 full-time and part-time paid cooperative staff to assist visitors in the Museum of Westward Expansion, the Old Courthouse, the park library, and at various other exhibits. In addition, the association paid for travel and training for Park Service employees in the interpretive branch for a total donation of $985,000 in fiscal year 1994. At Grand Canyon National Park, donations valued at $1.2 million from the cooperating association provided a stipend and paid other expenses, such as uniforms and supplies, to students who worked part-time in all areas of the park and paid for about seven full-time association employees who spent about half of their time providing information to visitors in stores run by the cooperating association. In addition, the association provided a new trailside exhibit and renovated the historic Kolb Studio's bookstore and art gallery. At Sequoia and Kings Canyon National Parks, cooperating association donations totaled $87,000 in fiscal year 1994. These donations included salaries of 11 part-time guides and 3 part-time ticket collectors for the cave tours, several interpretive exhibits, and free publications, such as the park unit's newspaper. The cooperating association also provided $3,600 toward the salary of a seasonal park ranger. When the Park Service headquarters official responsible for the cooperating association program visited the park, he told association representatives that this was not an authorized expenditure.
At that time, the cooperating association discontinued the practice. At the other park units we visited—Lowell National Historical Park ($8,000), Carlsbad Caverns National Park ($392,000), and Mesa Verde National Park ($134,000)—the cooperating associations also provided salaries for bookstore staff and published the park unit's newspaper. Included in the cooperating association's donation to Mesa Verde National Park were funds toward the construction of a new interpretive center near the park entrance. Concessioners' special accounts are contractual arrangements between the Park Service and the concessioners. These arrangements occur when the concessioners and park unit managers agree that the concessioners will establish commercial bank accounts that are to be used to rehabilitate and construct the facilities that directly support the concessioners' services. These commercial bank accounts are established in addition to, or sometimes in lieu of, franchise fees, which, in contrast, are deposited in the U.S. Treasury. Any expenditures from these special accounts must be authorized by park unit managers. Concessioners' special accounts were established at three of the six park units we visited—Mesa Verde National Park, Grand Canyon National Park, and Sequoia and Kings Canyon National Parks. At these park units, deposits to the concessioners' special accounts totaled $1.4 million in fiscal year 1994. We found that the expenditures from the concessioners' special accounts at these park units were made for authorized purposes. Mesa Verde National Park had one concessioner's special account. The deposits for fiscal year 1994 totaled about $44,000. Expenditures were made from this account to purchase and install bear-proof trash cans at a government-owned, concessioner-operated campground. Grand Canyon National Park had seven separate concessioners' special accounts. The deposits to these accounts totaled about $895,000 for fiscal year 1994. Expenditures were made from these accounts for several projects, such as painting and repairing a historic railway depot and leasing a van to transport employees' dependents to and from the concessioner-operated day-care facility. In several instances, funds had not yet been expended from the concessioners' special account but were being accumulated to fund more costly projects. For example, a concessioner's special account with a fiscal year 1994 balance of about $9,900 was earmarked for the construction of a backcountry toilet facility estimated to cost about $40,000. When we visited Grand Canyon National Park, we identified one concessioner's special account where the concessioner had made deposits of about $299,500 for 1994. The funds deposited into this concessioner's special account had, in previous years, been deposited into the U.S. Treasury as franchise fees. We noted, however, that the deposits to the concessioner's special account had occurred before the effective date of the agreement between the concessioner and park unit officials to establish the account. Since then, action has been initiated by park unit officials to remove the 1994 payments from the concessioner's special account and deposit them into the U.S. Treasury. In addition, park officials told us that deposits occurring after the effective date of that agreement were going into the concessioner's special account. At Sequoia and Kings Canyon National Parks, the concessioner's special account was used to replace the Grant Grove food market, which was destroyed by fire in 1992.
This was a government-owned, concessioner-operated facility and was a major project for the park. Construction on the building began in July of 1994 and was completed in the spring of 1995 at a total cost of about $1.2 million. We provided a draft of this report to the Department of the Interior for its review and comment. We met with officials from the Office of the Assistant Secretary for Fish, Wildlife, and Parks, including the Assistant to the Assistant Secretary; the Office of the Solicitor; and the National Park Service to obtain their comments. Generally, these officials agreed that the information provided in the report was accurate. In response to their comments, we incorporated technical corrections and clarifying information into this report where appropriate. We performed our review between April 1995 and April 1996 in accordance with generally accepted government auditing standards. Our scope and methodology are explained in appendix II. As requested, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after the date of this letter. At that time, we will make copies available to interested congressional committees and Members of Congress; the Secretary of the Interior; the Director, National Park Service; and other interested parties. We will also provide copies to others upon request. Please call me at (202) 512-3841 if you or your staff have any questions. Major contributors to the report are listed in appendix III.
Pursuant to a congressional request, GAO examined the National Park Service's (NPS) special account funds, focusing on: (1) the sources and amounts of these funds; and (2) whether the expenditures of these funds are consistent with intended purposes. GAO found that: (1) the value of the eight NPS accounts reviewed totaled $45 million; (2) five of these accounts recovered the costs associated with in-park activities and the other three provided cash and noncash benefits; (3) in 1994, cost-recovery accounts totaled $6.5 million and non-cost-recovery accounts totaled $38.5 million; (4) cost-recovery accounts are funded through living history demonstrations, reimbursements from mess operations, historical property leases, payments for park damages, and special-use fees; (5) non-cost-recovery accounts receive funding from various donations and cooperating associations that operate bookstores on park premises; (6) the associations provide a variety of in-park services related to park themes, and concessioners construct facilities that support concession services; (7) significant discrepancies exist between NPS and individual park financial data on the amount of funds in special accounts established by concessioners; (8) the actual amount of money in the special fund accounts is several million dollars higher than reported by NPS; (9) NPS attributes some of these discrepancies to inaccurate tracking of concessioners' accounts; and (10) all but one of the expenditures from the special fund accounts were for authorized purposes.
Federal law mandates support for farmers through various programs, including direct payments. USDA, through its Commodity Credit Corporation (CCC), calculates direct payments using a formula that factors in "base acres," a measure of a farm's crop production history based on the number of acres planted on the farm during certain past years. The term base acres refers to a farm's average planted acreage of specific crops during those years; the term does not refer to specific physical acres on that farm. The direct payment formula uses a fixed percentage of the average number of acres planted on the farm from 1998 through 2001 and multiplies that number by the farm's historical crop yield and a statutorily fixed payment rate. The percentage and payment rates for each crop are specified in legislation, commonly referred to as farm bills, passed by Congress roughly every 5 years. For 2009 through 2011, this percentage was set at 83 percent; for 2012, it was set at 85 percent. Figure 1 illustrates the process for calculating a producer's direct payment (an illustrative numeric sketch also appears below). Through this system, a producer's direct payments are based on the historical production of a particular crop. Moreover, producers have almost complete flexibility in deciding which crops to plant, and they receive payments as long as they meet eligibility criteria, even if they decide to plant different crops or not plant crops at all. In years in which they do not plant, however, the farm bill requires that the relevant land be maintained in accordance with sound agricultural practices. For example, producers must take steps to minimize the growth of weeds on the land. Producers also are required to report planting information each year on forms called acreage reports. The United States has classified direct payments as meeting World Trade Organization rules for nontrade-distorting payments; direct payments are not tied to specific production or prices, and they are generally deemed not to distort international agricultural markets. After learning of instances where farm payments were made to individuals not involved in farming, Congress enacted the Agricultural Reconciliation Act of 1987, commonly referred to as the Farm Program Payments Integrity Act. The act, among other measures, sets eligibility criteria to ensure that only individuals and entities "actively engaged in farming" receive certain farm program payments. Specifically, under the 2008 Farm Bill, recipients of direct payments, Average Crop Revenue Election (ACRE) payments, and counter-cyclical payments are required to be actively engaged in farming. FSA is responsible for ensuring that direct payment recipients meet program eligibility criteria. FSA carries out this responsibility through its headquarters office, 50 state offices, and approximately 2,200 county offices. Producers file with their local FSA county office a farm operating plan in which they document the number of recipients qualifying for payments, the name of each payment recipient, and each recipient's role in the farming operation and share of profits and losses. Producers must update this plan when a change in their operation occurs, such as a change in the farm's ownership. FSA reviews these plans to determine, among other things, the number of recipients who qualify for payments and whether they are actively engaged in farming.
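To make the payment calculation described above concrete, the sketch below applies the formula's structure: payment acres (base acres times the fixed percentage) multiplied by the farm's historical yield and the statutory payment rate. It is a minimal illustration only; the base acres, yield, and per-unit payment rate shown are assumed values, not figures from this report or from statute.

```python
# Illustrative sketch of the direct payment formula described above.
# The 0.83 payment-acre fraction reflects the 83 percent figure cited for
# 2009 through 2011; all other inputs below are assumed values.

def direct_payment(base_acres, payment_yield, payment_rate, acre_fraction=0.83):
    """Direct payment = (base acres x fixed percentage) x yield x payment rate."""
    payment_acres = base_acres * acre_fraction
    return payment_acres * payment_yield * payment_rate

# Example: 500 base acres, an assumed historical yield of 110 bushels per
# acre, and an assumed payment rate of $0.28 per bushel.
print(f"${direct_payment(500, 110, 0.28):,.2f}")  # $12,782.00
```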
To be considered actively engaged in farming, an individual recipient must make significant contributions to the farming operation in two areas: (1) capital, land, or equipment and (2) personal labor or active personal management. An entity, such as a corporation, limited partnership, or trust, is generally considered actively engaged in farming if the entity separately makes a significant contribution of capital, land, or equipment, and its members collectively make a significant contribution of personal labor or active personal management to a farming operation. In 2010, FSA issued a rule stating that members of a legal entity are excepted from the requirement to make contributions of active personal labor or active personal management if (1) at least 50 percent of the interest is held by members that are providing active personal labor or active personal management and (2) total payments, including direct payments, counter-cyclical payments, and ACRE payments, are less than or equal to one "payment limitation"—a statutorily set limit on the value of the payment made to the producer(s). (This two-part test is sketched in the illustrative example below.) FSA's regulations define active personal management to include such tasks as arranging financing for the operation, supervising the planting and harvesting of crops, and marketing crops. For both individuals and entities, their contributions to the farming operation must also be commensurate with their share of the farming operation's profits or losses. To help oversee direct payments, FSA monitors farm operations' land usage through producers' acreage reports, and at the end of the year it conducts a detailed review of a sample of farm operating plans. Specifically, FSA field offices compare selected plans against supporting documentation to help monitor whether farming operations were conducted in accordance with their plans. These end-of-year reviews include an assessment of whether payment recipients met program requirements. FSA selects its sample of farming operations for these reviews on the basis of, among other criteria, the restructuring or formation of a farming operation in the past year and the number of farming operations in which an individual or legal entity is involved. According to FSA officials, the selection process emphasizes farm operations involving six or more producers. From 2003 through 2011, USDA made more than $46 billion in direct payments, which was concentrated among certain counties, among recipients located within 100 miles of farms qualifying for payments, and among certain types of producers. We also found that producers of different qualifying crops planted varying percentages of their base acres in those crops. Cumulatively, almost one-fourth of the total value of direct payments made during this period went to producers who did not, in a given year, grow any of the crop associated with their base acres—as they are allowed to do. According to our analysis of USDA data, more than $46 billion in direct payments were made from 2003 through 2011, with counties in the Midwest and Mississippi River Basin accounting for a large share of the value of payments made and smaller amounts distributed among other counties throughout the United States. In addition, our analysis showed that total payments varied widely by county: in 2011, about 9 percent of counties received less than $250,000 in payments countywide, and about 8 percent of counties received at least $5 million countywide.
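The two-part exception test in FSA's 2010 rule, described above, reduces to a simple conjunction of two conditions. The sketch below is a minimal illustration under stated assumptions: the payment limitation amount is a hypothetical placeholder, since the statutory limit is not given in this report, and the inputs are simplified to a single ownership-share figure and a total-payment figure.

```python
# Minimal sketch of the two-part exception test in FSA's 2010 rule.
# PAYMENT_LIMITATION is a hypothetical placeholder; the actual limit is
# set in legislation and is not stated in this report.
PAYMENT_LIMITATION = 40_000  # assumed value for illustration only

def members_excepted(active_member_share, total_payments):
    """Members of a legal entity are excepted from the labor/management
    contribution requirement only if:
    (1) at least 50 percent of the entity's interest is held by members
        providing active personal labor or active personal management, and
    (2) total payments (direct, counter-cyclical, and ACRE) do not exceed
        one payment limitation.
    """
    return active_member_share >= 0.50 and total_payments <= PAYMENT_LIMITATION

# Example: 60 percent of interest held by active members, $35,000 in payments.
print(members_excepted(0.60, 35_000))  # True -> the exception would apply
```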
Figure 2 shows the distribution of direct payments in 2011, the most recent year for which data are available. With regard to the geographic distribution of payment recipients, according to our analysis of USDA data, from 2008 through 2011, about 97 percent of the value of payments, or about $18.8 billion, went to recipients located within 100 miles of the farm on which their direct payments were based. In addition, for that period, about 1.4 percent of the value, or about $269 million, went to recipients located 300 miles or more from the farm. Furthermore, our analysis shows that cumulatively from 2008 through 2011, 0.56 percent of direct payments, or $109 million, was made to recipients located 800 or more miles from the farm. For the complete results of our analysis on ownership characteristics of land for which direct payments were made, see appendix III. With regard to the ownership characteristics of farms, direct payments may be made to producers with varying degrees of involvement in the farming operation, including individuals or entities that either (1) own and operate the farm (owner-operators), (2) operate but do not own the farm (tenants), or (3) are an owner of the farm (other owners). According to our analysis of USDA data from 2003 through 2011, 86 to 88 percent of acreage for direct-payment-eligible crops was operated by producers who were listed as owner-operators or tenants, while 12 to 14 percent of acreage was operated by producers who were listed as other owners—but not necessarily operators. In addition, we found that from 2003 through 2011, the share of acreage operated by owner-operators decreased and the share operated by tenants increased, while the share operated by other owners was relatively consistent. Specifically, the acreage, including acreage qualifying for direct payments, that was operated by owner-operators decreased from 77 million acres in 2003 (30 percent of all eligible acreage) to 67 million acres (26 percent) in 2011. Acreage operated by tenants increased from 145 million acres in 2003 (56 percent of eligible acreage) to 159 million acres (62 percent) in 2011. Meanwhile, the acreage operated by other owners decreased slightly, from 35 million acres in 2003 (14 percent of eligible acreage) to 30 million acres (12 percent) in 2011. In addition, our analysis identified variation in the ownership characteristics of farms receiving direct payments, depending on crops grown. For example, owner-operators operated 12 percent of acreage, including base acreage, for cotton in 2011 and 44 percent of acreage for oats that year. Also in 2011, tenants operated 78 percent of the acreage for rice and 58 percent of wheat acreage. Other owners operated 5 percent of the acreage for oats and 10 percent of corn acreage that year. For crop-specific analyses for corn, cotton, rice, soybeans, and wheat, see appendix III. Since direct payments allow producers almost complete flexibility in which crops to plant, we analyzed USDA data to determine the type and quantity of crops that producers who received direct payments chose to grow. According to our analysis, from 2003 through 2011, producers planted from 2 to 126 percent of their base acres with the crop associated with their base acres.
For example, over the period, producers with cotton base acres planted 59 percent of their base acres with cotton, whereas producers with soybean base acres planted soybeans on all of their base acres, as well as on additional acres on their farms; in other words, they planted an area equivalent to 126 percent of their base acres with soybeans. Specifically, our analysis showed that from 2003 through 2011, producers with cotton base acres cumulatively planted 100 million acres of their 169 million base acres with cotton, whereas producers with soybean base acres cumulatively planted 593 million acres, including 472 million base acres, with soybeans. For the complete results of our analysis of the type and quantity of crops that producers who received direct payments chose to grow, averaged for years 2003 through 2011, of all crops and by crop type, see appendix IV. In addition, we analyzed USDA data to determine the extent to which producers did not grow any of the crop for which their base acres were allocated—something they are allowed to do. Cumulatively, USDA paid $10.6 billion—almost one-fourth of total direct payments from 2003 through 2011—to producers who did not, in a given year, plant any of the crop for which they had base acres. Specifically, during this period, producers cumulatively did not plant more than 633 million acres with the crops associated with their base acres in a given year. This amounted to an average of 70 million acres each year, or 26 percent of the 268 million base acres, on average, that were eligible for direct payments annually. For the complete results of our analysis of the extent to which producers did not grow any of a crop for which they had base acreage in a given year, by crop, see appendix V. Also according to our analysis of USDA data, about 2,300 farms, or about 0.15 percent of the 1.6 million farms receiving direct payments in 2011, reported all their land as "fallow," meaning that producers did not plant crops of any type on this land, in each of the last 5 years (2007 through 2011), as allowed under the farm bill. These producers received a total of about $2.9 million in direct payments in 2011. Our analysis of USDA data showed that these approximately 2,300 farms, comprising in total about 132,000 acres, were distributed among 402 counties in 40 states. In addition, according to our analysis, one county in Louisiana had the most farms (190) with all their acreage reported fallow from 2007 through 2011; producers on these farms received a total of about $203,000 in direct payments in 2011 for this land. For the results of our analysis, by state, of the number of fallow farms from 2007 through 2011 and the value of direct payments made for such farms in each state, see appendix VI. Figure 3 shows the geographic distribution of the farms that our analysis indicated had all their acreage fallow each year from 2007 through 2011. In addition, according to our analysis of USDA data, 622 farms reported all of their acreage as fallow for each of the previous 10 years, from 2002 through 2011. Those farms were distributed among 178 counties in 28 states. Direct payments generally do not align with the principles significant to integrity, effectiveness, and efficiency in farm bill programs, identified in our April 2012 report, which could be used to guide implementation of the 2012 Farm Bill. These payments align with the principle of being "distinctive," in that they do not overlap or duplicate other farm programs.
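The planting percentages cited above are simply planted acres divided by base acres, so values over 100 percent indicate planting beyond a farm's base acres. A quick sketch using the cumulative 2003 through 2011 cotton and soybean figures cited above, in millions of acres:

```python
# Planting ratio = acres planted with the base-acre crop / base acres.
# Inputs are the cumulative 2003-2011 figures cited above, in millions of acres.
def planting_ratio(planted_acres, base_acres):
    return planted_acres / base_acres

print(f"cotton:   {planting_ratio(100, 169):.0%}")  # ~59% of cotton base acres
print(f"soybeans: {planting_ratio(593, 472):.0%}")  # ~126%: planted beyond base acres
```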
Direct payments generally do not align with the principles significant to integrity, effectiveness, and efficiency in farm bill programs, identified in our April 2012 report, which could be used to guide implementation of the 2012 Farm Bill. These payments align with the principle of being “distinctive,” in that they do not overlap or duplicate other farm programs. However, they do not align with the five other principles. Specifically, (1) direct payments may no longer be relevant, (2) they do not appropriately target benefits, (3) they may no longer be affordable, (4) they may have unintended consequences, and (5) oversight of direct payments is weak. Direct payments were expected to be transitional when first authorized and may no longer be relevant. According to the conference report accompanying the 1996 Farm Bill, production flexibility contract payments—the precursors to direct payments, which were similar in design—were established to help farmers make a transition to basing their planting decisions on market signals rather than on government programs. Accordingly, production flexibility contract payments were scheduled to decrease over time and expire in 2002. Subsequent farm bills, however, including those passed in 2002 and 2008, have continued these payments as “direct payments.” In a press statement released in February 2012, the Chairwoman of the U.S. Senate Committee on Agriculture, Nutrition and Forestry referred to direct payments as “an indefensible program of the past.” In April 2012, this committee’s website posted draft legislation on the reauthorization of agricultural programs through 2017 that proposed eliminating direct payments. In addition, direct payments may no longer be needed to comply with certain aspects of international trade agreements. Proponents of direct payments say that such payments help the United States meet certain commitments under international trade agreements, which set ceilings on government payments classified as trade distorting. Unlike other farm program payments, direct payments do not depend on current market prices or production, so the World Trade Organization generally considers them to be nontrade-distorting, and the United States does not count them against these payment ceilings. In recent years of high crop prices, the United States has not been in danger of meeting or exceeding its limits for trade-distorting payments. For example, in 2009—the most recent year for which the United States notified the World Trade Organization of its use of subsidies—the United States used about $4.3 billion of its $19.1 billion authorized allocation of trade-distorting subsidies. Direct payments do not appropriately target benefits (i.e., distribute benefits consistently with contemporary assessments of need) in three key ways. First, farmers receive direct payments even in years of record farm income. Production flexibility contract payments, the precursors to direct payments, were established after a period in the early 1990s of relatively low farm income. In August 2011, however, USDA reported that all three measures of farm-sector earnings—net farm income, net cash income, and the value of the farm sector’s production of goods and services from farming versus its outlays to nonfarm sectors (i.e., “net value added”)—were forecast to rise more than 20 percent in 2011 over recent historical highs or near-highs. Second, according to USDA, the average income for farm households is higher than that of the average U.S. household. For example, in 2010, average farm household income was 25 percent higher than that of the average U.S. household. Moreover, in 2008, we reported that individuals who receive farm program payments, including direct payments, were more than twice as likely as other tax filers to have higher incomes.
Third, direct payments are concentrated among the largest recipients—based on farm size and income—because the payments are tied to land and paid on a per-acre basis. According to our review of FSA direct payment data, in 2011, the top 10 percent of payment recipients received 51 percent of direct payments, and the top 25 percent of payment recipients received 73 percent of direct payments. In addition, according to USDA, larger farms, including those receiving direct payments, have higher operating profit margins. Specifically, in 2010, farms with $1 million or more in sales had a 24 percent operating profit margin, on average, whereas farms overall had an 8.8 percent operating profit margin, on average. Furthermore, according to USDA data, larger farms, including those receiving larger direct payments, are generally better able financially to cover their debt than smaller farms. For example, according to USDA’s Agricultural Resource Management Survey data for 2010, farms with sales of $1 million or more were more highly leveraged (i.e., they had higher debt-to-asset ratios), but they had higher debt-coverage ratios (i.e., they had more financial capacity to cover interest and principal payments on debt) than “all farms” or farms in smaller economic size classes. Yet, as discussed, it is these larger farms that are receiving the preponderance of direct payments.
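To make the concentration statistic concrete, the following is a minimal sketch, in Python, of how a top-decile share can be computed from per-recipient payment totals; the dollar values are invented for illustration and are not intended to reproduce the 51 and 73 percent results cited above.

import numpy as np

# Hypothetical per-recipient direct-payment totals; only the method mirrors the analysis.
payments = np.array([500, 700, 1_200, 2_500, 3_000, 9_000, 15_000, 25_000, 40_000, 75_000], dtype=float)
payments.sort()  # ascending, so the largest recipients sit at the end of the array

def top_share(values: np.ndarray, fraction: float) -> float:
    """Share of total payments received by the top `fraction` of recipients."""
    k = max(1, int(round(len(values) * fraction)))
    return float(values[-k:].sum() / values.sum())

print(f"top 10 percent of recipients: {top_share(payments, 0.10):.0%} of payment value")
print(f"top 25 percent of recipients: {top_share(payments, 0.25):.0%} of payment value")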
When direct payments were first authorized in 2002, the nation’s annual deficit equaled 1.5 percent of gross domestic product (GDP), and debt was 59 percent of GDP, according to the Office of Management and Budget. In 2011, the deficit was projected to be 10.9 percent of GDP, and debt was projected to be 103 percent of GDP. In July 2003, we testified before the House Committee on Ways and Means about the need to improve the economy, efficiency, and effectiveness of government programs, policies, and activities and to undertake a fundamental reassessment of what government does and how it does it. We stated that this undertaking would require looking at current federal programs in terms of their goals and results and determining whether (1) other approaches might succeed in achieving the goal, (2) taxpayers are getting a good “return on investment” from the program, and (3) the program’s priority is higher or lower today given the nation’s evolving challenges and fiscal constraints. In light of the nation’s difficult fiscal situation and pressure to reduce government spending, the President’s fiscal year 2013 budget proposes eliminating direct payments. In addition, USDA’s Acting Undersecretary for Farm and Foreign Agricultural Services testified before the Senate Committee on Agriculture, Nutrition, and Forestry in March 2012 that eliminating direct payments could save $31.1 billion over 10 years while maintaining other farm programs that target assistance when and where it is most needed. In addition, in April 2012, the Congressional Budget Office estimated that repealing direct payments would save about $24.8 billion from fiscal year 2014 through fiscal year 2018. Studies by USDA have found that direct payments result in higher prices to buy or rent land because in some cases the payments go directly to landowners—raising land values—and in other cases the payments go to tenants, prompting landlords to increase cash rental rates. For example, in June 2009, USDA’s Economic Research Service reported that the primary economic effects of direct payments are increases in producers’ incomes and land values. In this way, direct payments may compound challenges for beginning farmers. We reported in September 2007 that beginning farmers face multiple challenges, including a need for funds to purchase farmland. In this regard, an increase in the price of land as a result of direct payments—or other farm program subsidies—may raise the amount of debt beginning farmers need to incur to buy their own farm or additional farmland. During the course of our work, we identified cases where direct payments support recipients who, according to FSA officials, own farmland that would not be economically viable in the absence of these payments. For example, in 1 county, 190 farms were fallow—they did not grow any crop of any type—for 5 consecutive years, and producers claimed payments for these farms. According to FSA county officials, these recipients are unable to profitably farm their land or lease it to other producers because the land is of poor quality and lacks access to irrigation. An FSA county official from another state said that the producers associated with the 32 farms in that county that were fallow for 5 consecutive years were generally unable to obtain financing for their farming operations and could not profitably farm their land. Nevertheless, these landowners remain eligible for direct payments under a provision of the 2008 Farm Bill known as the “landowner exemption.” Under this exemption, landowners can remain eligible for direct payments as long as the landowners’ interest in their acreage depends directly on the output of that acreage. In practice, therefore, landowners can remain eligible if they (1) operate the land themselves, (2) lease the land for a rent that depends on the production of a crop, or (3) do not lease or operate the land and therefore receive no production-related revenue from it. In effect, however, it appears the landowner exemption allows landowners to receive payments for land that is no longer economically viable for farming.
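To make the breadth of the exemption concrete, here is a hypothetical boolean rendering of the three conditions just described; the parameter names are illustrative, not statutory terms. Read this way, only a landowner who leases the land out for a fixed cash rent fails all three conditions, which is consistent with the observation that fallow, unleased land remains eligible.

def remains_eligible(operates: bool, leases_out: bool, rent_tied_to_production: bool) -> bool:
    """Hypothetical rendering of the three landowner-exemption conditions."""
    if operates:                                 # (1) operates the land themselves
        return True
    if leases_out and rent_tied_to_production:   # (2) lease rent depends on crop production
        return True
    if not leases_out:                           # (3) neither leases nor operates the land
        return True
    return False                                 # only a fixed cash-rent lease fails all three

# A fallow farm that is neither operated nor leased out still qualifies under condition (3).
print(remains_eligible(operates=False, leases_out=False, rent_tied_to_production=False))  # True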
Direct payments may have less potential than other farm programs to distort prices and production, but economic distortions can nonetheless result from these payments. Furthermore, a trade-off exists between being less market distorting—as direct payments are considered to be—and targeting benefits according to need. During the course of our work, we identified several concerns with regard to FSA’s oversight of direct payments: FSA has not developed a systematic process to report on acreage that may no longer be usable for agriculture and therefore ineligible for direct payments; FSA conducts relatively few end-of-year reviews and generally does not complete these reviews within expected time frames; and FSA has not kept data on enforcement. FSA has not systematically reported or corroborated the extent to which land may no longer be eligible for direct payments because it has been converted to nonfarm uses. The 2008 Farm Bill instructed the Secretary of Agriculture to establish procedures to identify such land and, each year, to submit to Congress a report describing the results of USDA’s actions to identify and reduce base acres for land that has been subdivided and developed for nonfarm use, so as to “ensure, to the maximum extent practicable, that payments are received only by producers.” The 2008 Farm Bill uses base acres to determine direct payments, Average Crop Revenue Election (ACRE) payments, and counter-cyclical payments. FSA issued its first report in response to this mandate in September 2011, covering 2009 and 2010. According to this report, about 190,000 acres—about 129,000 in 2009 and 61,000 in 2010—were converted to nonfarm use during this period. However, the report noted that these estimates were likely low, stating that USDA’s periodic Natural Resources Inventory estimated that an average of 440,000 cropland acres were converted to nonfarm uses annually from 1982 through 2007. FSA’s report had several methodological limitations that we identified. For example, FSA relied exclusively on surveying the 50 FSA state offices for information on such conversions. It did not use or corroborate the state offices’ results with other possible sources of information, including geospatial information on land use gathered by USDA’s National Agriculture Imagery Program, which provides geospatial imagery data to support FSA compliance activities. FSA also did not provide its state offices with guidance for collecting information on such conversions. As a result, these offices, and their associated county offices, used a variety of methods to collect this information. For example, FSA officials in one county office said they identified base acreage reductions by consulting records of County Committee meetings. In another case, FSA county office officials said they used Base Acreage Yield Adjustment reports to identify base acreage that was already permanently reduced, and Out of Balance Tracts reports to identify other base acreage that may signal the need for a base acreage reduction. As a result of these varying methods and this consequently unsystematic process, FSA may have underrepresented, in its required report to Congress, the extent to which land may no longer be eligible for direct payments because it has been converted to nonfarm uses. FSA headquarters officials said that, because in the past they were not required to report land converted from agriculture to residential or other nonfarm use, they had not systematically tried to track such land. They also stated that because producers, including direct payment recipients, are required to report planting information each year and certify the accuracy of this information, FSA has been able to identify land subject to base acre reductions manually through such reports. These officials stated that FSA relied on such manual records to report on land that was converted to nonfarm uses in its September 2011 mandated report, and it has begun compiling data for the next report, covering 2011, using the same methodology. However, these officials noted that in October 2011 FSA updated its data collection systems to compile the mandated report covering 2012 base acre reductions through a computerized tracking system. This system includes a reporting code to identify whether the base acre reduction was made because land was converted to nonfarm uses, including residential or commercial uses. All of the FSA county officials we spoke with said that geospatial imagery was very helpful in identifying land that may no longer be usable for agriculture, and some noted this was particularly so as budget constraints have precluded more frequent on-site farm inspections. For example, some of these officials spoke of instances where they knew from geospatial imagery that land had been converted from agricultural use, and the producer had not informed FSA.
Nevertheless, officials said, the National Agriculture Imagery Program did not provide such imagery regularly, and they received updated imagery only every few years, limiting their ability to identify land that may no longer be usable for agriculture—and therefore ineligible for direct payments. An imagery program official stated that the program had received inconsistent funding since its establishment in 2002. As a result, this official said, in 2008 the office began collecting imagery data in 3-year cycles rather than annually, which would better meet program needs. The official stated that three USDA agencies and the Department of the Interior are funding the approximately $40-million-per-year effort. The official added that the imagery program’s own requirements and FSA’s needs continue to call for collecting and reporting data annually, but funding constraints preclude the program from doing so. FSA’s detailed end-of-year reviews, in which FSA officials assess whether direct payment recipients met program requirements, such as being actively engaged in farming, have key weaknesses. Specifically, FSA conducts relatively few end-of-year reviews and generally does not complete these reviews within expected time frames. FSA guidance states that the purpose of end-of-year reviews is to maintain the integrity of payment limitation and payment eligibility provisions by verifying that farming operations were carried out as producers reported on their farm operating plans. One such provision for direct payments is that all payment recipients be actively engaged in farming, which, according to the 2008 Farm Bill, generally includes making a “significant contribution” that is at risk and commensurate with the recipient’s share of profits and losses from the farming operation. Recipients of payments under the ACRE and counter-cyclical payment programs also are required to be “actively engaged” in farming. According to FSA officials, to verify the extent to which producers’ contributions meet these, as well as other, eligibility requirements, FSA selects a judgmental sample of farming operations for review on the basis of, among other criteria, (1) whether the operation has undergone an organizational change in the past year by, for example, adding another entity or partner to the operation and (2) whether the operation receives payments above a certain threshold. FSA officials said that their selection process for end-of-year reviews is designed to direct limited resources toward categories of recipients among whom officials most expect to find wrongdoing, such as fraud or other deliberate misrepresentation of a farming operation. These officials explained that the recipient categories emphasized for review include joint operations, particularly those comprising three or more entities, because such operations offer more potential and incentive for partners to exaggerate their contributions. Producers in farming operations selected for end-of-year review must provide documentation to verify that the information they report on their farm operating plan, including their contributions of land, capital, equipment, labor, and management, is accurate. By reviewing such documentation, FSA officials can determine whether the contributions, in terms of risk and share of profits and losses, made by each participant in a farming operation match the contributions reported for that participant on the farm operating plan.
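The selection criteria described above amount to a simple filter over farming operations. The sketch below illustrates that logic in Python; the field names and payment threshold are invented for the example, since the report does not specify FSA's actual threshold, and FSA applies these criteria through a judgmental rather than an automated process.

from dataclasses import dataclass

@dataclass
class FarmingOperation:
    name: str
    had_org_change: bool    # e.g., added an entity or partner in the past year
    total_payments: float   # annual farm program payments received
    member_count: int       # joint operations with three or more entities get emphasis

PAYMENT_THRESHOLD = 100_000.0  # hypothetical figure; not FSA's actual threshold

def flag_for_review(op: FarmingOperation) -> bool:
    """Apply the review-selection criteria described in the text."""
    return (op.had_org_change
            or op.total_payments > PAYMENT_THRESHOLD
            or op.member_count >= 3)

operations = [
    FarmingOperation("A", had_org_change=False, total_payments=25_000, member_count=1),
    FarmingOperation("B", had_org_change=True, total_payments=40_000, member_count=4),
]
print([op.name for op in operations if flag_for_review(op)])  # ['B']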
FSA officials said that end-of-year reviews are a key means of identifying potentially improper payments. In addition, some FSA county officials said end-of-year reviews were useful in identifying irregularities and fraud and cited cases in their experience where producers returned direct payments determined to have been erroneously disbursed. Nonetheless, we identified two key weaknesses in FSA’s end-of-year review process. First, FSA selects relatively few cases for annual end-of-year review. Our analysis of FSA data showed that in 2008 and 2009, FSA selected for review 0.04 percent of farming operations receiving direct payments. By comparison, for fiscal year 2010, the Internal Revenue Service selected at least 0.7 percent of taxpayers at every income level—an average of 1.1 percent of all taxpayers—for examination. According to FSA headquarters officials, they would like to select additional cases for review, but the selection rate is relatively small because the reviews are resource intensive. Increasing budget constraints and USDA’s announced plans to close some of FSA’s 2,200 county offices and reduce field staffing may further limit the number of cases the agency can select for review in the future. These officials said that, because of resource constraints, they select the sample according to categories of recipients where they most expect to find wrongdoing, and waive categories of recipients, such as landowners and spouses, among whom they least expect to find misrepresentation. However, we found that when FSA waives reviews for some of the cases selected, the agency does not replace them with reviews of other cases, as we recommended in April 2004. At the time, we reported that FSA was not reviewing a valid sample of farm operating plans to reasonably assess the overall level of compliance because its selection methodology did not replace waived cases, resulting in a smaller sample size that might have affected the validity of the sample results. In response to our recommendation, FSA reduced the number of compliance reviews it waived each year but did not act to replace reviews that were waived with new cases. FSA continues to select a small sample of cases for review: in 2008 and 2009, respectively, 23 and 154 selected cases, or just under 6 and 13 percent, were waived, further decreasing the number of farming operations reviewed. Second, FSA often completes end-of-year reviews late with respect to its own expected time frames. According to an FSA official in charge of selecting cases and monitoring the end-of-year review process, FSA headquarters generally selects cases for review within the first 6 months of the following year and generally expects county office staff to perform their assigned reviews within a year of receiving the cases selected. As of September 2011, however, 271 of 380 pending reviews for 2008 (71 percent) were more than 6 months past the expected 18-month completion time frame, which includes the selection, assignment, and conducting of these reviews. In May 2012, FSA reported that as of February 2012, nearly 24 percent of reviews for 2008 were still incomplete. Table 1 summarizes the status of end-of-year reviews for 2008 and 2009, as of September 2011 and February 2012. In addition, in February 2012 FSA reported that county offices in three states—California, Louisiana, and Mississippi—had not completed some of their 2006 or 2007 end-of-year reviews.
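A quick arithmetic check of two figures cited above, with the caveat that IRS examinations and FSA end-of-year reviews are different kinds of reviews, so the rate comparison is only a rough benchmark:

# Overdue share of pending 2008 end-of-year reviews, as of September 2011.
overdue, pending = 271, 380
print(f"{overdue / pending:.0%} of pending 2008 reviews were more than 6 months overdue")  # 71%

# Rough ratio of the review rates cited above (0.7 percent IRS minimum vs. 0.04 percent FSA).
irs_minimum_rate, fsa_selection_rate = 0.007, 0.0004
print(f"IRS minimum examination rate is roughly {irs_minimum_rate / fsa_selection_rate:.0f}x FSA's selection rate")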
FSA officials said they do not regularly collect data on the number of end-of-year reviews completed and pending. FSA officials also said that taking corrective action against payment recipients becomes more difficult as reviews are delayed. For example, with the passage of time, it is more difficult for FSA to collect evidence of potential misrepresentation or fraud, as well as for producers to provide the requested documentation. Furthermore, according to FSA officials, completed end-of-year reviews are needed for USDA’s Office of Inspector General (OIG) to investigate cases of potential fraud or other illegal activity. Federal internal control standards call for agencies to obtain, maintain, and use relevant, reliable, and timely information for program oversight and decision making, as well as for measuring progress toward meeting agency performance goals. In addition, the Office of Management and Budget directs agency managers to take timely and effective action to correct internal control deficiencies. Furthermore, FSA’s handbook on determining eligibility for farm program payments states that “[d]etecting schemes, fraudulent representations, and other equally serious actions of persons and legal entities to circumvent payment eligibility and payment limitation provisions is essential for producer compliance.” The issues we identified in FSA’s end-of-year compliance review process leave the agency with a less effective management oversight tool. For example, in light of these problems, FSA is less able to identify potential fraud, waste, and abuse; avert potentially improper payments; and enforce farm bill provisions and related implementing regulations. USDA does not have data to demonstrate that it is using available enforcement mechanisms against payment recipients found to have misrepresented their farming operation so as to increase their direct payments improperly, and the agency generally has not centrally tracked data on such cases of misrepresentation. Specifically, when asked, FSA officials were unable to provide the number of direct payment cases FSA has referred to OIG for further investigation and potential prosecution by U.S. Attorneys’ Offices, but according to FSA officials, the number of such cases has been small. For example, regarding potential fraud, FSA officials cited only one case that was currently under active litigation. FSA state offices are required to report any known or suspected violations of criminal statutes to OIG for investigation, but according to OIG officials, their investigators will pursue cases of potential fraud only if they anticipate a “good outcome,” that is, a successful prosecution. The potential amount of funds to be recovered is another consideration. According to FSA officials, the number of cases accepted for investigation varies by region. For example, according to these officials, in regions with significant drug crime, such as southern Texas, U.S. Attorneys give priority to drug cases and accept virtually no farm program cases. This situation notwithstanding, OIG and FSA officials said that FSA offices in Texas refer very few compliance and payment limitation cases to OIG. According to OIG officials, depending on the circumstances, FSA may be able to take administrative action against a producer in an attempt to recover inappropriately disbursed funds even if the producer is not prosecuted by state or federal authorities for violations of law, by following its own procedures or consulting with the Office of General Counsel.
However, both OIG and FSA officials said that, in cases of alleged fraud, the payment recipient may not be subject to additional enforcement mechanisms unless prosecuted and convicted. FSA regulations provide that any producer found to have committed fraud may be debarred from receiving further payments for up to 5 years. FSA regulations also provide that any producer found to have engaged in misrepresentation may be debarred from receiving further payments for up to 2 years. However, if FSA does not maintain comprehensive data on payment recipients that may have misrepresented their farming operation, including by name of producer, it is unclear how it can consistently pursue and recover improper payments. FSA headquarters officials stated that most payment recipients are honest and comply with direct payment eligibility requirements, and that the level of enforcement is appropriate. However, the officials could not provide data on compliance. Some FSA county officials expressed concerns about discouraging producers from farming, should a producer be debarred from receiving further payments. They also said that they can and do recover improper payments from producers without pursuing potential prosecution. However, FSA’s reluctance to pursue these cases and enforcement mechanisms could encourage some producers to engage in and profit from submitting false information with little fear of being caught or punished. Moreover, FSA officials acknowledged that the agency lacks comprehensive data on its enforcement actions. Specifically, FSA officials said that the agency does not keep a centralized, national database or list of direct payment recipients found to be at fault for misrepresentation, including recipients convicted of fraud, debarred from future payments, referred to OIG for investigation, or found by county or state FSA offices to have received improper payments by misrepresenting their farming operation—including in cases in which these payments were later recouped. FSA does, however, maintain data at the national level on payments it reduced because it determined a certain producer was ineligible before making the payment. According to these data, in 2011 FSA reduced payments to certain direct payment recipients by almost $20 million for not being actively engaged in farming; by more than $89 million for exceeding payment limitations; by over $37 million for exceeding income limitations; and by $3,393 for fraud. Table 2 summarizes the amount of these reduced payments for 2010 and 2011. Under the Improper Payments Information Act of 2002, as amended, federal agencies are required to estimate the level of improper payments in their programs. In November 2011, USDA reported that its error rate for making direct and counter-cyclical payments was 0.05 percent, which was below its 2010 error rate of 0.96 percent and below its target of 0.40 percent. However, because USDA does not keep comprehensive data on its enforcement actions or the amount of money it recovers after improper disbursements are made, the level of improper payments reported by FSA for the direct payments program may be understated. Direct payments allow producers flexibility in the type and amount of crops to plant by making payments based on historical production trends, rather than current production.
However, the fiscal health of our nation, recent and expected high national budget deficits, and pressures to reduce federal spending mean that every federal dollar should be scrutinized to ensure it is spent efficiently and for the most worthwhile purposes. Maintaining a safety net for farmers is worthwhile, but during times of record-high crop prices and farm incomes, providing payments that do not align with principles significant to integrity, effectiveness, and efficiency in farm bill programs raises questions about the continued need for direct payments. In a March 2011 report, we and others proposed options to reduce or eliminate direct payments. FSA monitors land usage and conducts a detailed review of a sample of farm operating plans at the end of the year to help oversee direct payments and other farm programs, including ACRE and counter-cyclical payments, which require payment recipients to be actively engaged in farming. There have been proposals to eliminate direct payments, but as long as they remain in effect, it is worth noting several weaknesses concerning FSA’s oversight of these programs that our work identified. First, because FSA does not have a systematic process to identify land that may no longer be usable for agriculture—and therefore may no longer be eligible for direct payments, ACRE payments, or counter-cyclical payments—FSA’s reports to Congress may underreport the extent to which land may no longer be eligible for these payments. Second, because FSA does not regularly update the geospatial imagery FSA county offices use to corroborate that direct payments are made only for lands usable for agriculture, FSA could potentially be making payments to individuals and entities that should not be receiving them. Third, because FSA’s process for selecting and performing end-of-year reviews has key weaknesses, FSA is less able to identify potential fraud, waste, and abuse; avert potentially improper payments; and enforce farm bill provisions and related implementing regulations. We acknowledge the budget constraints that, according to FSA officials, make the case for a judgmental sample and limit the number of end-of-year reviews FSA conducts. However, similarly to what we reported in April 2004—that FSA was not reviewing a valid sample of farm operating plans to reasonably assess the overall level of compliance—FSA continues to select a small sample of cases to review and does not complete reviews in a timely manner, exposing the agency and taxpayers to potential waste, fraud, and abuse. While FSA officials state that most producers are honest and comply with eligibility requirements, it is in the interest of all producers, as well as taxpayers, to maintain the integrity of direct payments and other farm program payments. FSA regulations provide enforcement mechanisms for producers found to have engaged in misrepresentation or to have committed fraud, but FSA does not maintain comprehensive data on payment recipients that have misrepresented their farming operation, including data by producer name. As a result, it is unclear how consistently FSA has pursued and recovered improper payments. In sum, as a result of FSA’s decision not to pursue a more comprehensive oversight process—including maintaining comprehensive data on misrepresentation and tracking the referral of cases for enforcement—the number and value of improper payments and program fraud may be underrepresented.
In light of the need to identify potential savings in the federal budget and questions about the continued need for direct payments, Congress should consider eliminating or reducing these payments. To help ensure that direct payments, while they remain in effect, and other farm programs, including ACRE and counter-cyclical payments, are made in a manner consistent with farm bill provisions and related implementing regulations, and to minimize the potential for improper payments, we recommend that the Secretary of Agriculture direct the Administrator of the Farm Service Agency to take the following four actions:

Develop and implement a systematic process to report on land that may no longer be usable for agriculture, as required for annual reporting to Congress.

Ensure the more timely and consistent collection and distribution of geospatial imagery needed to corroborate that payments are made only for lands usable for agriculture.

Consider options within given budget constraints to improve FSA’s end-of-year reviews by selecting a larger sample of cases to review and ensuring that these reviews are completed in a timely manner.

Maintain comprehensive data on misrepresentation and enforcement actions taken nationwide, as needed for management oversight and reporting purposes.

We provided a draft of this report to USDA for review and comment. In written comments, which are reproduced in appendix VII, USDA generally agreed with two of our recommendations and disagreed with two others. USDA also noted our Matter for Congressional Consideration to eliminate or reduce direct payments, stating that the President’s fiscal year 2013 budget recommended eliminating direct payments while maintaining a strong safety net for farmers. Regarding our first recommendation that USDA develop and implement a systematic process to report on land that may no longer be usable for agriculture, as required for annual reporting to Congress, USDA disagreed, stating that it considers its current process to be adequate. Among other points, USDA noted that it already selects a statistical sample of producers for spot checking to determine that all land reported as cropland remained in cropland status for the year the spot check was conducted. Nevertheless, as discussed in this report, only about 2,000 producers (about 0.13 percent of the approximately 1.6 million producers receiving direct payments) could be selected for annual spot checks. Further, in a September 2011 report in response to a congressional mandate, USDA stated that its estimates of base acres converted to nonfarm uses in 2009 and 2010 were likely understated. We also note, as discussed in the report, that USDA’s current procedures to collect these data are subject to methodological limitations and inconsistencies in how its field offices collect these data. For example, USDA relied exclusively on surveying the 50 FSA state offices for information on such conversions and did not use or corroborate the state offices’ results with other possible sources of information, such as geospatial imagery. USDA also did not provide its state offices with guidance for collecting information on such conversions. As a result, these offices, and their associated county offices, used a variety of methods to collect this information.
Given this very small sample and the department’s likely underestimation of the extent of conversions to nonfarm uses, we maintain that development of an improved process is needed for identifying land that may no longer be usable for agriculture. For added clarity, we revised the report to make clear that USDA uses a statistical sample and that its field offices may spot-check other producers if there are concerns. In addition, USDA noted that the 2008 Farm Bill, among other sources, requires producers to file acreage reports on all cropland on the farm. In response to USDA’s comment, we added clarifying language to the report to identify the sources that USDA cites as requiring such reporting. Regarding the second recommendation that USDA ensure the more timely and consistent collection and distribution of geospatial imagery needed to corroborate that payments are made only for lands usable for agriculture, USDA stated that it agrees that geospatial imagery is a useful tool to identify land use changes. It also said, however, that its ability to update this imagery more frequently would require increased funding from Congress. As discussed in the report, USDA already leverages resources from other agencies, such as the Department of the Interior, to help cover the costs of collecting this imagery. Further opportunities may exist to do so. In addition, USDA could consider options to reallocate more funding to geospatial imagery within its existing budget resources. Regarding the third recommendation that USDA consider options within given budget constraints to improve FSA’s end-of-year reviews by selecting a larger sample of cases to review and ensuring that these reviews are completed in a timely manner, USDA disagreed. USDA stated that it concurs that timely, high-quality end-of-year reviews are important; however, it also stated that its current practices already meet this standard. According to USDA, in the early 1990s its Office of Inspector General determined that a judgmental sample completed in USDA headquarters was the most consistent and beneficial in terms of detecting problematic issues and potential compliance problems. In addition, USDA said that, in consideration of efficiency, FSA has made a targeted selection and devoted its limited resources to identifying farming operations considered most likely to have potential payment eligibility and payment limitation compliance issues. However, as discussed in our report, USDA selects very few cases for end-of-year reviews. For example, in 2008 and 2009, only 0.04 percent of operations receiving direct payments were selected. In addition, as noted in the report, these reviews were often not done in a timely fashion as measured by USDA’s own time frames. For example, as of September 2011, 271, or 71 percent, of 380 pending reviews for 2008 were more than 6 months past USDA’s expected completion date. Given this very small sample and the lack of timeliness associated with many of these reviews, we continue to believe that USDA should consider options to increase the number and improve the timeliness of these reviews. To eliminate potential confusion about FSA’s use of a judgmental sample, we revised our third recommendation by removing the words “the quantity and quality” from an earlier draft to clarify our emphasis on improving the scope and timeliness of these reviews.
Regarding the fourth recommendation that USDA maintain comprehensive data on misrepresentation and enforcement actions taken nationwide, as needed for management oversight and reporting purposes, USDA agreed and stated there is value in maintaining data on misrepresentation and enforcement actions. It stated that the development of such a capability has been planned for a number of years but that other projects, such as the implementation of the 2008 Farm Bill and the development and implementation of a robust process for verifying producer compliance with adjusted gross income limits, have taken precedence. We understand that USDA has many competing priorities, but its decision not to pursue a more comprehensive oversight process—including maintaining comprehensive data on misrepresentation and tracking the referral of cases for enforcement—means the number and value of improper payments and program fraud may be underrepresented. We are sending copies of this report to the Secretary of Agriculture, appropriate congressional committees, and other interested parties. The report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or shamesl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VIII. The objectives of our review were to (1) provide information regarding the geographic distribution and ownership characteristics of payment recipients, as well as the dollar amount of direct payments made for land with qualifying acreage and the amount and types of crops grown on qualifying acreage from 2003 through 2011, and (2) examine whether direct payments are aligned with principles significant to integrity, effectiveness, and efficiency in farm bill programs. To conduct this work, we analyzed U.S. Department of Agriculture (USDA) data, interviewed agency officials, reviewed applicable laws, regulations, and guidance, and reviewed and updated past GAO work. Specifically, to provide information about the geographic distribution and ownership characteristics of payment recipients and to determine the dollar amount of direct payments made for land with qualifying acreage, we obtained disaggregated data from USDA’s Farm Service Agency (FSA) indicating the number, amount, and payee for direct payments made from program years 2003 through 2011—that is, from the program’s first full year of operation through the most recent year for which complete program data are available. In particular, we reviewed the Food, Conservation, and Energy Act of 2008 (2008 Farm Bill) to determine which crops are eligible for direct payments and determined that we would include barley, canola, corn, cotton (upland), crambe, flax, mustard, oats, peanuts, rapeseed, rice, safflower, sesame, sorghum, soybeans, sunflower, and wheat in our analysis. Further, we reviewed and evaluated USDA documents for collecting data from direct payment recipients regarding land usage, in particular the Farm Operating Plan for Payment Eligibility Review for individuals and entities, to identify appropriate data elements for use in our analyses.
In particular, we obtained data from USDA’s compliance share file that indicates how producers—whether individuals or entities—are involved and whether they own a particular farm field or area of land for which direct payments were made. Producers report that they either (1) own and operate the farm (“owner-operators”), (2) operate but do not own the farm (“tenants”), or (3) are an owner of the farm (“other owners”). We also obtained disaggregated USDA data indicating the number of base acres and planting history for each farm for which direct payments were made. When analyzing direct payments spending, we assigned each payment to its program year, that is, the year with which the payment was associated, since it is possible for payments to be made after the end of the program year for which they are made. USDA does not collect the zip codes of farms with which direct payments are associated. We therefore obtained data for the centroid point—the geometric center—of the county in which the farm resides and calculated the distance from it to the centroid point of the payment recipient’s zip code. In addition, USDA provided reliable address files for payees from 2008 through 2011. USDA data do not differentiate between the program’s base acres and a farm’s other acres, and producers report one aggregated number of acres for each crop planted. To determine the relative percentage of base acres planted with the base acre crops, we compared the acreage of a farm planted in a particular crop with its base acres of that crop. Because a producer may plant 100 percent of the farm’s base acres in any crop, plus a portion of additional acres on the farm that exceeds the base acre amount, more than 100 percent of base acres may be planted in a certain crop. We assessed the reliability of USDA’s data by (1) performing electronic testing of required data elements, (2) reviewing existing information about the data and the system that produced them, and (3) interviewing agency officials knowledgeable about the data. We determined that the data were sufficiently reliable for the purposes of this report. We used geographic information system (GIS) software to map selected results of our quantitative analyses. We also interviewed FSA officials regarding the results of our data analysis; these officials indicated that they generally found these results to be credible. To examine whether direct payments are aligned with principles significant to integrity, effectiveness, and efficiency in farm bill programs, we reviewed our past work, particularly more recent work that identifies relevant principles to consider for farm bill reauthorization. These principles are relevance, distinctiveness, targeting, affordability, effectiveness, and oversight. The resulting principles and associated key questions may not represent all potential principles that could be considered. We collected additional data where possible to determine how circumstances regarding direct payments may have changed more recently and evaluated direct payments according to the principles identified in our earlier work. We also reviewed our March 2011 report, which discussed observations regarding direct payments.
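A minimal sketch, in Python, of the centroid-to-centroid distance calculation described above, assuming the haversine great-circle formula (the report does not specify the exact formula GAO used). The coordinates are illustrative, and the distance bands approximate, as disjoint intervals, the overlapping thresholds reported earlier (within 100 miles, 300 miles or more, 800 miles or more).

from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in miles between two (latitude, longitude) points."""
    earth_radius_miles = 3958.8
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * earth_radius_miles * asin(sqrt(a))

def distance_band(miles: float) -> str:
    """Classify a farm-to-recipient distance into approximate reporting bands."""
    if miles < 100:
        return "within 100 miles"
    if miles < 300:
        return "100 to 300 miles"
    if miles < 800:
        return "300 to 800 miles"
    return "800 miles or more"

county_centroid = (32.70, -91.40)     # hypothetical farm-county centroid
payee_zip_centroid = (29.95, -90.07)  # hypothetical recipient zip-code centroid
d = haversine_miles(*county_centroid, *payee_zip_centroid)
print(round(d), distance_band(d))     # about 206 miles -> "100 to 300 miles"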
In applying the identified principles to direct payments, we considered information on the program’s original purpose; its potential, if any, to duplicate payments under other programs; who benefits from the program; the nation’s deficit and debt challenges; the program’s potential, if any, to have unintended consequences; and program oversight measures taken by FSA. Although these principles were developed to inform farm bill reauthorization, based on our past work, we believe them to be significant to integrity, effectiveness, and efficiency in farm bill programs since 2003, when much of the 2002 Farm Bill was implemented. Further, USDA’s Office of Inspector General shares this point of view and issued a companion report to our report using the same principles and based on its own past work for this time frame. We visited FSA county offices in two states: in Arizona, we visited offices in Maricopa, Pima, and Pinal counties, and in Louisiana, we visited offices in Madison, Morehouse, and Richland parishes. Among the criteria we used to select these counties were whether farms in the county had not been planted with crops in recent years and whether the county is experiencing rapid urban development. The information gathered at these locations cannot be generalized to the experience of all FSA county offices. We conducted this performance audit from August 2011 to June 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In April 2012, we identified certain principles as applicable to Congress’s deliberations for the 2012 Farm Bill and significant to the integrity, effectiveness, and efficiency of farm bill programs, such as direct payments. Specifically, we identified these principles to be relevance, distinctiveness, targeting, affordability, effectiveness, and oversight. Key questions associated with these principles are shown below. Our list of principles may not represent all potential principles that could be considered.

1. Relevance: Does the program concern an issue of national interest? Is the program consistent with current statutes and international agreements? Have the domestic and international food and agriculture sectors changed significantly, or are they expected to change, in ways that affect the program’s purpose?

2. Distinctiveness: Is the program unique and free from overlap or duplication with other programs? Is it well coordinated with similar programs?

3. Targeting: Is the program’s distribution of benefits consistent with contemporary assessments of need?

4. Affordability: Is the program affordable, given the nation’s severe budgetary constraints? Is it using the most efficient, cost-effective approaches?

5. Effectiveness: Are program goals clear, with a direct connection to policies, resource allocations, and actions? Does the program demonstrate measurable progress toward its goals? Is it generally free of unintended consequences, including ecological, social, or economic effects? Does the program allow for adjustments to changes in markets?

6. Oversight: Does the program have mechanisms, such as internal controls, to monitor compliance and help minimize fraud, waste, and abuse in areas where these are most likely to occur?
Data from USDA’s compliance share file indicate how producers—whether individuals or entities—are involved and whether they own a particular farm field or area of land for which direct payments were made. Producers report they either (1) own and operate the farm (“owner-operators”), (2) operate but do not own the farm (“tenants”), or (3) are an owner of the farm (“other owners”). Our analysis of USDA data found that ownership characteristics of land for which direct payments were made have changed from 2003 through 2011, as shown in table 3. Moreover, we found that the ownership characteristics regarding the operation of land for which direct payments were made varied according to which crops were grown. Table 4 shows these variations by crop, for all covered crops, corn, cotton, oats, rice, soybeans, and wheat in 2011. Our analysis of USDA data found variation in the extent to which producers grew the crop associated with their base acres. Table 5 depicts the results for each crop eligible for direct payments, as well as the totals for all eligible crops, from 2003 through 2011. Our analysis of USDA data found that some producers chose not to grow any of the crop associated with their base acres in a given year—as they are allowed to do. Table 6 depicts the results of this analysis, by crop, and the corresponding value of direct payments made for each crop. According to our analysis of USDA data, 2,327 farms, or about 0.15 percent of the 1.6 million farms receiving direct payments in 2011, reported all their land as “fallow” from 2007 through 2011. That is, producers did not plant any crops of any type on this land in any year during this 5-year period, as they are allowed to do in accordance with planting flexibility rules. Table 7 presents the results of our analysis, by state, of the number of such fallow farms in each state, including the direct payments received by producers on these farms in 2011. In addition to the individual named above, James R. Jones, Jr. (Assistant Director); Alisa Beyninson; Ellen W. Chu; Michael Kendix; and Michelle Munn made key contributions to this report. Important contributions were also made by Benjamin Bolitzer, Kevin Bray, Tom Cook, Melinda Cordero, Greg Dybalski, Barbara El Osta, Rebecca Makar, John Mingus, Susan Offutt, Anne Rhodes-Kline, Ardith A. Spence, Kiki Theodoropoulous, and Michelle K. Treistman. Farm Bill: Issues to Consider for Its Reauthorization, GAO-12-338SP. Washington, D.C.: April 24, 2012. Crop Insurance: Savings Would Result from Program Changes and Greater Use of Data Mining, GAO-12-256. Washington, D.C.: March 13, 2012. Follow-up on 2011 Report: Status of Actions Taken to Reduce Duplication, Overlap, and Fragmentation, Save Tax Dollars, and Enhance Revenue, GAO-12-453SP. Washington, D.C.: February 28, 2012. 2012 Annual Report: Opportunities to Reduce Duplication, Overlap and Fragmentation, Achieve Savings, and Enhance Revenue, GAO-12-342SP. Washington, D.C.: February 28, 2012. Opportunities to Reduce Potential Duplication in Government Programs, Save Tax Dollars, and Enhance Revenue, GAO-11-318SP. Washington, D.C.: March 1, 2011. USDA Crop Disaster Programs: Lessons Learned Can Improve Implementation of New Crop Assistance Program, GAO-10-548. Washington, D.C.: June 4, 2010. Crop Insurance: Opportunities Exist to Reduce the Costs of Administering the Program, GAO-09-445. Washington, D.C.: April 29, 2009. Federal Farm Programs: USDA Needs to Strengthen Controls to Prevent Payments to Individuals Who Exceed Income Eligibility Limits, GAO-09-67.
Washington, D.C.: October 24, 2008. Agricultural Conservation: Farm Program Payments Are an Important Factor in Landowners’ Decisions to Convert Grassland to Cropland, GAO-07-1054. Washington, D.C.: September 18, 2007. Beginning Farmers: Additional Steps Needed to Demonstrate the Effectiveness of USDA Assistance, GAO-07-1130. Washington, D.C.: September 10, 2007. USDA Needs to Strengthen Management Controls to Prevent Improper Payments to Estates and Deceased Individuals, GAO-07-1137T. Washington, D.C.: July 24, 2007. Federal Farm Programs: USDA Needs to Strengthen Controls to Prevent Improper Payments to Estates and Deceased Individuals, GAO-07-818. Washington, D.C.: July 9, 2007. Crop Insurance: Continuing Efforts Are Needed to Improve Program Integrity and Ensure Program Costs Are Reasonable, GAO-07-944T. Washington, D.C.: June 7, 2007. Crop Insurance: Continuing Efforts Are Needed to Improve Program Integrity and Ensure Program Costs Are Reasonable, GAO-07-819T. Washington, D.C.: May 3, 2007. Crop Insurance: More Needs to Be Done to Reduce Program’s Vulnerability to Fraud, Waste, and Abuse, GAO-06-878T. Washington, D.C.: June 15, 2006. Crop Insurance: Actions Needed to Reduce Program’s Vulnerability to Fraud, Waste, and Abuse, GAO-05-528. Washington, D.C.: September 30, 2005. Farm Program Payments: USDA Should Correct Weaknesses in Regulations and Oversight to Better Ensure Recipients Do Not Circumvent Payment Limitations, GAO-04-861T. Washington, D.C.: June 16, 2004. Crop Insurance: USDA Needs to Improve Oversight of Insurance Companies and Develop a Policy to Address Any Future Insolvencies, GAO-04-517. Washington, D.C.: June 1, 2004. Farm Program Payments: USDA Needs to Strengthen Regulations and Oversight to Better Ensure Recipients Do Not Circumvent Payment Limitations, GAO-04-407. Washington, D.C.: April 30, 2004. Federal Budget: Opportunities for Oversight and Improved Use of Taxpayer Funds, GAO-03-1030T. Washington, D.C.: July 17, 2003.
Through one facet of the farm safety net, USDA provides farmers and other producers with fixed annual payments, called direct payments, based on their farms’ historical crop production. Direct payments do not vary with crop prices or crop yields. In March 2011, GAO reported on observations and options regarding direct payments and suggested to Congress that they be eliminated or reduced. GAO was asked (1) to provide information regarding the geographic distribution and ownership characteristics of payment recipients, as well as the dollar amount of direct payments made for farms with acreage that qualified, and the amount and types of crops grown on such acreage, for years 2003 to 2011, and (2) to examine whether direct payments are aligned with principles significant to integrity, effectiveness, and efficiency in farm bill programs. To conduct this work, GAO analyzed USDA data and interviewed agency officials. From 2003 through 2011, the U.S. Department of Agriculture (USDA) made more than $46 billion in direct payments to farmers and other producers. These producers planted varying percentages of the acres that qualified for payments based on their historical plantings and yields and designated payment rates (qualifying acres). Cumulatively, USDA paid $10.6 billion—almost one-fourth of total direct payments made from 2003 through 2011—to producers who did not, in a given year, grow the crop associated with their qualifying acres, which they are allowed to do. About 2,300 farms (0.15 percent of farms receiving direct payments) reported all their land as “fallow,” with producers planting no crops on this land in each of the last 5 years, from 2007 through 2011; in 2011, these producers received almost $3 million in direct payments. Direct payments generally do not align with the principles significant to integrity, effectiveness, and efficiency in farm bill programs that GAO identified in an April 2012 report. These payments align with the principle of being “distinctive,” in that they do not overlap or duplicate other farm programs. However, direct payments do not align with the five other principles. Specifically, they do not align with the following principles:

Relevance: When the precursors to direct payments were first authorized in 1996 legislation, they were expected to be transitional, but subsequent legislation passed in 2002 and 2008 has continued these payments as direct payments. However, in April 2012, draft legislation for reauthorizing agricultural programs through 2017 proposed eliminating direct payments.

Targeting: Direct payments do not appropriately distribute benefits consistent with contemporary assessments of need. For example, they are concentrated among the largest recipients based on farm size and income; in 2011, the top 25 percent of payment recipients received 73 percent of direct payments.

Affordability: Direct payments may no longer be affordable given the United States’ current deficit and debt levels.

Effectiveness: Direct payments may have unintended consequences. Direct payments may have less potential than other farm programs to distort prices and production, but economic distortions can result from these payments. For example, GAO identified cases where direct payments support recipients who USDA officials said own farmland that is not economically viable in the absence of these payments.

Oversight: Oversight of direct payments is weak.
With regard to oversight, USDA has not systematically reported on land that may no longer be eligible for direct payments because it has been converted to nonfarm uses, as required for annual reporting to Congress. In addition, GAO identified weaknesses in USDA’s end-of-year compliance review process. For example, USDA conducts relatively few reviews and generally does not complete these reviews within expected time frames. Continuing to provide payments that generally do not align with principles significant to integrity, effectiveness, and efficiency in farm bill programs raises questions about the purpose of and need for direct payments. Congress should consider eliminating or reducing direct payments. GAO also recommends that USDA take four actions to improve its oversight of direct payments, including developing a systematic process to report on land that may no longer be usable for agriculture and considering ways to increase the number of cases selected for end-of-year reviews and to complete these reviews in a timely manner. USDA generally agreed with two of GAO’s recommendations and disagreed with two others, stating that it believes its current processes or practices are adequate. GAO continues to believe that it is important for USDA to take the recommended actions.
Unlike most American workers, when railroad workers are injured on the job, they are not covered by state no-fault workers’ compensation insurance systems. Instead, they must seek to recover their losses from the railroads under the provisions of the Federal Employers’ Liability Act (FELA). Under FELA, injured workers must either negotiate a settlement with the railroad or file a lawsuit against the railroad to recover their losses. FELA allows injured workers to recover noneconomic damages, such as pain and suffering, in addition to economic damages, such as medical expenses and lost wages. In contrast, the benefits paid under no-fault workers’ compensation systems are largely limited to medical expenses and lost wages. Under FELA, if a lawsuit is filed, workers must show negligence on the part of the employer; under no-fault systems, issues of negligence are not a factor. Railroad management’s and labor’s opinions differ over how well FELA is working. Management, which favors replacing FELA, believes that FELA creates an adversarial environment between the railroads and their employees and is unnecessarily costly. On the other hand, railroad labor believes that FELA is working well and allows injured employees to receive better compensation for their injuries than they would under no-fault alternatives. Labor also believes that FELA provides railroads with an extra incentive to operate safely. Compensation for railroad workers injured on the job is governed by the provisions of FELA. If negotiations between an injured worker and a railroad fail to result in a settlement, then the worker can sue to recover both economic damages and noneconomic damages. In contrast, most American workers are covered by state workers’ compensation systems that are essentially no-fault insurance systems. Although compensation under these systems varies from state to state, the benefits are largely limited to economic damages—lost wages, medical expenses, and rehabilitation costs. There are also two federally administered no-fault workers’ compensation systems. Civilian federal employees are covered by the Federal Employees’ Compensation Act, and employees in the maritime industry are covered by the Longshore and Harbor Workers’ Compensation Act. FELA was enacted in 1908, at a time when railroads were the largest employer in the United States and rail work was particularly hazardous. Prior to the act’s passage, injured railroad workers had difficulty recovering losses resulting from workplace injuries. Under the common-law doctrine of negligence, railroads often avoided paying compensation for on-the-job injuries by arguing, for example, that employees assumed the risk of injury at the time they accepted employment or that an injury had been caused by a fellow employee. At about the same time, efforts were underway in various states and at the federal level to enact employers’ liability legislation that would limit these defenses and increase employers’ liability for workplace injuries. In an effort to better protect workers against financial loss and to make the railroads more accountable and responsible for work-related injuries, FELA limited the railroads’ defenses against liability for compensating injured workers. As such, it provided railroad workers with more protection than other employer liability laws of the time. FELA covers virtually all railroads operating in interstate service, including the freight railroads, the National Railroad Passenger Corporation (Amtrak), and most commuter railroads.
Under the act, injured workers can seek recovery of all their losses, including economic losses, such as actual and future wage losses, and noneconomic losses, such as pain and suffering. If negotiations between a railroad and an employee do not produce a settlement, employees can seek recovery of their losses in a state or federal court. Should a lawsuit be filed, an employee must show that the railroad was negligent in order to recover damages. However, an employee’s recovery for losses might be reduced to the extent that the employee’s own negligence caused an injury, and in some instances, the employee could receive nothing. As a result, injured workers may not recover all of their losses, and some workers might not recover any. In addition to compensation under FELA, injured employees may also be eligible for retirement benefits, sickness benefits, and disability annuities from the Railroad Retirement Board. In 1994, the railroads paid about $1.2 billion in FELA costs, and nearly 75 percent of all FELA injury claims for the large railroads (excluding the occupational illnesses of hearing loss and asbestosis) were settled between the railroads and the injured employees without a lawsuit. While the total number of injury claims has declined since 1990, the number of lawsuits has remained relatively stable at about 3,100 cases per year. (See table 1.1.) Over the same period, railroad employment declined from 296,000 to 267,000. The average payout per negotiated claim increased from about $24,000 in 1990 to about $34,000 in 1994, while the average payout per lawsuit remained relatively stable at about $160,000. (See table 1.2.) In contrast to railroad workers, workers in most other industries are covered by state no-fault compensation systems. Workers’ compensation legislation was initially enacted by most state legislatures in the early 20th century. One of the principal goals of this legislation was to provide injured workers with adequate benefits while limiting employers’ liability to compensating workers only for their lost wages and medical costs. Payments were to be prompt and predetermined to relieve employees and employers of uncertainty and eliminate the need to litigate the claims. The benefits available under no-fault compensation programs depend on the nature and extent of an injury. For less serious injuries, only medical benefits might be paid. For more serious injuries or illnesses, in addition to medical benefits, an employee might receive wage-loss benefits, vocational rehabilitation, or “scheduled” benefits—for injuries resulting in permanent impairments, such as the loss of a limb or a bodily function. Each state sets its own benefit levels, and benefits vary considerably from state to state. Two groups of employees are covered by federally administered no-fault systems. The Federal Employees’ Compensation Act (FECA) covers federal civilian employees, and the Longshore and Harbor Workers’ Compensation Act (LHWCA) covers those in the maritime industry. Enacted in 1916, FECA covers more than 3 million federal civilian employees and authorizes the federal government to compensate employees when they are temporarily or permanently disabled as a result of an injury or illness sustained while performing their duties. The Department of Labor’s Office of Workers’ Compensation Programs administers this program. Disputes may be handled in one of the Labor Department’s district offices or by the Branch of Hearings and Review.
Appeals can also be made to the Department’s Employees’ Compensation Appeals Board. FECA cases cannot be appealed to a court. Enacted in 1927, LHWCA covers about 500,000 longshore workers for disability due to a job-related injury or occupational disease occurring on the navigable waters of the United States or in adjoining shore areas. The Department of Labor also administers this program. Disputes are handled informally in one of the Labor Department’s district offices or before the Department’s Office of Administrative Law Judges or the Benefits Review Board. Unlike FECA cases, LHWCA cases may be appealed to a federal appeals court. There are important differences between FELA and no-fault compensation systems. First, both state and federal workers’ compensation systems cover an employee’s work-related injury regardless of negligence on the part of the employer or employee by imposing strict liability on an employer for compensating most economic damages suffered by injured workers. However, they do not allow compensation for noneconomic damages. Second, benefits under no-fault systems are generally paid as losses occur, rather than in a lump sum as they are under FELA. While some states permit lump-sum payments, at least one state—Texas—has essentially banned them. Under FECA and LHWCA, compensation continues as long as a disability continues. Both FECA and LHWCA authorize higher benefit levels than most state workers’ compensation systems. While many no-fault claims are handled directly between employees and their employers or insurance companies, no-fault systems are not free from dispute or litigation. As the National Research Council reported in 1994, disputes may arise over issues such as eligibility for benefits, the level of benefits, and the readiness of workers to return to work. Disputes may also arise over the permanency of injuries. For the most part, adjudicative bodies within a state (or the Labor Department, in the case of FECA and LHWCA) and the judicial system handle the resolution of these disputes. In recent years, litigiousness has tended to increase in no-fault compensation systems. Some states have also been concerned about increasing medical costs in workers’ compensation claims, and some (such as California and Texas) have made efforts to control these costs. Railroad management and labor disagree over how well FELA is working and whether it should be replaced or changed. Although the railroad industry has undergone substantial change over the years, including technological improvements designed to improve safety, the nearly 90-year-old system for compensating injured railroad workers has changed little. In general, railroad management is dissatisfied with FELA and believes it should be replaced or substantially changed. In particular, management believes that because FELA involves issues of negligence, it creates an adversarial environment between railroads and their employees. Management also believes that FELA is unnecessarily costly. In addition, management sees little reason why railroads should be treated differently from other industries in terms of workers’ compensation. Railroad labor, on the other hand, believes that FELA is working well and should not be replaced or changed. In labor’s view, FELA is a model system that fairly compensates injured workers and provides an incentive for railroads to operate safely.
Labor believes the problem is not that FELA provides workers with excessive compensation but that no-fault compensation systems generally provide too little. FELA has remained relatively unchanged in its nearly 90-year history despite substantial changes in the industry. Enhancements in braking and signaling, for example, have improved the safety of train operations. The Association of American Railroads (AAR), the trade association of the major railroads, issued a report criticizing FELA and claiming that it has adversely affected the railroad industry. That report included data showing that, as railroad employment has declined and the number of injuries has fallen since 1981, FELA payouts have increased. In 1994, railroads paid about $4,200 per employee in FELA costs, up from about $2,250 per employee in 1985. AAR believes that FELA needs to be replaced. Many in railroad management believe that FELA is no longer appropriate to the modern railroad operating environment. Among the problems with FELA cited by railroad management are (1) the adversarial environment created between employers and employees because FELA requires the parties to establish fault, (2) the high degree of involvement by attorneys in FELA cases, (3) the unpredictability of FELA costs, (4) the practice of filing FELA lawsuits in court jurisdictions that have historically rendered judgments favorable to the plaintiffs, and (5) the high administrative costs. In general, railroad management questions why the rail industry must be treated differently from other industries regarding injury compensation. The National Association of Railroad Trial Counsel, an organization of 1,200 lawyers who provide legal services to railroads, believes that because both the right to recover and the amount of the recovery depend on assigning fault, FELA not only inhibits good employer-employee relations but also frustrates attempts to determine the causes of accidents. In contrast to management’s view, railroad labor believes that FELA is effective and should not be replaced or modified. Railroad labor believes that FELA offers the railroads incentives to operate safely and gives workers the opportunity to recover full compensation for their injuries. Railroad labor does not believe that FELA should be replaced with a no-fault compensation system like state workers’ compensation because, in labor’s view, injured workers would not be adequately compensated under a no-fault system. Railroad labor also takes issue with the criticisms of FELA voiced by railroad officials. For example, railroad labor points out that FELA is not a particularly litigious system because over 75 percent of the FELA cases are settled without any third-party intervention. Moreover, in labor’s view, FELA provides the railroads with an incentive to operate safely; if they do so, they could lower their injury compensation costs. Attorneys representing railroad labor also took issue with railroad management’s belief that FELA lawsuits are filed in jurisdictions favorable to plaintiffs. In their view, the practice of selecting court venues favorable to the plaintiffs to try FELA cases is no longer an issue because most states have acted to limit where suits can be filed. Concerned about the cost of FELA, the Chairwoman, Subcommittee on Railroads, House Committee on Transportation and Infrastructure, asked us to identify the implications for railroads’ costs and railroad workers of (1) replacing FELA with a no-fault compensation system or (2) modifying FELA.
We were also asked to assess how FELA particularly affects the small railroads and determine the availability and affordability of insurance to protect against large FELA payouts. As agreed with the requester’s office, we focused our analysis on comparisons between FELA, FECA, and LHWCA. This approach was taken to avoid duplicating work previously reported by the National Research Council that compared FELA with state workers’ compensation programs. To identify the cost and other effects of replacing FELA with a no-fault system with FECA- and LHWCA-level benefits, we used a computerized cost model developed by Mercer Management, Inc., for AAR. This model and the assumptions we used in performing our cost analysis are described in appendix I. As input for our analysis, we obtained information on all of the FELA claims closed in 1994 by four large railroads—Burlington Northern, CSX, Norfolk Southern, and Union Pacific. These railroads employed about 60 percent of all employees at large railroads in 1994 and also had previously participated in a 1991 unpublished study of FELA by AAR. To examine how the administrative and dispute resolution mechanisms of no-fault compensation systems compare with those of FELA, we reviewed data from the Department of Labor on the FECA and LHWCA programs. We reviewed similar information on the California, Illinois, Nebraska, Pennsylvania, and Texas workers’ compensation systems. We selected these states because they had the largest number of freight railroad employees as of March 1995. Finally, we analyzed information from the Federal Railroad Administration to determine the number of lost workdays resulting from on-the-job injuries by the type of railroad. To evaluate the cost and other impacts of modifying FELA, we examined a number of proposals selected on the basis of discussions with officials at AAR and with the requester’s office. We used data on claims closed under FELA in 1994 provided by the four railroads mentioned in the above paragraph to evaluate the financial impact of capping noneconomic damages under FELA. This information identified the number of claims that could have been affected by a cap and the dollar value of these claims. To analyze the impact of placing a cap on plaintiffs’ attorneys’ fees under FELA, we interviewed officials at selected railroads and obtained the views of railroad labor organizations. We also reviewed reports prepared by the Workers’ Compensation Research Institute—a nonpartisan, not-for-profit organization that conducts research on workers’ compensation issues. To assess the use of arbitration, we interviewed officials from selected railroads and obtained information from the Federal Judicial Center on the use of arbitration in FELA cases in federal courts. The National Center for State Courts provided us with information on the use of arbitration in state courts. To evaluate the proposal to limit the jurisdictions where FELA cases might be tried, we reviewed state venue provisions in the 10 states with the most railroad employees in 1995, interviewed officials at selected railroads and attorneys who handle FELA cases, and obtained written comments from railroad labor organizations. To assess how FELA affects the small railroads compared with the large railroads, we designed a questionnaire to obtain cost and other information from the small railroads. 
After pretesting the questionnaire with officials from seven railroads, we surveyed 560 small railroads operating in the United States and asked them about their experience with FELA in 1994. To determine the universe, we used AAR’s Profiles of U.S. Railroads, 1994 Edition and Supplement, a compilation of information on all railroads offering freight service in 1993, and the July 1995 membership list of the American Short Line Railroad Association. We received 437 responses, for a response rate of 78 percent. The employee hours of the respondents to our survey represented 93 percent of the employee hours worked on the small railroads in 1994. The results of our survey of the small freight railroads are presented in appendix II. We also requested information on 1994 FELA claims and costs from 16 railroads identified by the American Public Transit Association as offering commuter service as well as from Amtrak. We received data from 12 commuter railroads and Amtrak. Information on these railroads’ FELA settlements and costs can be found in appendix III. The organizations we contacted in the course of our review are listed in appendix IV. In addition, we received assistance from a consultant, Mark Dayton, who was the Study Director for the National Research Council’s 1994 study of FELA. Our work was conducted from June 1995 through July 1996 in accordance with generally accepted government auditing standards. We provided the Departments of Transportation and Labor with copies of a draft of this report. We met with officials from these agencies, including the Chief of the Industry Finance Staff at the Department of Transportation’s Federal Railroad Administration, and the Deputy Director, Division of Federal Employees’ Compensation and the Director, Division of Longshore and Harbor Workers’ Compensation at the Department of Labor. The Department of Transportation officials said they had no reason to disagree with the contents of the report and made no comments. The Department of Labor officials provided us with technical comments on the FECA and LHWCA programs, which we have incorporated where appropriate. Railroad management advocates replacing FELA with a no-fault compensation system, in part because of a belief that a no-fault system would be less costly. Whether replacing FELA with a nationwide no-fault system with FECA- or LHWCA-level benefits would reduce railroads’ injury compensation costs depends on many factors. Prime among these is the number of railroad workers who are permanently disabled and are unable to return to work at their preinjury wages. Some injured railroad workers leave the railroad after receiving their FELA settlement, but railroad management believes that some of these workers are capable of returning to work and, therefore, would not receive permanent disability payments under a no-fault compensation system. However, the number of such workers is not known. The higher the proportion of this group of injured workers that can return to work at their preinjury wages, the higher the probability that railroads’ injury compensation costs would be reduced under a no-fault system. A no-fault system could reduce railroads’ administrative costs by eliminating the need to investigate negligence and to assess noneconomic damages. However, the time it takes to resolve claims that are contested under no-fault systems might not differ much from what it is under FELA. 
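The cost comparisons that follow rest on present-value arithmetic: FELA pays a lump sum at settlement, while FECA and LHWCA pay wage-replacement benefits for as long as a disability continues, so streams of future benefits must be discounted before they can be compared with lump-sum awards. Our estimates come from the Mercer Management cost model described in appendix I, which is not reproduced here; the following minimal sketch, using entirely hypothetical figures, illustrates only the discounting step (the simulation used a 10-percent rate).

# Minimal sketch (not the Mercer model): present value of a level annual
# no-fault wage-replacement benefit, for comparison with a FELA lump sum.
# The benefit amount and durations below are hypothetical.

def present_value(annual_benefit: float, years: int, rate: float) -> float:
    """Present value of a level annual benefit paid at the end of each year."""
    return sum(annual_benefit / (1 + rate) ** t for t in range(1, years + 1))

benefit = 30_000  # hypothetical annual wage-loss benefit
rate = 0.10       # 10-percent discount rate, as used in the report's simulation
for years in (5, 15, 30):
    pv = present_value(benefit, years, rate)
    print(f"{years:>2} years of ${benefit:,} per year -> present value ${pv:,.0f}")

As the output makes clear, a worker who remains totally disabled for decades receives a benefit stream worth far more in present-value terms than one who returns to work within a few years, which is why the disability assumption dominates the comparison that follows.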
One of the most important factors in determining the cost differences between FELA and a no-fault compensation system is the number of railroad workers who are permanently disabled by on-the-job injuries. On the basis of our analysis of FELA claims at four large railroads, the lower this number is, the greater the likelihood that the railroads’ compensation costs would be reduced under a no-fault compensation system. Under no-fault compensation systems, when injured workers recover and return to work at their preinjury pay level, their wage compensation benefits cease. In addition, under a no-fault compensation system, those workers who return to work would likely receive less than they would have under FELA because they would be compensated only for economic damages and not for noneconomic damages as they could have been under FELA. Finally, while it is difficult to estimate precisely the impact on death benefits of replacing FELA with a no-fault system, the cost difference would likely be small because death benefits are a relatively small portion of the total compensation outlays. Replacing FELA with a no-fault system with FECA- or LHWCA-level benefits would reduce the railroads’ injury compensation costs only if many of the workers who currently leave a railroad after receiving their FELA settlement are physically capable of returning to work. Under a no-fault compensation system, benefits end or are reduced once an injured worker returns to work or takes another job. If those injured railroad workers who did not return to work under FELA were so severely injured that they could not return to any work, then under the no-fault alternative, they would receive permanent total disability payments as long as their total disability continued. The present value of this amount could be considerably greater than the lump-sum payment that a worker actually accepted under FELA. Officials from several railroads told us that once a settlement is made and an employee leaves a railroad, they do not keep information on any subsequent employment of these individuals. However, officials from several railroads believe that at least some of the workers who accept a FELA settlement and leave a railroad are physically able to return to the workforce. For the four large railroads in our analysis, we estimate that if all of the workers injured on the job who left the railroad after taking a FELA settlement were able to return to work, the railroads’ overall injury compensation costs in 1994 under either FECA- or LHWCA-level benefits would have been about one-third what they were under FELA. (See fig. 2.1.) Under FECA, we estimate that injury compensation costs would have been $168 million and that under LHWCA, they would have been $149 million, instead of the $479 million actually paid. But if all of these workers were permanently and totally disabled, we estimate that these railroads’ injury compensation costs would have been about one-third higher than they were under FELA—$650 million under FECA and $609 million under LHWCA. As the number of injured railroad workers who are permanently disabled declines, the estimated total compensation costs decline. Conversely, as the number of railroad workers who are permanently disabled increases, estimated compensation costs increase under the no-fault alternatives.
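The relationship depicted in figure 2.1 is, in essence, an interpolation between the two bounding cases just described, so the break-even share of permanently disabled workers can be approximated directly from the endpoint estimates. The short sketch below does this arithmetic using only the dollar figures cited above and a simplifying assumption, which is ours, that costs scale linearly between the endpoints; it reproduces the 65-percent FECA break-even point reported from figure 2.1 and comes within about 2 percentage points of the 70-percent LHWCA figure, with the residual difference attributable to rounding in the cited estimates.

# Break-even share of permanently and totally disabled workers at which
# estimated no-fault costs equal the four railroads' actual 1994 FELA costs.
# Endpoints are the figures cited in the text, in millions of dollars;
# linearity between them is an assumption made for illustration.

FELA_ACTUAL = 479  # actual 1994 FELA payments

endpoints = {
    "FECA":  (168, 650),  # (all workers return to work, all permanently disabled)
    "LHWCA": (149, 609),
}

for system, (low, high) in endpoints.items():
    share = (FELA_ACTUAL - low) / (high - low)
    print(f"{system}: no-fault costs equal FELA costs when about {share:.0%} "
          f"of departing workers are permanently disabled")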
As figure 2.1 shows, the estimated compensation costs with FECA-level benefits would have been the same as they were under FELA if 65 percent of the workers who left the railroad after their FELA settlement were actually permanently and totally disabled. This break-even point would be 70 percent with LHWCA-level benefits because of the different benefit levels of FECA and LHWCA. According to our analysis, the four large railroads would have paid less for less severely injured workers under a no-fault system than they did under FELA. For those workers who did not leave the railroad but returned to work after their settlement, these four railroads paid $147 million under FELA. In contrast, we estimate they would have paid $50 million and $42 million under the provisions of FECA and LHWCA, respectively—about a $100 million difference. The benefits paid under both FECA and LHWCA would be limited to lost wages and possibly some scheduled benefits (for the loss of, or the loss of the use of, a body part), which are usually calculated as some number of weeks’ wages. Because most economic losses were probably also compensated in the FELA settlement, the difference can likely be attributed to payments for noneconomic damages. Therefore, for those workers who return to work, moving to a no-fault system that does not include noneconomic damages would have saved these railroads about 20 percent of their total compensation costs. Changes in the cost of death claims as a result of replacing FELA with a no-fault system with FECA- or LHWCA-level benefits would likely be small. In 1994, the four large railroads in our analysis paid about $10 million in death benefits. Using the simulation model with a 10-percent discount rate, we estimate that these railroads would have paid about $11 million and $12 million, respectively, under a no-fault system with FECA- or LHWCA-level benefits. Estimating the change in costs for death benefits is uncertain for two reasons. First, the railroads’ data files we examined for our analysis did not identify clearly whether or not some death claims were work-related. Some of the death claims closed in 1994 for the four railroads involved heart attacks and resulted in no payment under FELA. Given this information, it is likely that these deaths were not work-related. However, so as not to underestimate the cost of death claims under the no-fault alternatives, we assumed that all of the death cases reported, whether compensated under FELA or not, were work-related, and we included their costs in our analysis. Second, all federal and state workers’ compensation statutes authorize death benefits to the surviving spouse and dependents of an employee whose death results from a job-related injury or illness. Because the railroads do not necessarily need to record information on spouses and dependents for FELA settlements, we do not know the extent to which this missing information affected the estimates of the FECA and LHWCA death benefits. As a result, death benefits could also be underestimated in our analysis, especially for employees with relatively young surviving spouses and/or dependents. Nevertheless, because death benefits are a relatively small portion of the total FELA costs, it is unlikely that the net effect of any over- or underestimates would significantly affect our estimate of total compensation payments. Replacing FELA with a nationwide no-fault compensation system could reduce the railroads’ administrative costs for handling claims. 
Currently, the large railroads generally handle all of the administrative tasks of negotiating and settling FELA injury claims, including processing claims, investigating injury claims, negotiating settlements, litigating claims, and making payments. Claims for medical benefits are processed within the railroads’ overall employee health insurance programs. AAR estimates that railroads paid about $169 million in 1994 in administrative costs under FELA. Under the no-fault alternatives, administrative costs would likely be less because claims administration would be simplified. Railroad claims staff would be primarily concerned with determining how extensive and severe the injury is, whether the injury was job-related, and whether continuing impairment exists. They would not be involved in investigating negligence or negotiating the value of noneconomic losses. As a result, the administrative time and cost required per claim would likely be less than they are under FELA. However, the costs for employee rehabilitation programs might increase under a no-fault system. Rehabilitation does not appear to receive much emphasis under FELA. Although rehabilitation expenses can be compensated under FELA, few employees appear to elect rehabilitation. According to an official from one railroad, most railroads offer rehabilitation programs to injured employees. However, he said that few employees take advantage of such programs, in part because doing so could jeopardize their FELA settlements. Rehabilitation plays a much larger role in no-fault compensation systems. As we recently reported, both federal and state workers’ compensation programs emphasize returning employees to work with their original employer. Under FECA, federal employees who refuse to cooperate in vocational rehabilitation programs or to make a good faith effort to be reemployed could potentially lose benefits. The impact on railroad costs of changing to a no-fault system that emphasizes rehabilitation is difficult to forecast. While the outlays for rehabilitation itself might be higher, overall compensation costs could be less if rehabilitation allows workers to return to work sooner. In 1994, the average time from the date of an accident to the date of a settlement for three large railroads from which we obtained data on this issue—Conrail, Norfolk Southern, and Burlington Northern—ranged from about 7 to 10 months for direct settlements, 19 to 25 months for cases in which the claimant was represented by an attorney but there was no lawsuit, and 36 to 46 months for cases in which lawsuits were filed. The time it takes to process cases under FELA may not be that different from what it is under FECA and LHWCA when the claimant is represented by an attorney. Contested cases that went through all appeal levels averaged about 26 months to be decided under FECA and about 30 months under LHWCA. These periods do not include any additional time that might elapse between the time of the injury and the filing of an appeal, or any additional time if LHWCA cases go to court. Even if this additional time is short, the overall time taken to process contested FECA and LHWCA cases can be lengthy. The resolution of contested cases under state workers’ compensation may also take a long time. No-fault compensation systems were developed in part to provide injured workers with benefits in a timely manner. However, resolving contested cases under these systems can be lengthy.
Although the Department of Labor noted that most FECA claims are approved for payment the first time they are presented—about 92 percent of all claims received in fiscal year 1994—claims can later be appealed. On average, in fiscal year 1995, holding a hearing took about 10 months and obtaining an appeals board decision took about 16 months. Therefore, it could take, on average, about 26 months to resolve a FECA case that requires both a hearing and an appeals board decision. This period does not include any additional time that might elapse between the time of an injury and the time a case is contested or the time it takes to prepare an appeal. The time it takes to receive a decision from the Employees’ Compensation Appeals Board has increased substantially over the last 6 years—from about 3 months in 1990 to almost 16 months in 1995. The Labor Department attributed this rise to an increase in the number of appeals and to a loss of staff to process the appeals. Resolving contested LHWCA cases can also take a long time. In fiscal year 1995, it took about 12 months, on average, to process an LHWCA case before an administrative law judge and about 18 months to process a case before the Benefits Review Board. Therefore, it could take, on average, about 30 months to process an LHWCA case that is heard by both an administrative law judge and the Benefits Review Board. The time between when the injury occurs and when the case is contested is additional, as is the time between the appeal processes. Over the past 5 years, the time taken to process cases at the Benefits Review Board has ranged from 15 months to 27 months; in fiscal year 1995, it averaged about 18 months. Resolving contested cases under state workers’ compensation sometimes can be even slower than it is under FELA. For example, in 1994 in Illinois—a state with a large number of freight railroad workers—a contested case took, on average, about 45 months to be processed through the various levels of appeal at the Illinois Industrial Commission. Cases could then be appealed further to the state court system. According to data from the California Workers’ Compensation Institute, a trade organization that collects data on California workers’ compensation, the percentage of litigated insurance claims open for at least 28 months increased from 27 percent in 1993 to 38 percent in 1994. However, not all states take a long time to resolve contested claims. For example, in 1994 contested cases took, on average, about 14 months to process in Nebraska. From a financial perspective, railroads might or might not see their injury compensation costs reduced if FELA were replaced by a no-fault compensation system with FECA- or LHWCA-level benefits. The outcome would depend to a great degree on how many employees who leave the railroads after receiving their settlements would be physically able to resume working. However, without better information on these workers, it is difficult to conclude that the railroads would be better off financially under a no-fault system paying FECA- or LHWCA-level benefits. In evaluating any proposals for replacing the current FELA system, it will be important to obtain a better sense of the likely number of injured railroad workers who are physically able to return to work and those who would be permanently disabled. As an alternative to replacing FELA with a no-fault compensation system, the Congress could modify FELA by adding certain restrictions.
Such restrictions could include capping awards for noneconomic losses, limiting the fees received by plaintiffs’ attorneys, requiring the use of arbitration to resolve disputes, or restricting where FELA suits can be filed. The Congress could also permit railroads and their employees to opt out of FELA into some other compensation arrangement. While some of these modifications might reduce the railroads’ FELA costs, they could also adversely affect some injured railroad workers by reducing the compensation they receive in a settlement or limiting the availability or the quality of their legal counsel. The Congress could allow workers to continue under the current FELA provisions through “grandfathering” and subject only newly hired employees to any or all modifications. However, the workers would then be under different rules or different systems, and workers with similar injuries would thus have different compensation benefits. In addition, permitting railroads and their employees to opt out of FELA might make disputes about collectively bargained injury compensation subject to the provisions of the Railway Labor Act, possibly leading to federal intervention to resolve these disputes. Over the past several years, the Congress has proposed capping awards for noneconomic damages in product liability litigation. Recently, the Congress has considered placing a $250,000 cap on noneconomic damages awarded in personal injury suits arising from accidents involving Amtrak. A similar cap could be placed on the noneconomic portion of FELA awards. Because the railroads do not specifically identify the proportions of FELA awards that are for economic and noneconomic damages, we could not estimate precisely the impact of such a cap on FELA costs. However, using assumptions about the proportion of FELA awards that might be for noneconomic damages, we developed hypothetical estimates of the potential impact of a $250,000 cap on four large railroads’ 1994 FELA costs. On the basis of the hypothetical distributions shown in table 3.1, the potential reduction in costs associated with these claims ranged from about $7 million to about $48 million. Although FELA could be modified to cap awards for noneconomic damages, such an action could reduce the benefits received by injured workers. Under the hypothetical distributions shown in table 3.1, the compensation received by an injured worker would be reduced dollar for dollar for any amounts over $250,000 that the worker would have received for noneconomic damages. The railroad labor organizations we contacted uniformly opposed a cap on noneconomic damages, believing it would adversely and unfairly affect their members. Several railroad labor organizations and plaintiff attorneys said a cap would allow railroads to avoid paying the full cost of injuries. In an effort to reduce FELA’s costs, the Congress could place a cap on the amounts payable to plaintiffs’ attorneys. Railroad labor organizations told us that attorneys representing injured workers generally receive no more than 25 percent of a FELA award. AAR estimates that in 1994, attorneys representing injured workers at large railroads received between $182 million and $240 million in fees. Whether the railroads’ FELA costs would decline as a result of a cap depends to a large extent on what cap was established and the relationship between a cap and a FELA settlement.
The railroads’ FELA costs could decline if a cap were set at less than the 25 percent that the plaintiffs’ attorneys receive from a FELA award, assuming that a lower attorneys’ fee would lead to a lower settlement amount. On the other hand, a cap on plaintiffs’ attorneys’ fees might have little impact on FELA costs if settlement amounts stay the same or increase as attorneys push for higher settlements to compensate for the lower percentage allocated to legal fees. A cap on what the plaintiffs’ counsel could receive might also benefit injured workers to the extent that lower legal fees might allow workers to keep a larger share of a settlement. Railroad officials with whom we spoke were split on the possible effects of a cap on costs, and some suggested that a sliding scale could be a better way to control legal fees. Under a sliding scale, the percentage of an award payable as attorneys’ fees would decline as the size of the award increases or would increase if a case is settled quickly. Other workers’ compensation systems limit attorneys’ fees. While FECA and LHWCA do not necessarily limit the amount of an attorney’s fee, they do require that such a fee be approved before being paid and that the fee be reasonable. In particular, FECA requires that approval of attorneys’ fees be based on the actual necessary work performed. In making this determination, such factors as the complexity of a claim and the amount of time spent actually developing and presenting the claim are assessed. State workers’ compensation systems also limit attorneys’ fees. In four of the five state workers’ compensation systems we reviewed—in the states that employed the most railroad workers in 1995—attorneys’ fees are in some way limited. In general, attorneys’ fees in these four states are limited to between 9 and 25 percent of a worker’s compensation award. In Texas, attorneys’ fees are limited to no more than $150 per hour, and guidelines are used to determine how many hours can be billed and for what types of services. The total fees are not to exceed 25 percent of a benefit award. Although limits on the fees received by the plaintiffs’ counsel might have financial benefits to railroads and injured workers, such limits could affect the availability and/or quality of the workers’ legal representation. This appears to have happened in some state workers’ compensation systems. For example, Texas revamped its state workers’ compensation program in 1991 and set limits on attorneys’ fees. In April 1995, the Workers’ Compensation Research Institute reported that initial indications were that the limits placed by Texas on the fees for plaintiffs’ attorneys had caused a number of attorneys who previously had practiced workers’ compensation law to leave the field. The institute’s report concluded that at a minimum, it was more difficult for claimants with low-value claims to find attorneys to handle their cases. The institute noted similar problems in California, reporting in December 1992 that California’s typical 9- to 12-percent limit on attorneys’ fees may have contributed to the devolution of work to paralegals and to the refusal by some attorneys of cases that were more complicated and time-consuming. Arbitration is a mechanism typically used in contract and other commercial disputes to resolve issues quickly and at low cost. The Congress could modify FELA to require that compensation disputes be arbitrated before being tried in a court of law.
As we reported in July 1995, arbitration and other approaches to resolve disputes are being used to avoid the time and cost of litigation and to minimize the adversarial relationship between employers and employees resulting from disputes. The court system has also looked to arbitration and other approaches to resolve disputes quickly and to reduce backlogs in court dockets. The use of arbitration to resolve workplace injury cases has varied. It does not appear to be widely used in the rail industry. Information from the Federal Judicial Center indicates that for 1990-95, of the approximately 6,600 cases identified as FELA cases in the 18 federal district courts with mandatory or voluntary arbitration programs, about 11 percent (710 cases) were successfully closed as a result of arbitration. The remaining cases either went on to trial or were resolved in some other manner. In all of the courts, arbitration was nonbinding, and a trial could be requested following an arbitration decision. In October 1993, the National Center for State Courts reported that over half of the states had experimented with arbitration programs associated with courts since they were introduced in 1952. However, no information was available on the arbitration of FELA cases at the state level. According to the center, the characteristics of state arbitration programs varied, but typically, arbitration was based on the amount of money at stake—frequently $50,000 or less. Finally, three of the five state workers’ compensation programs we reviewed—in California, Illinois, and Texas—had arbitration programs. The success of these programs appears to be limited. For example, in 1994 over 50 percent of Illinois’ arbitration decisions were appealed, and in Texas no arbitration hearings were held. Although arbitration has the potential for saving time and costs, it may be difficult to adapt to a system like FELA. Railroad officials and their attorneys agreed that so far, arbitration has not been very effective in resolving FELA cases. One railroad official told us that arbitration is not useful when a serious disagreement exists between the parties, such as a dispute about negligence. The National Association of Railroad Trial Counsel commented that without fundamental change to FELA itself, arbitration would merely transfer FELA’s negative aspects to an arbitration setting. Some attorneys representing injured workers also do not support arbitration in FELA cases. In their view, for arbitration to be successful, the parties must be able to agree on liability. The larger the gap between the two sides on this and other issues, the more likely it is that a case will proceed to trial and a jury verdict. FELA gives plaintiffs the right to bring cases in either a federal or state court. Railroad management frequently complains that FELA permits injured workers and their attorneys to file suit in localities where judges and juries are favorable to plaintiffs. According to the railroads, these jurisdictions are often far from the scene of an accident where the injury occurred. The Congress could modify FELA to limit the places where lawsuits can be filed. The monetary impact of changing the venue rules is hard to forecast because we do not have data comparing awards in similar FELA cases in different jurisdictions.
Any potential benefit to the railroads must be weighed against taking away injured workers’ right to choose the state court that they believe is the best place for the case to be heard, as well as against overriding the states’ own decisions about who can bring cases in their courts. Although FELA gives plaintiffs the right to bring a suit in either a state or a federal court, plaintiffs are still limited to bringing cases in courts that have jurisdiction to hear the case. Jurisdiction over a defendant in state court is limited by the Fourteenth Amendment to the Constitution to those instances in which the defendant has at least “minimal contacts with the state.” This restriction protects defendants from being sued in a state with which they have no relationship. The rule for determining whether states have jurisdiction is broad and flexible. Suits may generally be brought against companies where they regularly do business. In addition to the constitutional restrictions, venue laws in the 10 states whose venue statutes we reviewed generally restricted suits to the jurisdiction where the claim arose, where the defendant does business, or where the plaintiff resides. While these laws do not leave plaintiffs free to file in any court they wish, plaintiffs generally have the latitude to choose a locality that they believe will provide them with the best outcome. Finally, bringing suit in a court with jurisdiction to hear the case does not necessarily obligate the court to hear the case. Many states have adopted the doctrine of forum non conveniens, which permits courts to dismiss a case when it “is a seriously inconvenient forum for the trial of the action provided a more appropriate forum is available to the plaintiff.” Such a dismissal is left to the trial judge’s discretion and will only be overturned on appeal for abuse of that discretion. The Congress could restrict the venue in which FELA cases can be heard within a state. Proponents of such a change believe that doing so would reduce the railroads’ FELA costs and alleviate inconveniences caused by cases being filed far from where the injury occurred. Opponents believe that restricting where suits can be filed would hinder railroad workers’ access to adequate compensation and could be inconvenient for workers who travel for their jobs and are injured away from home. The cost impact of restricting venue at the state level is uncertain. We did not analyze individual FELA cases, so we are unable to estimate the potential cost savings, if any, of restricting venue. In addition, venue alone does not determine the size of FELA awards. Other factors also play a role, such as the comparative negligence of injured workers and the merit of the arguments in individual cases. Another modification that the Congress could make is to permit railroads and their employees to elect to opt out of FELA. That is, the Congress could allow the railroads and their employees to decide for themselves, through collective bargaining, what workers’ compensation arrangement they prefer. FELA would have to be amended to allow for such agreements. While the option to opt out would give both parties more freedom in arriving at a mutually advantageous solution, making injury compensation part of the overall collective bargaining agreement may have the added consequence of bringing disputes over injury compensation under the Railway Labor Act.
Also, for railroads without unions, as is typical with many small railroads, the possibility arises that workers at some railroads could be covered by different compensation systems, increasing the railroads’ administrative costs and giving employees with similar injuries different compensation opportunities. This situation could negatively affect employees’ morale. Finally, opting out would require changes in either federal or state laws to ensure that injured rail workers are covered by a workers’ compensation program in the absence of FELA. If FELA is made a matter for collective bargaining, federal involvement in the railroad industry might increase. In particular, disputes about the selection of an injury compensation system during contract negotiations could come under the Railway Labor Act. This act governs labor-management relations in the railroad industry and is designed to reduce the likelihood of strikes. The Railway Labor Act does so by mandating a lengthy contract negotiation process and by using federal agencies, such as the National Mediation Board, when necessary, to mediate disputes. If a dispute is not resolved, the President may convene an emergency board to propose recommendations. If a dispute threatens interstate commerce, the Congress may impose emergency board recommendations or other conditions on both railroads and unions. Unless disputes about injury compensation are specifically excluded from the Railway Labor Act, such mechanisms could be triggered, and the federal government could be directly involved with any subsequent settlement of such disputes. Allowing railroads to opt out in a nonunion environment could also raise the issue of injury compensation coverage. There are two aspects to this issue. One is partial coverage of a workforce, which some state workers’ compensation programs do not allow. Instead, all privately employed individuals must be covered unless certain numerical thresholds are met, employees fall into an excepted group, or a waiver is granted. The second issue is potential exemption from coverage. In January 1995, the Department of Labor reported that 15 states allowed exemptions from their workers’ compensation programs if employers had fewer than a threshold number of workers or met other conditions. While the requirements varied, in general, exemptions could be granted for employers with fewer than three to five employees. Our survey of the small railroads found that 116 railroads (about 30 percent of the 398 respondents) employed five or fewer employees. The opting-out alternative would necessitate changes in federal law. Not only would FELA have to be amended, but legislation might be required to provide for alternative coverage. For example, FECA and LHWCA could be modified to cover all railroad workers currently subject to FELA. FECA currently covers employees of the Alaska Railroad who incurred any injuries or illnesses before the railroad was transferred to the state of Alaska in 1985. FECA also covers those railroad workers who are federal civilian employees. LHWCA also covers those workers who work for a railroad but who are engaged in maritime activities, such as loading and unloading vessels. In assessing this option, the Congress would need to consider the extent to which the federal government would be responsible for handling and/or adjudicating railroad workers’ claims for benefits and the potential impact on federal agencies’ budgets and operations from assuming these responsibilities.
Allowing railroads to opt out of FELA might also require changes in state law. FELA currently preempts state law in the coverage of work-related injury compensation of railroad workers. However, in our review of state workers’ compensation law in the 10 states with the most railroad workers, we found that some railroad workers might not be covered if the railroads opt out of FELA and changes are not made in state law. For example, in 3 of the 10 states—Georgia, Nebraska, and Virginia—interstate railroad workers are specifically excepted from state workers’ compensation programs. Railroad workers also might not be covered in Texas if a railroad elects not to be covered by state workers’ compensation. Coverage of railroad workers in the other six states was less clear and could depend on a number of factors, including legal interpretations about the extent to which states have the power to regulate businesses engaged in interstate commerce. The Congress could elect not to subject the current railroad workers to any one or all of the proposed modifications to FELA. In fact, if the Congress chose to replace FELA with a nationwide no-fault system or allow the railroads to come under state workers’ compensation systems, it still could choose to allow existing employees to remain under the current FELA system. Such “grandfathering,” however, may have problems. First, the railroads might have to handle injury claims under two systems or under two sets of rules and restrictions, likely adding to costs rather than reducing them. Second, two railroad workers suffering from the same injuries might have access to different types and levels of compensation. Although grandfathering might assuage opposition to replacing or modifying FELA, doing so might create significant problems. Decisions about modifying FELA are complex and must be viewed in several ways. From the railroads’ perspective, there may be opportunities to reduce costs. For example, capping the noneconomic portion of FELA awards and attorneys’ fees might reduce the railroads’ costs, depending on the portion of FELA settlements represented by noneconomic damages and the relationship between attorneys’ fees and settlements. Similarly, restricting where FELA suits can be filed might reduce costs, depending on how many suits continue to be filed in jurisdictions perceived as being favorable to plaintiffs. From the injured workers’ perspective, however, the issues are different. Modifying FELA could reduce the amount of compensation they receive or limit the availability of legal counsel. There are other complexities as well, such as whether arbitration would actually save time and money if applied to a compensation system that involves issues of negligence like FELA, how opting out could change the character of injury compensation for railroads and their workers, and whether opting out could lead to federal involvement in resolving disputes about compensation. If the Congress decides that it wants to modify FELA, it will need to take into account the possible consequences of some of the proposed changes. For example, permitting current employees to remain under FELA while new employees are under a new system could create tension in the workplace. FELA applies to employees of nearly all railroads regardless of size or type of service provided. We surveyed the small freight railroads to determine, among other things, the impact of FELA on overall operating costs.
We also collected information from passenger railroads—commuter railroads and Amtrak—on their experience with FELA in 1994. Our survey found that small freight railroads experienced lower FELA costs than the large freight railroads. In part, small freight railroads have lower costs because, on average, fewer workdays are lost per on-the-job injury and a lower percentage of their injuries result in lost work time. In general, data obtained on passenger railroads showed similar results. Like the large freight and passenger railroads, the small freight carriers purchase insurance to protect against large FELA payouts and other liabilities. Most large railroads have high deductibles and are considered self-insured for FELA purposes. In 1994, the passenger and small freight railroads experienced lower FELA compensation costs than the large freight railroads. As shown in table 4.1, the passenger carriers paid about $83.7 million, or $0.96 per hour worked, while the small freight carriers paid about $42 million in compensation costs, or $0.96 per hour worked. In contrast, the large railroads paid $2.26 per hour worked—more than twice what the passenger and small freight railroads paid. The cost differences may be traced, in part, to two factors: (1) the average number of lost workdays and (2) the wage rate. Data on injuries from the Federal Railroad Administration showed that the passenger and small freight railroads generally average fewer lost workdays per injury than the large freight carriers. For example, in 1994, the average number of lost workdays per injury for both the small freight railroads and the passenger carriers was less than half that of the large railroads—30 days each compared with 77 days. Also, the proportion of injuries that resulted in lost workdays was lower at the small freight railroads than it was at the large freight carriers and passenger railroads. In 1994, only 54 percent of the injuries at the small freight railroads resulted in lost workdays, compared with 67 percent at the large carriers and 75 percent at passenger railroads. We did not attempt to analyze the reasons for these differences. At the same time, average wages and salaries were more than 20 percent higher at the large railroads—$46,714 compared with $38,730 at the small railroads that responded to our survey—resulting in higher compensation for lost wages per day lost. Average wages and salaries at passenger railroads were even lower—$36,690. Adding the administrative and legal expenses for FELA increased the passenger and small freight railroads’ costs by about 21 percent and 41 percent, respectively. As shown in table 4.2, the small freight carriers paid about $17 million in administrative and legal costs, or $0.39 per hour worked. In contrast, the passenger carriers paid $0.21 per hour worked, and the large freight railroads paid about $0.34 per hour worked for these expenses—about 46 and 13 percent less than the small freight railroads, respectively. In part, this is because of certain economies of scale in processing claims. The passenger and large railroads might have in-house counsel, for example. Although the small railroads experienced lower overall FELA costs than the large railroads, the costs were not the same for all types of small railroads. For example, in 1994, the switching and terminal railroads experienced significantly higher compensation costs under FELA than the regional and local carriers.
As shown in table 4.3, the switching and terminal railroads paid about $1.30 per hour worked, or almost 67 percent more in such costs than the regional carriers and 41 percent more than the local carriers. The higher compensation costs experienced by the switching and terminal railroads may be attributable to at least three factors, including the nature of the work these railroads perform, the degree of union representation, and the average level of wages. First, switching and terminal railroads, by definition, perform switching services in terminal areas; therefore, their employees are exposed to potentially dangerous activities connected with moving and placing freight cars and locomotives. Second, according to our survey, in 1994, the switching and terminal railroads had more employees represented by labor unions than the regional and local railroads—74 percent of employees compared with 61 percent and 33 percent of employees at the regional and local railroads, respectively. Those switching and terminal railroads that were unionized had higher annual FELA compensation costs than the nonunion switching and terminal companies—$2,858 per employee compared with $874 per employee. Finally, in 1994, the switching and terminal railroads paid average annual wages that were comparable to those of the regional carriers and higher than those of the local railroads—$40,707 compared with $40,204 and $32,806 at the regional and local railroads, respectively. Switching and terminal railroads also experienced the highest administrative and legal costs. As shown in table 4.4, these railroads paid almost $9 million in administrative and legal costs, or $0.71 per hour worked. In contrast, regional railroads paid $0.24 per hour worked, while local railroads paid $0.31 per hour worked. The switching and terminal railroads’ legal costs alone amounted to $7.4 million—about three times the legal costs of either the regional or local carriers. Two factors that may have contributed to this result are the number of cases in which an employee filed a lawsuit and the number of cases in which the railroads hired outside defense attorneys. In 1994, 32 percent of the switching and terminal railroads’ FELA cases involved a lawsuit, compared with 23 percent at regional railroads and 13 percent at local railroads. This situation may have increased the need for outside defense attorneys. In 1994, the switching and terminal railroads settled 41 percent of their cases with the assistance of outside defense attorneys, compared with 29 percent at the regional railroads and 20 percent at the local railroads. The availability of insurance to cover a large FELA award is critical to a small railroad because a large FELA award has the potential to severely affect the railroad’s financial health. Although liability insurance that includes FELA coverage has not always been readily available and affordable, it appears that it currently is. At the time of our review, most small railroads had liability insurance that included coverage for FELA payouts. Like the large freight and passenger railroads, most small railroads purchase insurance to protect against large FELA payouts and other liabilities. Fifty percent of the small railroads that responded to our survey had fewer than 13 employees and payrolls under $400,000. Seventy-eight percent had annual operating revenues of less than $5 million. A large FELA award, if paid entirely out-of-pocket, could threaten these railroads’ survival.
To reduce the impact of large FELA awards, these railroads purchase insurance from private companies. The results of our survey showed the critical role that insurance plays in protecting the small railroads against large FELA awards. Almost 88 percent of the small railroads had some form of insurance, typically railroad liability insurance that included FELA coverage, and 68 percent of these policies had deductibles that ranged from $25,000 to $100,000 per claim. In the event of a large FELA award, a railroad with liability coverage would be responsible for its deductible. Only about 12 percent of the railroads that responded to our survey reported that they were self-insured. Railroads can self-insure if it is cost-effective to do so. For example, some railroads choose to self-insure because they have the resources to cover their potential liabilities. Similarly, a railroad with a history of only a few minor injuries per year could also choose to self-insure for FELA purposes, finding it cheaper than paying insurance costs. Our review of the accident and injury histories of the self-insured railroads that responded to our survey showed that about 25 percent of these railroads had no work-related injuries from 1990 through 1994. An additional 25 percent had five or fewer injuries during this period. Although insurance protects the small railroads from paying out-of-pocket for large FELA awards, most FELA settlements are within the limits of the deductible. As shown in figure 4.1, on the basis of our survey results, we estimate that in 1994 the small railroads paid about 89 percent of the FELA compensation costs themselves. Liability insurance paid only about 5 percent of FELA costs, and medical insurers paid the remaining 6 percent. According to our survey, only 10 small railroads had FELA payouts that exceeded their deductible levels. These payouts accounted for $2 million and only 17 of the 1,284 cases settled in 1994. To cover employees’ medical expenses, the small railroads either pay the costs directly or, like some large freight railroads, obtain special health insurance. A special health plan provides for 24-hour coverage of both work-related and off-duty injuries and illnesses. Our survey showed that in 1994, about two-thirds of the small railroads purchased some form of health insurance to cover injured employees’ medical costs. FELA insurance has not always been affordable for the small railroads. In the late 1980s, only one domestic company provided small railroads with insurance that included FELA coverage, and according to one insurance company official, premiums were double what they are today. As a result, the small railroads either paid costly insurance premiums or assumed the risk of these liability costs themselves. We identified eight companies that, as of August 1995, provided small railroads with liability insurance that included FELA coverage. Because of the increased competition, premiums have declined over the past 5 years. Insurance industry officials estimated that in 1995, small railroads’ annual premiums for liability insurance with FELA coverage generally ranged from $25,000 to $50,000. One official described an average policy as costing $50,000 for $5 million in coverage with a deductible of $50,000 per claim. Our survey results for the small freight railroads generally support the estimates of the insurance providers.
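Because most policies carry a per-claim deductible, the division of a FELA payout between a railroad and its insurer follows directly from the award amount, the deductible, and the policy limit. The sketch below uses hypothetical awards together with the deductible and coverage figures of the “average policy” described above, and shows why most settlements end up paid entirely by the railroad:

# Minimal sketch of how a per-claim deductible splits a FELA payout.
# Award amounts are hypothetical; the deductible and coverage limit
# reflect the "average policy" described above.

def split_payout(award, deductible=50_000, coverage_limit=5_000_000):
    railroad_share = min(award, deductible)
    insurer_share = min(max(award - deductible, 0), coverage_limit)
    uncovered = max(award - deductible - coverage_limit, 0)
    return railroad_share, insurer_share, uncovered

for award in (20_000, 60_000, 6_000_000):
    print(award, split_payout(award))
# A $20,000 settlement falls entirely within the deductible; a $60,000
# settlement is split with the insurer; only an award above $5,050,000
# would leave any amount uncovered.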
Of the 264 railroads that provided us with information on their insurance costs, 54 percent paid less than $50,000 for a liability policy that included FELA coverage. Most of these railroads’ deductible levels ranged from $25,000 to $100,000, and just over half of the railroads had annual premium costs that were 10 percent or less of their payroll. Many of the railroads with annual premiums of $200,000 or more had more employees and higher payrolls. The cost of premiums for half of these latter railroads was also in the 10-percent-of-payroll-or-less range. A small railroad could reduce its liability insurance costs by pooling its resources with other small railroads and obtaining a group policy. Such purchasing groups were authorized for FELA purposes by the Liability Risk Retention Act of 1986. A purchasing group would spread all or any portion of its members’ liability exposure and costs. While 10 percent of the railroads in our survey reported that they were part of purchasing groups, upon further review, we found that most of these railroads were technically not in such groups. Rather, these railroads were subsidiaries of railroad management companies and other entities that owned more than one railroad and had group insurance for their railroads. Like a purchasing group, this arrangement serves to spread the railroads’ liability exposure and costs. Most of the railroads that responded to our survey reported very little interest in participating in a purchasing group, and only 16 percent indicated that they had ever seriously considered entering into such an arrangement. For the railroads that had not considered a purchasing group, the most common reason given was that no other railroad had suggested it. Another leading reason was that these railroads did not want to depend on the safety records of other railroads in the underwriting process. FELA does not appear to be any more burdensome for passenger and small freight railroads than it is for the large freight railroads; indeed, our review suggests that passenger and small freight railroads are less burdened by FELA than the large freight railroads and that they currently can insure against catastrophic losses. Therefore, we found no reason, at least on the basis of financial considerations, that these railroads need to be treated differently in any deliberations about whether to either modify FELA or replace it with a no-fault compensation system.
Pursuant to a congressional request, GAO examined how replacing the Federal Employers' Liability Act (FELA) with a no-fault compensation system would affect the railroad industry. GAO found that: (1) the cost of replacing FELA with a nationwide no-fault compensation system depends on the number of injured railroad workers permanently disabled and the number of workers unable to return to work at preinjury wages; (2) the costs under a no-fault compensation system would be the same as or lower than FELA costs; (3) overall injury compensation costs would be lower under a no-fault system if fewer than 70 percent of injured rail workers are able to return to work; (4) railroads would save an average of $100 per employee if injured workers continue to work after receiving settlement; (5) a no-fault compensation system would reduce railroads' administrative costs, but limit the amount of compensation and legal counsel that injured workers receive; (6) small railroads have fewer lost workdays and lower injury rates than large railroads; (7) small railroads have lower FELA costs than large railroads and rely on insurance payments to avoid high FELA payouts; (8) railroads could reduce their administrative costs by placing a cap on compensation for noneconomic losses and limiting plaintiff's legal fees; (9) railroad management and labor disagree over how well FELA is working and whether it should be replaced or changed; and (10) FELA is no more burdensome for passenger and small freight railroads than it is for large freight railroads.
In recognition of their service to our country, the Department of Veterans Affairs (VA) provides medical care, benefits, social support, and lasting memorials to veterans and their families. It is the second-largest federal department, with approximately 250,000 employees. In fiscal year 2008, VA reported incurring $97 billion in obligations for its overall operations. VA provides services to veterans and their families primarily through its three line administrations: The Veterans Health Administration operates a nationwide network of 154 hospitals, 995 outpatient clinics, 135 community living centers, 49 residential rehabilitation treatment programs, and 232 community-based counseling centers. The Veterans Benefits Administration provides assistance and benefits such as veterans’ compensation, survivors’ benefits, and employment assistance through 57 veterans’ benefits regional offices. The National Cemetery Administration manages 130 national cemeteries. To support its services to veterans and their families, VA relies on an assortment of business systems, including 13 different systems that currently support its asset and financial management. However, the department has long recognized that its business systems and processes are inefficient and do not effectively support the department’s mission. For example, according to the department, systems are not integrated; manual data entry that involves labor-intensive accounting processes is widespread; business processes are not standardized; and processes and systems require multiple entry of business information and result in untimely financial reporting. Since fiscal year 1991, the department has reported on the need for an integrated financial management system and has reported financial management system functionality as a material weakness. This weakness continues to exist because many of VA’s systems are outdated, leading to inefficiencies in the reliable, timely, and consistent preparation, processing, and analysis of financial information for the department’s consolidated financial statements. To address this weakness and to improve stewardship and accountability over its resources, VA has for over a decade been pursuing improvements in its business processes and replacement of its existing financial and asset management systems with an integrated financial management system. The department’s first attempt to replace its financial and asset management systems, CoreFLS, began in 1998. The goal of this modernization effort was to develop a single system to integrate the many financial and asset management systems used across the department. VA had planned to complete CoreFLS in March 2006; however, it terminated development of the system in July 2004 after CoreFLS pilot tests determined that it did not fully support the department’s operations and that the initiative suffered from significant project management weaknesses. According to VA’s Office of Inspector General (OIG), the department had obligated about $249 million of the $472 million that had been budgeted for the initiative by the time of its termination. Following the failed CoreFLS pilot tests, VA hired Carnegie Mellon University’s Software Engineering Institute (SEI) to perform an independent assessment of the project. In June 2004, SEI identified a number of management and technical deficiencies that had undermined the success of the initiative. SEI identified multiple findings related to problematic technical and functional execution, as well as poor management execution.
Technical and functional problems included CoreFLS’s inability to perform essential financial management functions, security weaknesses, and poor usability. Management problems were identified in the areas of acquisition and program management, business process reengineering, and transition planning. In addition, in August 2004, VA’s OIG reported multiple findings related to CoreFLS deployment, such as inadequate training, inability to monitor fiscal and acquisition operations, inaccurate data, and project management and security weaknesses. Further, in August 2007, VA’s Management Quality Assurance Service (MQAS) summarized findings from four CoreFLS reviews completed between August 2005 and August 2006. Among the findings, MQAS identified numerous fiscal and contract administration issues resulting from poor administrative internal controls such as improper reimbursements of task orders and travel expenses. Collectively, VA identified 141 findings related to problems with the CoreFLS initiative, which the department categorized into functional areas of responsibility such as acquisition management, organizational change management, program management, and systems engineering. In a subsequent effort to capture lessons learned and ensure that mistakes from CoreFLS would not be repeated in later initiatives, VA developed a repository, in which it aggregated the findings from the three independent reviews of the initiative. In September 2005, in a renewed effort to replace its financial and asset management systems, VA began work on FLITE. In this regard, the department undertook activities related to planning and requirements development. For example, the department documented business requirements and business processes, initiated coordination for reporting and financial data warehouse development, conducted a market analysis of providers with the software and hosting capability to support VA’s existing financial management system, established key personnel requirements to provide program support and awarded a program support contract, and started developing numerous planning documents (e.g., program management plan, acquisition plan, and concept of operations). According to VA’s planning documents, FLITE is a multiyear development effort that is projected to deliver a fully operational system by 2014 at a total estimated cost of $608.7 million. The overall objectives of the FLITE program are to implement accessible and enterprise-level standardized business processes that result in increased efficiencies and enhanced internal controls; provide VA executives and managers with timely, transparent financial and asset management information to make and implement effective policy, management, stewardship, and program decisions; and provide business data and information in a secure, shareable, open, and efficient manner to facilitate a service-oriented atmosphere. The FLITE program includes two main projects to acquire the integrated asset and financial management system: an asset management component, referred to as the Strategic Asset Management (SAM) initiative, and the financial management component, referred to as the Integrated Financial Accounting System (IFAS). The program also includes a third project, to acquire a data warehouse that is intended to provide financial and logistics data reporting and analysis.
SAM is intended to consolidate the asset and inventory management functions and the associated work management processes currently performed by multiple legacy applications into an advanced integrated system. It is to be the system of record for VA’s physical assets and perform asset and inventory management, real property management, information technology (IT) asset management, and work order and project management functions. VA has chosen IBM’s Maximo Enterprise Asset Management software suite to implement these capabilities. IFAS is to be the financial, procurement, and accounting management component, and, together with SAM, is intended to replace VA’s legacy Financial Management System (FMS) and the Integrated Funds Distribution, Control Point Activity, Accounting, and Procurement (IFCAP) system. The data warehouse is projected to consolidate data from multiple transactional systems, primarily SAM and IFAS, for improved reporting, querying, and analysis capability. It is also intended to allow users to run larger and more complex queries and reports faster, without affecting the performance of the source systems. Figure 1 shows a simplified view of the program’s components. The program is a collaborative effort between the Assistant Secretary for Information and Technology, who serves as VA’s Chief Information Officer, and the Assistant Secretary for Management, who serves as VA’s Chief Financial Officer. Various groups within VA have different roles and responsibilities for overseeing and managing programs. Figure 2 depicts the relationships between these oversight groups and the FLITE program. The roles and responsibilities of each oversight group are as follows: The VA Executive Board provides the Secretary of Veterans Affairs with a forum for discussing programs with senior leadership before decisions are made. The Strategic Management Council makes recommendations about programs to the VA Executive Board. The Programming and Long Term Issues Board focuses on long-term, multiyear program planning. The Budgeting and Near Term Issues Board is responsible for overseeing budget formulation and execution activities. The IT Leadership Board is responsible for adjudicating inter- and intraboard issues about programs that cannot be resolved between the Programming and Long Term Issues and Budgeting and Near Term Issues Boards. The FLITE Oversight Board is responsible for making decisions regarding FLITE business requirements, policies, and standards. The FLITE Program Office is responsible for overseeing and coordinating all aspects of the program. The office is responsible for performing these functions through the Program Director’s Office (PDO), which is responsible for business requirements and processes, and the IT Program Management Office (PMO), which is responsible for technical solutions. Project teams are responsible for managing SAM, IFAS, and the data warehouse. In addition, other VA organizations provide the office with quality assurance, acquisition, and technology support. These program-specific and VA supporting organizations are depicted in figure 3. Table 1 describes the components that make up the program office and supporting VA organizations. VA is employing a multiphase approach for both the SAM and IFAS projects, which are to be implemented by contractors using commercial off-the-shelf systems.
Specifically, these components are to be implemented through sequenced acquisitions and phased deployment and integration. The systems are planned to be implemented initially at pilot sites and subsequently refined and validated at beta sites before national deployment. The purpose of the pilot phase is to perform a final validation of the selected commercial off-the-shelf system and associated business processes in a production environment, gain experience in deploying the system, and obtain acceptance from the user community. The beta phase is to further hone the rollout capabilities by deploying the system to a limited number of sites that span the range of VA’s organizational environments. Following the beta phase, the department plans to incorporate lessons learned from both phases and produce a set of repeatable processes that can be employed during national deployment of the system. For SAM, the department’s plans include implementation at one pilot site and 15 beta sites. The SAM pilot contractor is to evaluate and analyze VA’s business processes and requirements for a fit with the Maximo software’s capabilities and produce updated business process documents based on the department’s needs. Also, the contractor is to train the users at the pilot site, as well as provide operations and maintenance and help desk services. The pilot phase is expected to last about 12 months. Subsequent to the pilot, the department plans to deploy SAM at 15 VA beta sites over a period of approximately 12 months. The component is then to be deployed nationwide over 21 months, with completion expected by May 2013. Plans for IFAS include implementing the FMS replacement at five pilot/beta sites and implementing the IFCAP replacement at two pilot/beta sites. The IFAS pilot phase is currently scheduled to begin in the first quarter of fiscal year 2010. The department plans to deploy this component in two separate subphases over approximately 4 years. The first subphase, which will replace FMS with a commercial off-the-shelf financial management system, is expected to take about 2 years to complete. The second subphase, which is planned to be carried out concurrently with the first subphase, will replace IFCAP with the IFAS commercial off-the-shelf financial management system and is expected to take just over 4 years to complete. VA’s approach to implementing the data warehouse calls for developing the warehouse after the underlying data structures of SAM and IFAS are defined and stabilized. The department expects to complete the data warehouse in the first quarter of fiscal year 2014. Figure 4 depicts the program timeline, from program proposal through deployment of the SAM, IFAS, and data warehouse components. In 2009, the program office undertook various activities, including issuing the IFAS request for proposals (February), awarding a program management support contract (March), awarding the SAM pilot project contract and beginning work (April), issuing a request for proposals for independent verification and validation support (July), and initiating planning for the data warehouse (September). According to program officials, as of September 2, 2009, the department had spent approximately $90.8 million on FLITE.
This amount included $73.0 million for about 40 contract actions on behalf of the program office: $28.5 million for program management and technical support, $27.8 million for software licenses, $10.9 million for the SAM project, $5.5 million for analyses (e.g., requirements analyses), and $0.3 million for other program activities (e.g., training). Both we and VA’s OIG have previously reported on the FLITE initiative. In a September 2008 report, we noted that key planning documents related to the initiative lacked specificity and detail, and that VA had not addressed all the findings in the CoreFLS findings repository. We recommended that VA add more specificity and details to key planning documents, such as the concept of operations and work breakdown structure, and address all findings in the CoreFLS findings repository to minimize risk to the successful implementation of FLITE. In response to our report, as of September 2009, VA had updated key planning documents and reported that it had taken actions that addressed all of the findings identified in the repository. In September 2009, VA’s OIG reported on VA’s effectiveness in managing the FLITE program. The office noted, among other things, that although program managers had taken steps toward addressing the CoreFLS findings, deficiencies similar to those found in CoreFLS were also evident in FLITE. For example, OIG reported that FLITE program functions were not fully staffed. VA and its contractor have begun one of the two planned pilot systems—the SAM component. Specifically, in April 2009, the department contracted with General Dynamics Information Technology Inc. to implement Maximo at the VA Medical Center in Milwaukee, Wisconsin. Among the activities the contractor is expected to perform are analyzing business processes, documenting requirements, configuring Maximo, and performing system tests. As of mid-September 2009, VA reported that, with the contractor only 5 months into the 1-year time period planned to complete the pilot, the project had fallen 2 months behind schedule. This 2-month schedule slip was a consequence of the contractor falling behind in its efforts to perform tasks and deliver products that are necessary to implement the pilot system. Specifically, of the 34 tasks planned to be undertaken by mid-September, the contractor reported that 11 had not yet been started—including conducting a security assessment and predeployment testing—and that of 23 tasks that had been initiated, 16 were behind schedule. For example, among the tasks that the contractor noted as behind schedule were analysis of security requirements, business process analysis, and system configuration. Regarding the seven remaining tasks, two had reportedly been completed and five were identified as being on schedule. The contractor reported that it had completed a requirements traceability matrix and was on schedule with respect to starting up a project management office, performing organizational change management activities, and developing quality assurance and control programs. Further, with respect to the delivery of products, the contractor reported that it had delivered only 7 of 37 products due by mid-September. The SAM project management plan and the requirements management plan were among the products that were delivered.
Products that had not yet been delivered included the Maximo system configuration document, intended to provide detailed instructions to enable a trained Maximo administrator to incorporate all VA configuration requirements, and the SAM system security plan. VA attributed the project being 2 months behind schedule to a shortage of FLITE program office human capital resources and poor project management by the contractor. Specifically, according to the program director, the program did not have the personnel it needed during the initial months of the SAM pilot project to provide the contractor with the information required to make planned progress. Regarding the contractor’s project management, VA stated that the contractor provided a project manager who did not possess the skills necessary to deliver quality and timely products, delayed hiring a project scheduler and used an initial project scheduling approach that was incorrect, used an ineffective and inefficient approach to analyzing VA’s business processes and underestimated the time needed to obtain a thorough understanding of the processes, and underestimated the effort necessary to configure a database server used in the pilot’s development environment. In mid-September, the FLITE program director stated that the department had filled almost all of the program office vacancies and that the contractor had begun to address its project management weaknesses. Nevertheless, according to the program director, while the department does not expect any further delays in completing the SAM pilot, it does not expect to recover the 2-month schedule slippage that has already occurred. As a result, the department projected completion of the pilot in 14 months, instead of 12 months as originally planned. Additionally, activities are under way to initiate the IFAS pilot. Specifically, the department issued a request for proposals for a pilot contractor in February 2009. A contract for the IFAS pilot is planned for award in late October 2009. VA has taken steps to institute effective management of the FLITE program; however, the department has not yet fully established key capabilities needed to ensure that system components will be implemented as planned. The department recently made progress toward filling program office staff vacancies. Nonetheless, more work is needed to fully establish program management capabilities in areas that are important to the development of its integrated financial and logistics system. Until VA completes efforts to develop and reconcile its cost estimate; comply with EVM system standards; implement performance measures for its schedule; include all relevant federal and system requirements; and perform effective, independent verification and validation, it will have increased risk that FLITE will experience cost overruns and schedule delays and will not provide the capabilities that users need. Our past work has found that the success of federal programs depends on having effective strategic human capital management and, in particular, having the right number of people with the right mix of knowledge and skills. VA has recently taken steps to fill long-standing vacancies in the FLITE program that had adversely affected the program’s ability to maintain schedules. Specifically, in mid-September, the program acquired 36 staff, filling 111 of 112 required positions.
According to the Acting Assistant Secretary for the Office of Management, vacant FLITE program positions were filled by individuals who were reassigned, detailed, or newly hired when the VA Deputy Secretary became aware of the program’s need for staff resources. As a result of the department’s recent actions to fill vacant positions, the office should be better positioned to effectively manage the program. Federal guidelines recommend that operations and maintenance costs over the entire estimated life cycle of an investment be included in a cost estimate. Inclusion of these costs over the time period corresponding to the life of the investment is encouraged by the federal government’s guidance for managing capital assets because such costs are a key element for establishing the total cost of ownership. Further, our Cost Estimating and Assessment Guide describes effective cost-estimating practices, including performance of a risk and uncertainty analysis and development of an independent cost estimate that provides an unbiased test of whether the program’s estimate is reasonable. Typically, the two estimates are reconciled. In August 2008, the FLITE program office developed a program cost estimate of $608.7 million for fiscal years 2007 through 2014—when FLITE systems are expected to achieve full operational capability. However, the office did not project operations and maintenance costs over the entire estimated life of the FLITE investment, and it did not perform a risk and uncertainty analysis as encouraged by best practices. Program officials stated that they did not consider life-cycle operations and maintenance costs in their estimate because they wanted to capture only the cost for developing the FLITE system up to its full operational capability. Also, rather than perform a risk and uncertainty analysis of their own, the program office planned to rely on risk analyses by an outside entity, the Department of the Navy Space and Naval Warfare Systems Command (SPAWAR), that the department engaged to generate a risk-adjusted independent cost estimate. Completed in April 2009, the SPAWAR estimate identified costs totaling $1.899 billion for the life of the program and included $1.061 billion of estimated operations and maintenance costs for fiscal year 2015 through fiscal year 2024, which represented the entire estimated life of the initiative. According to SPAWAR officials, they used our Cost Estimating and Assessment Guide as the method for developing the independent estimate. Also, to align with VA’s estimate, SPAWAR used standardized cost elements and definitions to develop a probability-based estimate of $837.8 million for fiscal years 2007 through 2014. This estimate was $229.1 million higher than the department’s estimate for this period. The department’s estimate was not based on standardized cost elements or probability-driven risk and uncertainty cost assessments. VA has not yet reconciled its cost estimate with SPAWAR’s estimate. According to department officials, a significant number of end-of-fiscal-year procurement requests and the department’s prioritization of IT acquisitions had affected the timing of plans to reconcile the estimates. Program officials stated that they intend to incorporate federal policies and requirements, as well as address funding, budgetary, or contractual issues necessitated by the reconciliation. According to the officials, the department plans to initiate this work in December 2009 and to complete it in March 2010.
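SPAWAR’s probability-based approach is, in essence, the risk and uncertainty analysis that GAO’s Cost Estimating and Assessment Guide describes: each cost element is treated as a distribution rather than a point value, and repeated sampling yields a range of program totals with associated confidence levels. The Python sketch below illustrates the idea with invented cost elements and triangular distributions; it is not SPAWAR’s model, and the figures are placeholders:

import random

def simulate_total_cost(elements, trials=10_000):
    # Monte Carlo risk analysis: sample each (low, most likely, high)
    # cost element from a triangular distribution and sum the draws.
    totals = sorted(
        sum(random.triangular(low, high, mode) for low, mode, high in elements)
        for _ in range(trials)
    )
    median = totals[trials // 2]
    p80 = totals[int(trials * 0.8)]   # 80-percent confidence level
    return median, p80

# Hypothetical cost elements in millions of dollars (low, most likely, high);
# these are placeholders, not FLITE's actual work breakdown structure.
elements = [(150, 200, 320), (180, 250, 420), (120, 160, 260)]
median, p80 = simulate_total_cost(elements)
print(f"median total: ${median:.0f}M; 80th percentile: ${p80:.0f}M")
# A point estimate built from the "most likely" values alone (here $610M)
# sits below the risk-adjusted median because the assumed distributions
# are skewed toward overruns.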
Until the reconciliation is completed, effective administration of FLITE program planning, budgeting, acquisition, and performance management activities could be jeopardized if accurate cost data are not available to guide the execution of these functions. Completion of the reconciliation, which should include estimated operations and maintenance costs for the life of the program, is essential to increase the reliability of the FLITE cost estimate and reduce the risk that acquisition plans, budgets, and performance management activities will be unsuccessful or inefficient. OMB and department policies require major programs to use EVM to measure and report program progress. EVM is a tool for measuring program progress by comparing the value of work accomplished with the amount of work expected to be accomplished. Such a comparison permits actual performance to be evaluated, based on variances from the cost and schedule baselines—collectively referred to as a performance measurement baseline. Identification of significant variances and analysis of their causes helps program managers determine the need for corrective actions. Before EVM analysis can be reliably performed, developing a credible cost estimate is necessary to provide program managers with a clear definition of the cost, schedule, and risks associated with the scope of work planned. These inputs are then used to create a performance measurement baseline for EVM analysis. In addition, federal policy requires that systems used to collect and process EVM data be compliant with the industry standard developed by the American National Standards Institute (ANSI) and Electronic Industries Alliance (EIA), ANSI/EIA Standard 748. Program officials have recognized the importance of reliable EVM and finalized the FLITE Program Measurement Earned Value Management Plan in August 2009. The plan identified roles and responsibilities, applicable policy and guidance, and the program’s EVM implementation approach. According to program officials, programwide earned value reporting that will include government, program management support, and SAM project work activities is expected to begin in October 2009. However, while VA plans to begin reporting earned value performance in October 2009, a reliable cost estimate, which is necessary for EVM reporting, is not expected to be completed by that time. Specifically, as noted earlier, the department has not reconciled its cost estimate for the program with SPAWAR’s independent cost estimate. Program officials do not expect reconciliation of the cost estimate to begin until 2 months after earned value reporting is scheduled to begin. Additionally, VA officials have not yet ensured that all EVM systems for FLITE are certified for compliance with ANSI/EIA Standard 748. These compliance assessments are necessary to demonstrate the capability of providing reliable cost and schedule information for earned value reporting. Specifically, the compliance assessment for the SAM pilot contractor’s system has not yet been completed. While program officials did not provide information that explained why a compliance assessment of the contractor’s EVM system had not yet been completed, they stated that the contractor has a plan to obtain system certification. This activity is not expected to be complete until January 2010, 3 months after earned value reporting for the program is scheduled to begin. 
Until the agency has completed reconciling its cost estimate and ensured that contractors comply with EVM system industry standards, VA will have an increased risk of reporting and managing the program based on unreliable performance data. GAO’s Cost Estimating and Assessment Guide states that the success of a program depends in part on having a reliable schedule that realistically depicts the program’s work activities to a specific degree of detail, reasonably indicates when those work activities will occur, estimates how long they will take to complete, and shows how the work activities are related to each other. For example, a reliable schedule would indicate when one work activity depends upon the completion of another before it can start and that required resources (e.g., labor and materials) are assigned to all activities. Overall, the schedule provides the road map for the orderly execution of a program, helps identify and address potential problems, provides a baseline to gauge progress, and promotes accountability. VA has not yet established a schedule for the program that is reliable. Program officials stated that they baselined (i.e., formally established) an integrated master schedule in January 2009. However, in the program’s August and September 2009 Risk & Issues reports, program officials noted that the integrated master schedule was not complete and did not represent all program requirements. The reports also identified that the SAM pilot schedule (a key component of the overall program schedule) did not include sufficient detail to trace project tasks to contract requirements. Our analysis also concluded that the schedule was unreliable and noted that, in addition to issues VA identified with the program schedule, the integrated master schedule did not include key program management activities for reconciling the program cost estimate and implementing EVM, nor did it identify resources assigned to activities already under way or expected to start in the near future. Further, the schedule did not identify all dependencies and activities and did not break down all dependencies and activities to a sufficient level of detail to measure performance. Program officials acknowledged these deficiencies and stated that program management staffing shortages and delays in receiving a reliable project schedule from the SAM contractor have affected their ability to produce a reliable schedule for the program. They stated that in July 2009, they began working with stakeholders to address schedule issues and plan to improve the reliability of their schedule by finalizing a revised integrated master schedule by October 2009. Until VA completes a revised integrated master schedule that includes all key program activities broken down to a sufficient level of detail and identifies all resources and dependencies, the program’s efforts to measure progress and identify potential problems will be impaired, and the program will have increased risk of missing critical milestones for system delivery. According to SEI guidance, the requirements for a system should describe the functionality needed to meet user needs and perform as intended in the operational environment. Federal agencies also must ensure that their financial management systems comply with federal standards mandated by the Federal Financial Management Improvement Act of 1996. 
Also according to SEI guidance, an organization can ensure system requirements are based on business requirements by tracking the requirements from inception of the project and agreement on a specific set of business requirements to development of the system requirements, detailed design, code implementation, and test cases necessary for validating the requirements. Requirements must be traceable forward and backward (i.e., bidirectional traceability) through the development life cycle. Traceability helps reduce the risks of fielding a system that does not meet the needs of its users, incurring schedule delays, and increasing costs. VA has developed an initial set of 1,700 requirements that need to be addressed in the development of the SAM and IFAS components. FLITE requirements consist of core financial and procurement requirements related to the IFAS project, as well as inventory, supply, and real property requirements related to the SAM project. To develop the initial set of requirements for FLITE, program officials stated that they analyzed VA’s current and planned financial and asset management business processes and researched the Financial Systems Integration Office’s (FSIO) Core Financial System Requirements and Inventory, Supplies, and Materials System Requirements publications. The initial set of requirements was further defined and refined by obtaining input from consultants and VA financial and asset management experts. The department included all mandatory core financial system requirements in its IFAS requirements but did not include all mandatory inventory, supplies, and materials requirements in its SAM requirements. For example, our analysis showed that VA did not include requirements for recording whether goods and services are accepted or rejected and for performing a systematic review and follow-up of overdue in-transit items. Program officials explained that they did not include these requirements because they had not determined whether the requirements were applicable to the SAM project. The officials agreed to incorporate the missing requirements. VA is also in the process of finalizing its real property requirements for the SAM beta phase and still plans to develop additional requirements related to procurement for IFAS. Further, the department is identifying data analysis and reporting requirements for the data warehouse. Regarding requirements traceability, SAM project officials acknowledge that mapping system requirements to the related business requirements is fundamental to effective requirements management. However, according to FLITE officials, they made a business decision not to establish bidirectional traceability between the business and system requirements included in the SAM pilot request for proposals. Instead, they decided to require the pilot contractor to establish traceability between the business and system requirements after the contractor analyzes and refines the requirements. According to the officials, the contractor plans to complete these tasks by December 2009. In addition, program officials stated that they plan to establish bidirectional traceability between IFAS business and system requirements under the IFAS implementation contract scheduled to be awarded in October 2009. In this regard, the IFAS request for proposals states that the implementation contractor will be required to finalize IFAS requirements, as well as maintain and document the traceability of all requirements to design, develop, integrate, and test specifications.
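Bidirectional traceability of the kind SEI describes can be pictured as a simple mapping checked in both directions: every business requirement should trace forward to at least one system requirement, and every system requirement should trace backward to some stated business need. A minimal sketch, with invented requirement identifiers:

# Minimal bidirectional traceability check; requirement IDs are invented.
business_to_system = {
    "BR-01 record acceptance or rejection of goods": ["SR-101", "SR-102"],
    "BR-02 follow up on overdue in-transit items": [],   # forward-trace gap
}
system_requirements = {"SR-101", "SR-102", "SR-999"}

# Forward trace: business requirements with no implementing system requirement.
untraced_business = [br for br, srs in business_to_system.items() if not srs]

# Backward trace: system requirements serving no stated business need.
traced = {sr for srs in business_to_system.values() for sr in srs}
orphan_system = sorted(system_requirements - traced)

print("no system requirement:", untraced_business)
print("no business requirement:", orphan_system)
# Gaps in either direction mean the delivered system may omit needed
# functions or include functions no one asked for.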
As the department develops its requirements, it is important that all relevant and applicable federal financial management system requirements be identified and incorporated into the program’s requirements to ensure its planned financial management systems meet users’ needs and comply with applicable federal laws. Further, until it has established traceability between the business and system requirements, VA will not be positioned to know whether the system requirements are complete and effectively address each business requirement. According to recognized industry standards and our prior reports, the purpose of independent verification and validation is to provide an independent review of system processes and products to ensure that quality standards are being met. As we have previously noted, the use of independent verification and validation is a recognized best practice for large and complex system development and acquisition programs such as FLITE and involves an independent organization conducting unbiased reviews of processes, products, and results to verify and validate that they meet stated requirements and standards. VA policy recognizes the importance of addressing independent verification and validation results in a timely manner. Accordingly, the department’s Systems Quality Assurance Service was tasked with performing independent verification and validation activities for the FLITE program. In April 2009, this organization developed a Software Quality Assurance Plan to guide independent verification and validation activities for the program. The plan was developed consistent with industry standards and generally contained the required elements. The plan also outlined reviews that would be performed by the Systems Quality Assurance Service, including product (e.g., program and project deliverables), process, internal controls, test readiness, and production readiness reviews. In addition, the Systems Quality Assurance Service is responsible for advising and assisting with the program’s implementation of a suite of tools to support requirements management, change management, risk management, and test management. Independent verification and validation of the FLITE program has been focused primarily on the review of program and project deliverables. According to program officials, as of September 2009, the Systems Quality Assurance Service had reviewed 30 FLITE work products and provided findings and recommendations to document owners. Out of 1,064 total findings, 947 (approximately 89 percent) had been fully addressed by the program or had been identified as obsolete by the Systems Quality Assurance Service. Of the 117 remaining findings, 59 had been addressed but had not yet been reflected in revised documents, and 58 required additional attention. Of the 58 findings and recommendations that remained open, the SAM pilot site readiness plan accounted for 18 that were identified in December 2008. According to the Systems Quality Assurance Service, these findings focused on the need for consistency with other project documentation, clarity in the timing of site activities, and incorporation of planned site-level activities into the program work breakdown structure. In addition, according to department officials, the FLITE acquisition strategy has two findings and recommendations that were identified in December 2008 and that remain to be addressed.
These findings are related to VA’s approach for acquiring SAM and IFAS integration support and the program’s focus on front-end acquisition activities, rather than full life cycle acquisition processes. Unknown or incomplete system integration requirements may result in significant rework and adversely affect the program’s cost, schedule, and quality. According to FLITE program officials, they have not had the human capital resources they need to address all the independent verification and validation findings and recommendations in a timely manner. As a result, independent verification and validation findings that highlight important program issues (e.g., determining an approach for integrating SAM and IFAS) have not received the attention that they need. As discussed earlier, the staff resources recently added could help address the program’s inability to focus sufficient attention on resolving findings from initial independent verification and validation activities. It remains unclear whether the program office will be positioned to efficiently resolve findings raised when the scope of independent verification and validation activities expands to include system testing and production readiness reviews, which affect the extent to which FLITE components will meet stated requirements and quality standards. The pilot for VA’s new asset management system has experienced a 2-month schedule delay just 5 months after award of the contract. While VA has recently taken steps to address the staffing shortages that have substantially contributed to this delay, it has not yet fully established the management capability necessary for FLITE to be successful. For example, the department’s program cost estimate did not represent total program costs, nor has the estimate been reconciled with an independent estimate—a process that could increase its reliability. Further, it has not yet implemented the EVM needed to ensure the reliability of the department’s programwide reporting on the initiative. Also, VA has not yet made revisions that are needed to increase the reliability of the program’s integrated master schedule. In addition, the requirements for the two major program systems, SAM and IFAS, do not yet address all the functions expected of federal asset management and financial management systems. Finally, key findings from independent reviews of the program have not been fully addressed on a timely basis. As a consequence, the department is faced with significant challenges in implementing FLITE’s pilot systems as planned, while simultaneously working to fully establish program management capabilities. Program officials recognize the importance of reconciling their cost estimate, ensuring compliance with EVM system standards, establishing a reliable schedule, ensuring all relevant federal and system requirements are identified and traceable, and addressing all independent verification and validation findings. Further, they have stated that they plan to take such actions. However, just as program officials needed the department’s support in filling long-standing program office vacancies, the full support of the department’s top management is critical to ensuring that planned actions are executed. If the program is not effective in addressing its management weaknesses, the department increases the risk of repeating its unsuccessful earlier attempt to modernize its financial and logistics systems.
To help guide and ensure successful completion of FLITE, the Secretary of Veterans Affairs should direct and ensure that the Assistant Secretary for Management and the Assistant Secretary for Information and Technology take the following five actions: Improve the reliability of the program cost estimate by ensuring that the estimate includes system operations and maintenance costs and that the estimate is reconciled with the independent cost estimate. Improve the reliability of program earned value management reporting by ensuring that contractor earned value management systems comply with industry standards. Complete a revised integrated master schedule that includes all key program activities, including reconciliation of the program cost estimate and implementation of earned value management, and identifies all resources and dependencies. Ensure that all relevant and applicable federal financial management system requirements are included in FLITE’s requirements and establish and maintain requirements traceability. Ensure that all comments from independent verification and validation reviews are addressed. The VA Chief of Staff provided written comments on a draft of this report. In its comments, the department concurred with our recommendations and described actions to address them. For example, the department stated that it plans to reconcile the FLITE program cost estimate with the independent cost estimate by the second quarter of fiscal year 2010; ensure that future contractors’ EVM systems comply with industry standards and begin an independent review of the program’s EVM compliance by the first quarter of 2010; and include the reconciled program cost estimate in the integrated master schedule by the third quarter of fiscal year 2010. Further, the department stated that it plans to validate the completeness of FLITE requirements by mid-November 2009 and ensure that outstanding comments from independent verification and validation reviews are addressed by mid-December 2009. If the recommendations are properly implemented, they should better position VA to effectively manage the FLITE program. The department also provided a technical comment, which we have addressed in the report as appropriate. The department’s written comments are reproduced in appendix II. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies of the report to interested congressional committees, the Secretary of Veterans Affairs, and other interested parties. In addition, the report will be available at no charge on our Web site at http://www.gao.gov. If you or your staffs have questions about this report, please contact Valerie C. Melvin at (202) 512-6304 or melvinv@gao.gov, or Kay L. Daly at (202) 512-9095 or dalykl@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. 
As requested, the objectives of our study were to (1) determine the status of the Financial and Logistics Integrated Technology Enterprise’s (FLITE) pilot system development and (2) evaluate key program management processes, including the Department of Veterans Affairs’ (VA) efforts to institute effective human capital management, develop a reliable program cost estimate, use earned value management, establish a realistic program schedule, employ effective requirements development and management, and perform independent verification and validation. To determine the status of the pilot system development, we obtained and analyzed program documentation, including program management plans, contracts, schedules, briefing slides, meeting minutes, and project status reports, to identify the planned FLITE pilot activities and deliverables and determine the extent to which these tasks had been completed; and supplemented department program documentation and our analyses by interviewing department and contractor officials, such as the program director, and observing project status meetings. We also evaluated VA’s progress toward implementing our prior recommendations related to adding specificity and details to key planning documents by comparing updated documents, including the Program Management Plan and the Strategic Asset Management (SAM) Concept of Operations, to prior versions. To evaluate key program management processes, we compared program staffing plans with the program’s staffing resource reports to determine the extent to which program human capital needs have been met; compared the program cost estimate and estimating activities to Office of Management and Budget guidance and GAO’s Cost Estimating and Assessment Guide to determine the estimate’s completeness and the effectiveness of the estimating activities; reviewed department documentation, such as the program’s plan for earned value management implementation, and compared it to federal policy and GAO’s Cost Estimating and Assessment Guide to determine the department’s preparedness for conducting reliable earned value management; reviewed the program schedule and compared it to planned activities, deliverables, and practices described in GAO’s Cost Estimating and Assessment Guide to assess the schedule’s reliability; analyzed program documentation, including the department’s business requirements, concept of operations for FLITE, traceability matrix, and requirements management plan, to determine the extent to which they reflect practices such as those recognized by SEI and include federal financial management system requirements; and reviewed program documentation, such as the software quality assurance plan, quality management plan, and technical review reports, to determine the extent to which the program has addressed independent verification and validation findings. We conducted this performance audit at VA headquarters in Washington, D.C., from November 2008 through October 2009 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. In addition to the contacts named above, key contributions to this report were made by Mark T. Bird, Assistant Director; Michael S.
LaForge, Assistant Director; Heather A. Collins; Neil J. Doherty; Rebecca Eyler; David A. Hong; Jacqueline K. Mai; Yvonne D. Moss; Robert L. Williams, Jr.; and Leonard E. Zapata.
Since 2005, the Department of Veterans Affairs (VA) has been undertaking an initiative to develop an integrated financial and asset management system known as the Financial and Logistics Integrated Technology Enterprise (FLITE). FLITE is the successor to an earlier initiative known as the Core Financial and Logistics System (CoreFLS) that the department undertook in 1998 and discontinued in 2004 because it failed to support VA's operations. In light of the past performance of CoreFLS and the Office of Management and Budget's designation of FLITE as high risk, GAO was asked to (1) determine the status of pilot system development and (2) evaluate key program management processes, including VA's efforts to institute effective human capital management, develop a reliable program cost estimate, use earned value management (a recognized means for measuring program progress), establish a realistic program schedule, employ effective requirements development and management, and perform independent verification and validation. To do so, GAO reviewed program documentation and interviewed relevant officials. The contract had been awarded and performance of work tasks had begun for one of two planned pilot systems--the Strategic Asset Management system. However, as of mid-September 2009, the project had fallen behind by 2 months, and the contractor had missed deadlines for initiating and completing planned tasks and delivering work products such as a system security plan. In particular, the contractor had not started 11 of 34 tasks, including conducting a security assessment, and was behind schedule on 16 of the remaining 23 tasks, including analyzing business processes. Program officials generally attributed the delays to VA having insufficient program and acquisition staff to perform necessary activities associated with awarding and executing the pilot contract and to poor project management by the pilot system contractor. A second project--for the Integrated Financial Accounting System pilot--is expected to start in October 2009. VA has taken steps to institute effective management of FLITE; however, the department has not yet fully established capabilities needed to ensure that the program will be successfully implemented. Specifically, VA has (1) recently filled long-standing staff vacancies, and only one program office staff opening remains; (2) not developed a cost estimate that includes total program costs or reconciled its estimate with an independent estimate; (3) not performed key actions necessary for reliable earned value management; (4) not yet established a reliable schedule; (5) not identified all mandatory federal financial management system requirements and ensured that system requirements are based on business requirements; and (6) not addressed all of the findings of its independent verification and validation organization in a timely manner. Until VA reconciles its cost estimate, ensures compliance with earned value management system standards, establishes a reliable schedule, ensures all relevant federal and system requirements are identified and traceable, and addresses all independent verification and validation findings, it could continue to experience schedule delays and further increase its risk of not providing the financial and asset management capabilities that users need.
We identified seven categories of noncareer position appointments (positions from which individuals were converted as discussed in this report).

Schedule C: Appointments are generally noncompetitive and are for excepted service positions graded GS-15 and below that involve determining policy or that require a close confidential relationship with the agency head or other key officials of the agency.

Noncareer SES: Appointments are to positions with responsibility for formulating, advocating, and directing administration policies. Noncareer SES appointees have no tenure and serve "at the pleasure of the department or agency head."

Limited Term SES: Appointments may be made for up to 36 months to a position with duties that will end within 36 months or an earlier specified time period.

Limited Emergency SES: Appointments may be made for up to 18 months to meet a bona fide, unanticipated, urgent need.

Presidential appointees, including executive level and noncareer ambassadors: Appointments are made by the President, generally to fill high-level executive positions. Appointees support and advocate the President's goals and policies.

Noncareer legislative branch: Appointments are primarily to positions in member and committee offices.

Other statutory at-will individuals: Sometimes called administratively determined positions. Appointments are made under specific authority provided to certain agencies to appoint individuals to these positions noncompetitively. Appointees serve at the pleasure of the agency head and can be removed at will. The salary levels can be determined by the agency head within certain limits.

Although noncareer appointments are generally noncompetitive, individuals holding noncareer positions may have previously held career positions. For example, Limited Term SES and Limited Emergency SES positions are often filled by federal employees who have previously held career positions and achieved career status.

We identified four categories of career position appointments (positions to which individuals were converted as discussed in this report).

Career (competitive service): Appointments are made through a governmentwide or an "all sources" merit staffing (competitive) process, including recruitment through a published announcement, rating and ranking of eligible candidates, and establishment of OPM-created or approved qualification standards.

Career-conditional (competitive service): Appointments are for permanent positions in the competitive service and are generally the initial positions for new hires. Appointees must complete a 1-year probationary period and a total of 3 years continuous creditable service to attain a career appointment.

Career (SES): Appointments are to top-level policy, supervisory, and managerial positions above grade 15 of the General Schedule. Career SES positions require a further review and approval of the merit staffing process by OPM and of the proposed selectee's executive/managerial qualifications by an OPM-administered SES Qualifications Review Board (QRB), which is composed of members of the SES from across the government.

Career Excepted Service (Non-Schedule C): Appointments involve agency positions that are not subject to OPM's competitive hiring examination. Agencies have authority to establish their own hiring procedures to fill excepted service vacancies. Such procedures must comply with statutory requirements such as merit system principles and veteran's preference, when applicable.
Career excepted service individuals have adverse action appeal rights to the Merit Systems Protection Board, which is responsible for protecting the federal government's merit-based system of employment by hearing and deciding cases involving certain personnel actions. As mentioned previously, the majority of authorities and procedures governing appointments to career positions are outlined in Title 5 of the U.S. Code. The merit system principles are among the fundamental statutory requirements that apply to civil service appointments. These principles require that agencies provide a selection process that is fair, open, and based on skills, knowledge, and ability. Another statutory requirement is the prohibition on certain personnel practices, such as granting any individual a preference or advantage in the application process, including defining the manner of competition or requirements for a position to improve the prospects of any particular applicant, or failing to fulfill veteran's preference requirements. In addition to these statutory requirements, agencies must follow OPM's regulations in Title 5 of the Code of Federal Regulations, which also outline required procedures for making appointments to career positions, such as providing public notice of all vacancies in the career SES. For career excepted service (non-Schedule C) positions, agencies are not required to follow OPM's hiring regulations for the competitive service; however, they must apply veteran's preference and follow merit system principles when making most of these appointments. OPM has oversight authority to ensure that agencies are following the merit system principles when hiring. In accordance with this authority, OPM has traditionally required agencies to seek its pre-appointment approval for the conversion of certain noncareer appointees (Schedule C and Noncareer SES) into certain career positions (competitive service and career SES) during a presidential election period. OPM defines the specific duration of this period every 4 years. In a memorandum to department and agency heads dated March 18, 2004, OPM defined the most recent pre-appointment review period as beginning on March 18, 2004, and concluding on January 20, 2005, Inauguration Day. In that memorandum, OPM also reminded agencies of the need to ensure that agency personnel actions remain free of political influence and meet all relevant civil service laws, rules, and regulations and that all official personnel records should clearly document continued adherence to merit principles and the avoidance of prohibited personnel practices. OPM also required agencies to seek its approval before appointing a current or former (within the last 5 years) Schedule C or Noncareer SES employee to the competitive service or career SES during this period. As stated previously, OPM does not include conversions to career excepted service (non-Schedule C) positions in this pre-appointment review process. Thirty-six of the 144 reported conversions covered in this review occurred during the most recent presidential election review period. In addition to conducting pre-appointment reviews during election years, OPM also reviews all career SES appointments. For these appointments, OPM first reviews the selection process to ensure merit staffing procedures were followed, then forwards the documents to an OPM-administered SES QRB that reviews and approves the executive/managerial qualifications of agency-proposed selectees.
Twenty-three of the 41 agencies we reviewed reported 144 conversions of individuals from noncareer to career positions from May 1, 2001, through April 30, 2005. The other 18 agencies reported no conversions during this period. Four agencies, the Department of Health and Human Services (HHS), the Department of Justice (DOJ), the Department of Defense (DOD), and the Department of the Treasury (Treasury), accounted for 95, or 66 percent, of the total 144 conversions reported, as seen in figure 1.

Of the 144 reported conversions, individuals were converted from the following categories of noncareer positions, among others:
47 Limited Term SES positions
25 other statutory at-will positions
6 Limited Emergency SES positions

The 144 reported conversions were made to the following categories of career positions, among others:
64 career SES positions
33 career excepted service (non-Schedule C) positions

Appendix III provides more detail on the characteristics of the noncareer and career positions to which the individuals were converted, e.g., titles of positions, grades, salaries, and appointment dates.

Agencies appear to have used appropriate authorities and followed proper procedures in making the majority (93) of the 130 conversions at the GS-12 level or above. However, for 18 conversions, it appears that appropriate authorities were not used or proper procedures followed. For 19 conversions, agencies did not provide us with enough information to make a determination—many of these conversions were to excepted service positions, where agencies develop their own hiring procedures and have limited documentation requirements.

For 93 of the 130 conversions at the GS-12 level and above, our review of the merit staffing files and official personnel files at the respective agencies indicated that the agencies generally followed the procedural requirements associated with each appointing authority called for by federal law and regulations, including merit system principles such as fair and open competition and fair and equitable treatment of applicants. For example, agencies generally complied with the competitive service examination process, which is intended to ensure that merit system principles are followed. The process includes notifying the public that the government will accept applications for a job, rating applications against minimum qualification standards, and assessing applicants' competencies or knowledge, skills, and abilities against job-related criteria to identify the most qualified applicants.

For 18 conversions, it appeared that in making the conversions, agencies did not adhere to merit system principles or may have engaged in prohibited personnel practices. We found that most of these cases involved one of several categories of improper procedures, and a few cases involved more than one. Each of these conversions is discussed in more detail in appendix IV.

In 7 of the 18 instances, it appears agencies created career positions specifically for particular noncareer individuals, tailored career positions' qualifications to closely match the noncareer appointees' experience, or preselected an applicant for a career position. In one case, it appears that the Department of Homeland Security (DHS) created a career excepted service (non-Schedule C) position and appointed a former noncareer appointee to it without providing an opportunity for other potential candidates to apply.
In two cases, correspondence between HHS officials suggested that new competitive service positions were created specifically for particular individuals holding statutory at-will positions under Title 42 of the U.S. Code. In two other cases, it appears HHS tailored the mandatory qualifications in the vacancy announcement to closely match the Limited Term SES individuals' experience, giving them a distinct advantage in the qualifications rating process for the career SES position. In one case, HHS may have tailored a career position to match a Schedule C appointee's experience, giving the political appointee an unfair advantage in the competitive process. In one case, it appears DOD preselected a former Schedule C appointee for a career position, giving the political appointee an unfair advantage in the competitive process.

In 4 of the 18 instances, it appears agencies did not apply veteran's preference properly when converting noncareer appointees to competitive service and career excepted service (non-Schedule C) positions. In one case, it appears HHS selected a Schedule C appointee over an applicant with veteran's preference who had scored higher than the noncareer appointee in the competitive examination process. Although agencies may make such selections, they are required to justify such decisions in writing. It appears the agency did not document its justification. In one case, documents suggest the Environmental Protection Agency (EPA) did not apply veteran's preference points to applicants' scores, despite the presence of seven veteran's preference eligibles on the applicant roster for a competitive service position. Had the points been assigned, five of the applicants would have scored higher than the Schedule C appointee (who was eventually selected) in the competitive examination process. Additionally, it appears that the agency may have created the career position specifically for this individual. In one case, it appears HHS did not apply veteran's preference points for two applicants. Had the points been assigned, the applicants would have scored higher than the Title 42 statutory at-will employee (who was eventually selected) in the competitive examination process. In one case, it appears that DOJ converted a former Schedule C employee to a career excepted service (non-Schedule C) position without providing the opportunity for veteran's preference eligibles to apply.

In 3 of the 18 instances, agencies converted individuals who appeared to have limited qualifications and/or experience relevant to the career excepted service (non-Schedule C) positions. In one case, DOJ converted a former Schedule C appointee even though it appears he did not meet the position's minimal qualifications as outlined in the vacancy announcements. In two cases, the Department of Housing and Urban Development (HUD) and DHS selected former Schedule C appointees who appeared to have limited experience relevant to the career positions.

In the four remaining conversions, agencies failed to follow proper procedures in other respects during the appointment process. In two cases, HHS posted vacancy announcements for SES positions for less than the minimum time required by SES merit staffing procedures. In one case, OPM raised concerns to the Small Business Administration (SBA) that the appointment did not adhere to merit system principles during the competition. Based on our review, it appears that SBA proceeded with the conversion without fully addressing OPM's concerns.
In one case, OPM cited weaknesses to the Consumer Product Safety Commission (CPSC) concerning the selected candidate's qualifications for the SES position. In our view, it appears that CPSC proceeded with the conversion without fully addressing OPM's concerns.

Concerning OPM's role in these 18 conversions, 7 of the 18 were subject to OPM review and approval: 2 because they fell within the presidential election pre-appointment review period as prescribed by OPM and 5 because they were to SES-level positions. Of the 2 conversions subject to OPM's presidential election pre-appointment review process, in one instance it appeared the agency did not submit the conversion to OPM for its review. In the other instance, OPM reviewed the file but did not take action before the January 20, 2005, deadline for the pre-appointment review period. Of the 5 conversions to SES-level positions, an OPM-administered QRB reviewed and approved the selectee's qualifications for each of these appointments, although in one case OPM initially rejected the selectee's qualifications, then approved the selection after the agency revised and resubmitted its application.

For 19 conversions, we did not have sufficient information to make a determination as to whether appropriate authorities and proper procedures were followed. More details on these conversions can be found in appendix V. Sixteen of these conversions were to career excepted service (non-Schedule C) positions at DOJ. For the remaining 3 conversions, agencies could not locate the required files or documents. As mentioned previously, agencies are not required to follow OPM's competitive examination process when making appointments to career excepted service (non-Schedule C) positions. Rather, individual agencies and components within these agencies may develop their own procedures and guidelines for hiring, including documentation requirements for hiring decisions. Although OPM requires agencies to maintain records of the rating, ranking, and selection process for competitive service appointments, these documentation requirements do not apply to the excepted service. Agencies are, however, required to follow merit system principles and apply veteran's preference when making most appointments, including those to excepted service positions. Given the limited documentation requirements for these positions, we could not obtain adequate records from DOJ to reconstruct its hiring process. More specifically, DOJ could not provide us with documentation of its decision-making process for these appointments, such as, in some cases, a copy of the selectee's application.

For the 3 remaining conversions, HHS could not locate certain files. In two cases involving conversions of Limited Term SES individuals to the career SES, HHS could not locate the official personnel file or the merit staffing file. In the other case, HHS could not locate the merit staffing files for a former Schedule C appointee who converted to a competitive service position.

As we have stated in previous reports, the ability to convert noncareer employees to career positions is an appropriate and valuable process available to agencies. However, the noncompetitive nature of the appointment process for noncareer positions can create concerns about whether these individuals have benefited from favoritism or improper advantage when being converted to career positions, even the appearance of which could compromise the integrity of the merit system.
OPM has established a process to help ensure that conversions occurring during presidential election periods, as well as conversions to SES-level positions, meet merit system principles. While this process has helped prevent some improper conversions, questionable conversions can still occur. Also, conversions to excepted service positions are excluded from the presidential election period pre-appointment review process. While these conversions to excepted service positions are exempt from the OPM competitive examination process, they are subject to other statutory requirements and therefore could involve some of the same concerns, as demonstrated by our review. As we discussed previously, as a part of its oversight authorities, OPM also conducts periodic reviews of agencies' examining and hiring activities to ensure they are consistent with the merit system principles and other laws and regulations. OPM should review the 18 conversions we identified where certain agencies appeared not to have used proper authorities or followed proper procedures, and take any appropriate corrective actions. Also, OPM should determine whether conversions to excepted service positions should be subject to its pre-appointment review during presidential election periods or its periodic reviews of agencies' examining and hiring activities, and if so, what information the agencies should provide to OPM in that regard.

To help ensure that federal agencies are following appropriate authorities and proper procedures in making conversions of noncareer to career positions, we recommend that the Director, OPM:
review the 18 conversions we identified where it appears that certain agencies did not use appropriate authorities and/or follow proper procedures in making these conversions and determine whether additional actions are needed, and
determine whether conversions to career excepted service positions should be subject to OPM review—such as through the pre-appointment review OPM conducts of other conversions during presidential election periods and/or during OPM's periodic audits of agencies' examining and hiring activities—and, if so, determine what information agencies should provide on such conversions.

We obtained comments on a draft of this report from the Director of OPM. OPM agreed with our recommendation to review the 18 cases we identified where agencies did not use the appropriate authorities or adhere to merit system principles and veteran's preference. OPM also noted that 9 of the 18 cases we identified came from HHS, where OPM recently withdrew a delegated examining authority from one of its components following an oversight review. With respect to our recommendation that OPM determine whether conversions to career excepted service positions should be subject to the pre-appointment review OPM conducts during presidential election periods, OPM stated it would incorporate a review of agency practices in this area during its normal review of agency delegated examining units. Because we view such action as responsive to the intent of our recommendation to enhance oversight over conversions agencies make from noncareer to career excepted service positions, we revised the recommendation to include OPM's intended action. OPM's written comments are in appendix VI.
We also verified with cognizant agency officials the number of conversions reported by each agency for the period from May 1, 2001, through April 30, 2005, including those agencies that reported no conversions during this period. In addition, we provided draft summaries of the 18 conversions on which we had questions and the 19 conversions where we could not make a determination to the agencies that had reported them. The Department of Justice, Environmental Protection Agency, Department of Housing and Urban Development, Department of Defense, Small Business Administration, and Consumer Product Safety Commission provided technical clarifications to the discussion of their respective conversions, which we incorporated as appropriate.

As agreed with your offices, unless you announce the contents of this report earlier, we plan no further distribution until 30 days after the date of this report. At that time, we will send copies to the Director of the Office of Personnel Management and other interested parties. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-6806 or stalcupg@gao.gov. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII.

For the purpose of this review, we identified seven categories of noncareer position appointments: Schedule C, Noncareer SES, Limited Term SES, Limited Emergency SES, Presidential appointees, Noncareer legislative branch, and Other statutory at-will positions. Individuals holding noncareer positions may have previously held career positions. For example, Limited Term Senior Executive Service (SES) and Limited Emergency SES positions are often filled by federal employees who have previously held career positions (prior to being appointed to the noncareer positions from which they were later converted). We identified four categories of career position appointments: Career (competitive service), Career-conditional (competitive service), Career (SES), and Career Excepted Service (Non-Schedule C). Definitions of these noncareer and career positions can be found in the background section of this report. We also reviewed our prior work on conversions for information on these position categories.

To select the executive branch departments and agencies for this review, we included (1) all 15 departments and (2) 26 agencies that had oversight or other regular responsibilities for federal workforce issues or that were of particular interest to the congressional requesters of the review. Under these criteria, we identified 41 departments and agencies, which are listed in appendix I.

To determine the number of individuals who converted from noncareer to career positions, we asked the 41 agencies to complete a data collection instrument (DCI) for the conversions made from May 1, 2001, through December 31, 2003, and provide the information to us by April 15, 2004. To follow up on conversions made on or after January 1, 2004, we asked the agencies to report the number of conversions to us on a monthly basis, beginning on May 15, 2004, and continuing through April 30, 2005, even if an agency had no conversions.
In the DCI, we asked the agencies to provide specific information about the conversions: the career positions to which the individuals were appointed, including the position title, pay grade, annual salary, and date of appointment. We also asked for information on the convertee's former noncareer position. In addition, we asked the agencies to provide the Standard Form-50B (SF-50B) for all appointments. The SF-50B is the official record of the personnel action. We used the SF-50B to verify the information that agency officials provided in the DCI. We also verified the number of conversions reported by the agencies with agency officials for the period from May 1, 2001, through April 30, 2005, including those that reported no conversions during this period. During our review, we also cross-referenced conversions the agencies reported to us that were made during the presidential election review period from March 18, 2004, to January 20, 2005, with the information agencies reported to OPM during the same period. According to OPM, it received and reviewed 24 proposed conversions during the period.

We reviewed the authorities used and procedures followed for conversions reported to us at the GS-12 level and above. We first identified the authority that the agency cited for the appointment on the SF-50B and verified that it was the appropriate appointment authority for that conversion. We also examined the contents of the individual's official personnel file (OPF) and, when appropriate, the merit staffing case files to determine if there was evidence that the criteria for using the authority were met. To determine if proper procedures were followed, we reviewed merit staffing files and OPFs to determine what steps were taken in the application and conversion process. Merit staffing files document promotion and hiring decisions for specific career appointments. The OPF contains SF-50Bs, position descriptions, and records from individuals' previous appointments, including former noncareer positions. If we had questions concerning a conversion, we interviewed officials at the appointing agency and requested documentation to support their statements. We compared the procedures used in the conversion process to the federal personnel laws and regulations contained in Title 5 of the U.S. Code and Title 5 of the Code of Federal Regulations. We also referred to agencies' merit staffing plans. We based our analysis on our review of documents within the case file and additional supporting documents provided by the agency in response to our questions.

Agencies are not required to follow OPM's competitive hiring provisions for career excepted service positions and can establish their own hiring procedures for these positions, but they are not required to have written copies of these procedures. We requested and, when available, obtained and reviewed copies of the hiring procedures from each agency that reported using this authority to determine if proper procedures were followed. Specifically, we requested additional information from the Departments of Homeland Security, Housing and Urban Development, Treasury, and Justice (DOJ), and the Consumer Product Safety Commission (CPSC). Because each individual component in DOJ has its own appointing authority, we contacted the following eight individual components that reported conversions: Office of the Solicitor General, Executive Office of U.S.
Attorneys, Executive Office of Immigration Review, Office of Legal Counsel, Civil Rights Division, Tax Division, Criminal Division, and the Federal Bureau of Investigation (FBI). We also met with the Department of Justice's Director of Attorney Recruitment and Management to discuss the process DOJ uses to convert individuals to Excepted Service (Non-Schedule C) positions. The CPSC, the Department of the Treasury, and four components (Civil Rights Division, Criminal Division, FBI, and Executive Office of U.S. Attorneys) within DOJ provided GAO with written procedures. HUD and three components (Executive Office of Immigration Review, Tax Division, and Office of Legal Counsel) within DOJ reported they did not have written procedures. DHS and one component within DOJ (the Office of the Solicitor General) did not respond to our request for written procedures. We verified the number of conversions reported by the agencies with agency officials for the period from May 1, 2001, through April 30, 2005, including those that reported no conversions during this period. In addition, we provided draft summaries of the 18 conversions where it appeared agencies did not follow proper procedures and the 19 conversions where we could not make a determination to the respective agencies. The Department of Justice, Environmental Protection Agency, Department of Housing and Urban Development, Department of Defense, Small Business Administration, and Consumer Product Safety Commission provided technical clarifications to the discussion of these conversions, which we incorporated as appropriate. We conducted our work in Washington, D.C., from March 2004 through March 2006 in accordance with generally accepted government auditing standards.

Examples of positions involved in the reported conversions include: Deputy Assistant to the Secretary of Defense (Chemical, Demilitarization and Threat Reduction); Deputy Assistant Secretary of the Navy (Infrastructure Strategy and Analysis); Director, Human Resources Center (Atlanta); and Medical Officer (Pediatrics). Pay plan code definitions:
AD: Administratively determined; rate set by agency.
EJ: The Department of Energy Organization Act Excepted Service. Code is for use by the Department of Energy only.
ES: Senior Executive Service.
GG: Grade similar to General Schedule.
IJ: Immigration Judge Schedule. The code is for use by the Department of Justice only.
IR: Internal Revenue Service Broadband Classification and Pay System positions. Code is for use by the Internal Revenue Service (Department of the Treasury) only.
NH: Business Management and Technical Professional, DOD Acquisition Workforce Demonstration Project. Code is for use by the Department of the Air Force, Department of the Army, Department of Defense, and Department of the Navy only.
SK: Securities and Exchange Commission individuals formerly under the GS, GM, and EZ pay plans. Code is for use by the Securities and Exchange Commission only.

For 18 of these conversions, it appears that agencies did not follow proper procedures or may have violated other statutory or regulatory requirements. OPM has oversight authority to ensure that agencies are following the merit system principles when hiring. In accordance with this authority, OPM has traditionally required agencies to seek its pre-appointment approval for the conversion of certain noncareer appointees (Schedule C and Noncareer SES) into certain career positions (competitive service and career SES) during a presidential election review period.
OPM defined the most recent pre-appointment review period as beginning on March 18, 2004, and concluding on January 20, 2005, Inauguration Day. Additionally, career SES positions require a further review and approval of the merit staffing process by OPM and of the proposed selectee's executive/managerial qualifications by an OPM-administered SES Qualifications Review Board (QRB), which is composed of members of the SES from across the government. Seven of the 18 conversions were subject to OPM review and approval: 2 because they fell within the presidential election pre-appointment review period as prescribed by OPM and 5 because they were to SES-level positions. Of the 2 conversions subject to OPM's presidential election pre-appointment review process, in one instance it appeared the agency did not submit the conversion to OPM for its review. In the other instance, OPM reviewed the file but did not take timely action before the January 20, 2005, deadline for the pre-appointment review period. Of the 5 conversions to SES-level positions, an OPM-administered QRB reviewed and approved the selectee's qualifications for each of these appointments, although in one case a QRB initially rejected the selectee's qualifications, then a different QRB approved the selection after the agency revised and resubmitted its application.

Department of Defense (DOD)

The eventual selectee served as a Schedule C Special Assistant to the Principal Deputy Under Secretary of Defense for Policy from December 2001 through August 2002. In September 2002, she was reassigned to another Schedule C position as a Special Assistant to the Director, Program Analysis and Evaluation. Official records indicate the eventual selectee occupied this position until her conversion to the career position of Assistant for Plans and Integration on January 25, 2004. However, according to her resume, the eventual selectee assumed the duties of the career position beginning in March 2003. Based on her description, as the Assistant for Plans and Integration, the eventual selectee: (1) developed and integrated force employment and planning policy guidance related to homeland defense; (2) developed, organized, and coordinated policy guidance working groups with other DOD components related to homeland defense activities; (3) ensured policy oversight for force employment issues; and (4) provided recommendations to the Deputy Assistant Secretary for Force Planning and Employment and the Assistant Secretary of Defense for Homeland Defense. The agency contracted with OPM's Philadelphia field office to advertise and rate applications for the career position of Assistant for Plans and Integration in the Office of the Assistant Secretary of Defense for Homeland Defense. On November 20, 2003, OPM sent DOD a selection package containing a certificate listing eight eligible candidates in the order of their rating and their applications. Based on documents OPM provided to us, a 10-point compensable veteran had the highest rating, and the eventual selectee was listed fourth. In a December 2, 2003, memorandum to the selecting official, the Principal Director, Organizational Management and Support referred to the eventual selectee by name as the agency's "primary candidate." On December 30, 2003, the selecting official interviewed the veteran's preference eligible, who voluntarily declined consideration for the position at that time. On the same day, the agency selected the former Schedule C appointee to the career position.
We did not receive a complete merit staffing file from DOD for this conversion. Based on the available documents, there is some evidence that DOD may have preselected the former Schedule C employee for the position. Preselecting an applicant for a career position would violate the statutory prohibition against granting unauthorized advantages to individuals in the hiring process. Since we were unable to review the resumes of all the applicants, we could not make a determination as to the comparative qualifications of the candidates, including the veteran's preference eligible who declined the position. Since this conversion did not occur in the presidential election review period, it was not subject to OPM's pre-appointment review.

Department of Health and Human Services (HHS)

The eventual selectee worked as a statutory at-will Title 42 Research Fellow at the National Institutes of Health for almost 5 years prior to applying for the career position of Health Scientist Administrator. According to her resume, as a research fellow, she was assigned to a variety of work details involving both scientific and administrative duties within the Basic Neurosciences Program at the National Institute for Neurological Disorders and Stroke. On December 18, 2004, the selecting official expressed interest in hiring the eventual selectee (identifying the individual by name) as a program officer and requested the creation of a full-time equivalent position for that purpose. This request to hire the eventual selectee was approved by agency officials in electronic mail correspondence dated December 22, 2004. On January 12, 2005, the health scientist administrator position was created in the Division of Basic Neuroscience and Behavioral Research. According to the position description, as health scientist administrator, the individual would be responsible for organizing an extramural research program to support the study of neurobiology as it relates to chemical addiction and neuroimmunology in AIDS research. The agency advertised the position, 19 individuals applied, and four candidates were certified as eligible for consideration. The eventual selectee had the lowest rating of the four candidates (one candidate subsequently declined prior to the selection being made) but was selected for the position on February 1, 2005. The documents provided to us suggest the position may have been created specifically to hire the eventual selectee. This would represent a preselection of an applicant, which would violate the statutory prohibition against granting unauthorized advantages to individuals in the hiring process. Since this conversion did not occur in the presidential election review period and did not involve a Schedule C or Noncareer SES appointee, it was not subject to OPM's pre-appointment review.

The eventual selectee served in a statutory at-will Title 42 position as Scientific Review Administrator from March 2001 to March 2005. Prior to this, the eventual selectee had served within the Division of Extramural Activities at the National Institute for Allergy and Infectious Diseases (NIAID) for several years in a competitive position.
She listed the following responsibilities for the statutory at-will position on her resume: (1) performed special scientific reviews and led the initiation, planning, management, and oversight of division activities related to international research funding and resource management, (2) represented the division director at meetings concerning extramural international grants, contracts, and capacity sustainability in developed and developing countries, (3) helped develop policies that affect foreign research funding and methods to improve financial management of international awards, and (4) managed the NIAID Select Agent process to assist with the biodefense program. On April 9, 2004, agency officials, including the selecting official, circulated a position description for the recruitment of a director for the Office of International Extramural Activities. The routing slip stated that this position description was to recruit the eventual selectee by name. On June 24, 2004, a new interdisciplinary competitive service position was created in the Division of Extramural Activities. On January 12, 2005, the agency advertised the new position as Director, Office of International Extramural Activities. The position description stated that the director, among other duties, would: (1) lead the initiation, planning, management, and oversight of division activities related to international research funding and resource management, (2) be the senior advisor and management official concerning the management of international research awards, (3) help develop policies and systems that affect foreign research funding and assist international parties to design contract management systems, and (4) be the expert advisor to the NIAID on biodefense issues. The agency advertised the position; three applicants applied; and only the eventual selectee was certified as eligible for consideration. The agency selected her for the position on February 4, 2005. Two factors taken together create the appearance that the individual may have been preselected for this position: (1) the routing slip circulated within the agency specifically identifying the eventual selectee by name and (2) the clear and extensive overlap between the responsibilities previously cited by the eventual selectee and the duties listed in the description of the newly created position. Preselecting an individual or applicant for a competitive position grants an unauthorized preference to an individual in the employment process and is a violation of federal law. Since this conversion did not occur in the presidential election review period and did not involve a Schedule C or Noncareer SES appointee, it was not subject to OPM's pre-appointment review.

Prior to appointment to the career position, the eventual selectee served for over 2 years as a Schedule C Special Assistant to the Deputy Director in the Office of Child Support Enforcement. According to the special assistant position description, the eventual selectee served as a principal source of advice and counsel to the Deputy Director/Commissioner on various assignments, projects, and work groups involving program policies, reviews, evaluations, plans, and approaches that affected the Office of Child Support Enforcement. On April 4, 2004, HHS created, and 2 weeks later advertised, the Information Management and Dissemination Coordinator position in the Immediate Office of the Assistant Secretary for Planning and Evaluation.
The position description provided that the coordinator would be the senior advisor for the office's information dissemination and communication program, leading content and technical aspects of the development and use of technology-based solutions. Eleven applicants were certified as eligible for consideration, including three veteran's preference eligibles. After preference points were allocated, both the eventual selectee and a veteran's preference eligible candidate received the highest score of 100 points on the rating tool. In accordance with law, the veteran's preference eligible candidate was listed first on the certificate of eligibles, but the agency passed over this candidate to choose the selectee for the position on September 19, 2004. Agencies may pass over veteran's preference eligible candidates based on suitability grounds or may ask OPM to disqualify an eligible candidate due to medical reasons. However, in exercising this authority, agencies must file written reasons for passing over a veteran's preference eligible, either with OPM or with a delegated examining authority. The selection certificate contained a handwritten notation referring to a veteran's preference objection letter. However, no such letter or approval from the examining office was contained in the file provided to us. Based on the available documents, it appears the agency did not follow, or did not properly document that it followed, statutorily required veteran's preference hiring procedures for competitive service appointments. Since this conversion involved a former Schedule C appointee and took place during the presidential election review period, it was subject to OPM's pre-appointment review. The agency should have referred the case to OPM prior to appointing the selectee to the position. Based on available documents from both HHS and OPM, there is no indication whether this referral or review occurred.

The eventual selectee served as a Title 42 Biologist in the Office of Policy Analysis for over 2 years prior to applying for the career position. Based on her resume, in the Biologist position the eventual selectee carried out a range of responsibilities related to scientific program reporting, strategic planning, and program evaluation. On February 8, 2005, HHS advertised a career position of Health Scientist Administrator in the Office of Policy Analysis. The position description listed responsibility for scientific planning, program reporting, and the development of materials on biodefense research efforts. Eleven applicants applied for the position, including a 5-point compensable veteran and a 10-point compensable veteran. Applicants were assigned scores based on their online responses and a computer-based scoring system. Agencies are required to add points to qualifying veterans' scores during the rating and ranking process. Based on the applicant listing HHS provided, applicant scores were not adjusted for veteran's preference. Had the adjustment been made, both the 5-point and 10-point compensable veterans would have rated higher and been listed ahead of the eventual selectee on the certificate of eligibles. Because veteran's preference points were not assigned, three individuals (the eventual selectee, another nonveteran, and the 10-point compensable veteran) were referred to the selecting official, and the eventual selectee had the highest score on the certificate of eligibles.
Selection was made on March 1, 2005. Based on the documents provided to us, it appeared that HHS did not apply veteran's preference points for two applicants. Had HHS assigned veteran's preference points, both of the veterans would have ranked higher than the eventual selectee on the final rating sheet, and the agency would have had to file written reasons for passing them over. Since this conversion did not occur in the presidential election review period and did not involve a Schedule C or Noncareer SES appointee, it was not subject to OPM's pre-appointment review.

Prior to the conversion, the eventual selectee served for 6 months in an SES Limited Term position. According to the position description, as Director he was responsible for designing HHS's implementation of the Competitive Sourcing Initiative and serving as the Acting Director of the Administrative Operations Service (AOS). In his resume, the eventual selectee emphasized his role as Acting Director of AOS, stating that he was responsible for providing HHS and other federal customers nationwide administrative and technical services in areas such as (1) building operations, surplus real property, leasing, security, property management, warehousing, logistics and management services; (2) printing, duplicating and typesetting; (3) operation of reference libraries; (4) mail distribution and handling; (5) claims service for Public Health Service components nationwide under specific statutory authorities; (6) acquisition service; (7) pharmaceutical, medical, dental supplies to federal agencies and other related nonfederal customers; (8) technical graphics and photography services; and (9) a wide range of technology and telecommunications services. On May 24, 2002, HHS advertised the SES Director of AOS position, open to qualified federal employees. In both the position description and the vacancy announcement, one of the mandatory professional/managerial qualifications specified for the position was "experience in delivering and managing a wide range of administrative support services to a diverse, complex, and large customer base requiring such services such as (1) building operations, surplus real property, leasing, security, property management, warehousing, logistics and management services; (2) printing, duplicating and typesetting; (3) operation of reference libraries; (4) mail distribution and handling; (5) claims service for PHS components nationwide under specific statutory authorities; (6) acquisition service; (7) pharmaceutical, medical, dental supplies to federal agencies and other related nonfederal customers; (8) technical graphics and photography services; and (9) a wide range of technology and telecommunications services." An agency panel rated the eventual selectee and two other candidates as highly qualified for the position, then referred all three to the selecting official. Four other candidates were also referred to the selecting official as qualified applicants based on their SES status. The selecting official, who was also the eventual selectee's supervisor at that time, selected him for the position on July 8, 2002. Although an internal agency memo directed the selecting official to provide a brief statement of the rationale for hiring individuals, no such documentation was in the file we were provided on this selection. Based on the documents, it appears the agency may have tailored the qualifications for the career position for the purpose of improving an individual's prospects for employment.
Further, one of the mandatory technical qualifications specified for the new position appeared to be tailored to the selectee's previous experience. Such a situation could both unnecessarily limit the applicant pool and provide an individual an unfair advantage. Including such a specific qualification without prior consultation with OPM would also violate qualifications standards regulations for General SES positions. There was no evidence within the file to suggest the agency consulted with OPM prior to setting the qualifications standards. The selectee's executive qualifications were approved by an OPM-administered QRB on July 30, 2002.

The eventual selectee was appointed Program Manager of the E-Grants Initiative, an SES Limited Term position, on March 24, 2002, and held the position for approximately 2 years. Prior to this appointment, the eventual selectee had been a career civil servant at the National Institutes of Health in various positions from 1981 through 1999, prior to leaving to work in the private sector and then being rehired as an excepted service consultant to HHS in 2001. On February 10, 2002, the eventual selectee was reinstated to a career position at HHS. According to his application, as Program Manager of the E-Grants Initiative, the eventual selectee led the management and development of Grants.gov and built consensus through outreach to grantors and grantees as well as making presentations to OMB, HHS leadership, and the Federal Chief Financial Officers (CFO) Council. Beginning in June 2003, the role of acting director for the newly created Office of Grants Management and Policy was added to the eventual selectee's responsibilities in the SES Limited Term position. On October 10, 2003, HHS created, and then 3 weeks later advertised, the director position open to all sources. According to the vacancy announcement, the duties and responsibilities within the Immediate Office of the Director were to support government electronic grants, including outreach to grantors and grantees, and interfacing with OMB, the Federal CFO Council, and HHS leadership. Additionally, the director would also be responsible for the Division of Grants Policy and the Division of Grants Oversight and Review. As a mandatory professional/technical qualification for this position, applicants were required to possess progressively responsible management experience in the E-Grants Initiative that demonstrated in-depth knowledge of outreach efforts and interfaces with OMB, the Federal CFO Council, and HHS leadership on Grants.gov, and that included at least one year of specialized experience in the E-Grants Initiative at the GS-15 or equivalent level. The merit staffing file we were provided included two printed rosters listing the seven applicants. The first roster listed five candidates as qualified, with handwritten annotations moving four of these five toward the not-qualified category. The second roster listed only the eventual selectee as qualified, and only he was referred to the rating panel. Since there were no documents included for the other applicants within the file, a review of the referral process to the rating panel was not possible. The selecting official, who was the selectee's supervisor at the time, chose him for the position on January 30, 2004. Based on the documents, it appears the agency may have tailored the qualifications for the career position for the purpose of improving an individual's prospects for employment.
First, the duties as defined closely matched those performed by the eventual selectee in the Limited Term SES Program Manager position. Further, one of the mandatory professional/technical qualifications appeared to be tailored to the selectee's experience. In addition, the requirement of in-depth knowledge of the E-Grants program likely excluded any nonfederal applicants. Including such a specific qualification without prior consultation with OPM would violate qualifications standards regulations for General SES positions. There was no evidence within the file to suggest the agency consulted with OPM prior to setting the qualifications standards. Defining the scope or manner of competition or the requirements for any position for the purpose of improving or injuring the prospects of any particular person for employment would violate federal law. The selectee's executive qualifications were approved by an OPM-administered QRB on April 2, 2004.

The eventual selectee served as a Schedule C Confidential Assistant to the Executive Secretary for a year and a half prior to applying for the career position. Before this, the eventual selectee worked for the Secretary for almost 4 years as a policy analyst when the Secretary was Governor of Wisconsin. According to his resume, as the Confidential Assistant to the Executive Secretary, the eventual selectee (1) directly supported the Secretary of HHS by coordinating briefing materials, (2) provided relevant information regarding meetings, briefings, and speaking engagements, (3) reviewed and produced documents requiring the Secretary's approval, (4) developed the Secretary's daily calendar, (5) facilitated meetings with the administrative staff of the office, and (6) analyzed policy decisions for their potential impact on the Department. On June 4, 2002, HHS posted a vacancy announcement for the re-established career position of Policy Coordinator in the Office of the Secretary. According to the position description and vacancy announcement, the individual selected for this position would (1) review documents requiring the Secretary's approval, (2) be a technical source of information, (3) represent the Executive Secretary to other government officials, (4) develop background papers to brief the Secretary and other department officials, and (5) coordinate and facilitate meetings. Of the 53 applicants, 5 were referred to the selecting official after being assigned numerical scores, and 4 were referred as eligible based on their career status. The eventual selectee received the highest numerical score; however, the file contained no documentation to show how points were assigned or who rated the applicants. The selecting official, who was also the selectee's supervisor at the time, made the selection on August 1, 2002. The duties and responsibilities of the Schedule C and career positions overlap substantially, and the two positions are located in the same office with a similar supervisory structure, giving the appearance that the agency may have tailored the qualifications for the position for the purpose of improving an individual's prospects for employment. Although the selectee received the highest score, there is no documentation explaining how the score was assigned or who assigned it. Since this conversion did not occur in the presidential election review period, it was not subject to OPM's pre-appointment review.
Prior to appointment to the SES career position as Director, Human Resources Center in Atlanta, the eventual selectee served in a similar position for a year under a Limited Term appointment. Before this, she had held career positions for 23 years. At the time of the SES Limited Term appointment on May 18, 2003, the eventual selectee had achieved a position within HHS at the GS-15 level. HHS posted a vacancy announcement on USA Jobs for an SES position as Director, Human Resources Center in Atlanta; the director would be responsible for providing a variety of human resources services to the Centers for Disease Control and Prevention in Atlanta. The announcement was open from March 22, 2004, to April 2, 2004, a period of 12 days. Eleven applicants applied, and of these, three were certified as eligible for consideration. Selection was made on April 14, 2004.

HHS posted the position on USA Jobs for only 12 days. This appears to be a violation of OPM's regulations, which require SES job listings to be posted for a minimum of 14 days. The selectee's executive qualifications were approved by an OPM-administered QRB on May 21, 2004.

Prior to appointment to the SES career position as Director, Division of Financial Operations, the eventual selectee served as a Project Manager (with Acting Director duties for the Division of Financial Operations) for over a year under a Limited Term SES appointment. At the time of the conversion, the eventual selectee had been employed with HHS for almost 32 years, 30 of which were in career positions. HHS posted a vacancy announcement on USA Jobs for the director position to lead the operation of HHS' Debt Collection Center and other core financial accounting systems managed by the division. The announcement opening date was August 13, 2002, and it closed on August 19, 2002, a period of 7 days. Four applicants applied, and of these, three were certified as eligible for consideration. Selection was made on October 3, 2002.

HHS posted the career SES position on USA Jobs for only 7 days. This appears to be a violation of OPM's regulations, which require SES job listings to be posted for a minimum of 14 days. The selectee's executive qualifications were approved by an OPM-administered QRB on February 21, 2003.

From Schedule C GS-0301-13/02, Staff Assistant, Federal Emergency Management Agency (FEMA)

The eventual selectee served as a Schedule C Staff Assistant at FEMA, initially in the Office of General Counsel (OGC) and then in the Regional Operations Directorate, for 14 months prior to applying for the career position. Although the staff assistant position description provided by the agency outlined a primarily advisory and administrative support role, the eventual selectee's resume listed the following responsibilities as part of the position: (1) conceive and implement new initiatives and projects to facilitate and integrate emergency management programs, and (2) formulate, present, and execute budgets. Before joining the federal government, the eventual selectee had worked for 4 years in the private sector for the former FEMA Director prior to his appointment. Beginning on January 28, 2003, the agency advertised the Program Specialist position for 2 weeks.
The duties listed for this position in the vacancy announcement included, among others, (1) conceive and implement new initiatives and projects to strengthen emergency management programs, (2) be the authoritative resource for coordinating and developing long-term planning for regional offices, (3) allocate resources in accordance with short- and long-range plans, (4) maintain an intimate knowledge of agency policies, programs, and directives, and (5) formulate, present, and execute the budget. Of 39 applicants, only the eventual selectee and a career FEMA employee, with over 10 years' experience at FEMA as an Emergency Management Specialist, were certified as eligible for consideration. Both candidates were rated as best qualified for the position. The agency referred both the career employee and the eventual selectee to the selecting official on separate certificates. On March 3, 2003, the selecting official, who was also the eventual selectee's supervisor at the time, chose her for the position. The selecting official did not provide further justification for the hiring decision.

Although agencies have discretion when hiring among a limited pool of eligible candidates, the agency selected a Schedule C appointee with limited experience over a career employee with over 10 years of relevant emergency management experience. It is unclear why the two candidates received the same rating in the consideration process. Since this conversion did not occur in the presidential election review period, it was not subject to OPM pre-appointment review.

Prior to the appointment to the career excepted service position, the eventual selectee served for less than 1 year in a variety of work details under a Noncareer SES appointment at the Department of Homeland Security (DHS). According to the eventual selectee's resume, in these positions she advised high-ranking agency officials on relevant policy issues, guided department-wide efforts to develop and implement policy initiatives, coordinated public outreach, and handled managerial and administrative duties. On March 26, 2004, the agency submitted an official personnel action request to convert the selectee by name to the position of International Programs Coordinator. Four days later, on March 30, a position description was authorized for an International Programs Coordinator, with duties to include advising and assisting the Office of International Affairs in the administration of international policy and planning in Greater Europe. The conversion was made the same day. The agency did not advertise the opening, and there is no evidence suggesting any other candidates were considered for the position.

Based on the documents provided to us, two factors create the appearance that the individual may have been pre-selected for this position: (1) the agency's request to convert the selectee by name prior to authorizing the position description and (2) the lack of an opportunity for other candidates to apply. By law, agencies may not grant unauthorized preference to any particular individual or applicant to improve her prospects in the employment process. This conversion was not subject to pre-appointment review because OPM does not review appointments to the excepted service during the presidential election review period.
Department of Housing and Urban Development (HUD)

Prior to appointment to the career excepted service (non-Schedule C) position, the eventual selectee served for almost 4 years as a Schedule C Special Assistant in the Office of Insured Housing-Multifamily Mortgage Division in the Office of General Counsel. Based on the eventual selectee's resume, duties in that Special Assistant position included providing legal counsel and interpreting existing and proposed multifamily insurance program statutes, regulations, and other legal documents. Beginning on September 9, 2004, the agency advertised the career position at the GS-12, 13, and 14 levels for 2 weeks. The eventual selectee and one field office attorney applied for the position. The field office attorney had over 26 years of experience in the agency, including several years as a Freedom of Information Act (FOIA) officer in a field office. The eventual selectee identified handling of FOIA requests as one element of his experience on his application; however, his accompanying resume made no mention of FOIA experience. Based on agency excepted service hiring protocols, both candidates were considered qualified for the position and were referred to the selecting official. No interviews were conducted prior to the selection. The Deputy Assistant Secretary for Human Resource Management stated that the eventual selectee qualified at the GS-13 level and the field office attorney qualified at the GS-14 level. They were referred to the selecting official separately, on a GS-13 selection roster and a GS-14 selection roster, respectively.

Although this conversion was compliant with the agency's excepted service hiring procedures, our work raised questions concerning the selection. With the written applications as the primary source of information considered, the agency selected a Schedule C appointee with minimal experience in FOIA issues, a key requirement of the position, over an attorney with 26 years of legal experience at the agency, including several years as a FOIA officer. This conversion was not subject to pre-appointment review because OPM does not review appointments to the excepted service during its presidential election review period.

Department of Justice (DOJ)

Prior to his appointment to the career excepted service (non-Schedule C) position, the eventual selectee served for almost 2 years as a Schedule C Senior Advisor to the Assistant Attorney General in the Tax Division. According to his resume, the eventual selectee's experience included (1) providing written and oral legal counsel concerning ongoing tax litigation as a Senior Advisor to the Assistant Attorney General, (2) serving for approximately 6 months in a temporary detail as an appellate attorney in the Office of Immigration Litigation, and (3) working for over 25 years as a civil trial attorney in a private firm. His resume does not list any experience in immigration litigation other than the approximately 6-month detail in the Office of Immigration Litigation. According to the Immigration Judge's position description, the eventual selectee would preside at quasi-judicial hearings to determine the issues arising in exclusion, deportation, and related proceedings.
The special knowledge and abilities required for this position include, among others: "a thorough knowledge of the numerous immigration and nationality laws, both past and present, and the regulations and rules of the Immigration Naturalization Service issued thereunder," "expert knowledge of judicial practice," and "a proven ability to assure a fair hearing." On April 4, 2004, the agency appointed the selectee to the Immigration Judge position. In an internal memo requesting a higher pay rate for the appointment, a DOJ Chief Immigration Judge stated that the selectee was eminently well-qualified for the position, but did not cite any immigration litigation experience beyond the selectee's temporary detail as justification for this assessment.

Converting a Schedule C appointee with less than 6 months of immigration law experience to an Immigration Judge position raises questions about the fairness of the conversion. This conversion was not subject to pre-appointment review because OPM does not include excepted service positions in its review of appointments made during the presidential election review period.

From Schedule C Appointment GS-1035-15/01, Deputy Director, Office of Public Affairs
To Excepted Service Appointment, Title 28 U.S.C. 536, GS-1035-15/02, Public Affairs Specialist, Federal Bureau of Investigation (FBI)

Prior to appointment to the career excepted service (non-Schedule C) position, from February 2001 until December 2002, the eventual selectee served as a Schedule C Deputy Director in the Office of Public Affairs. According to her application for the FBI position, her duties as Deputy Director included, among others, serving as a spokesperson for the department and the Attorney General and preparing the Attorney General and other senior officials for press conferences. According to the position description, the Public Affairs Specialist for the FBI would serve as a liaison between the media and senior agency officials, as well as a public relations advisor to the agency. DOJ created the position on December 6, 2002. The selectee was appointed 3 days later. DOJ did not advertise the opening, and there is no evidence suggesting any other candidates were considered for the position.

As part of the excepted service, the FBI is required to apply veteran's preference to its appointments. While this selection was consistent with the FBI's current Merit Promotion and Placement Plan, because there was no opportunity for other candidates to apply, the agency apparently did not apply the statutory requirements of veteran's preference to this hiring decision. This conversion was not subject to pre-appointment review because OPM does not review appointments to the excepted service.

Consumer Product Safety Commission (CPSC)

Prior to applying for the career position, the eventual selectee had served for over a year as a Schedule C Special Assistant in the office of the Chairman. According to his resume, as a Special Assistant, the eventual selectee directly supported the Chairman by (1) providing senior-level policy advice on current and developing issues, (2) preparing written materials for presentation at external speaking engagements, (3) acting as a liaison with internal and external parties, and (4) drafting documents as needed. On November 14, 2003, CPSC posted a vacancy announcement for the SES career position of Director, Office of International Programs and Intergovernmental Affairs.
Based on the vacancy announcement, the director would oversee and coordinate the Commission's international and intergovernmental efforts related to product safety standards. The desired qualifications listed in the announcement closely matched the eventual selectee's previous experience in the private sector and as an elected official in the New Mexico State Legislature, as listed on his resume. Twenty-four applicants applied, and of these, nine were considered qualified for the position and assigned numerical scores. The eventual selectee received the highest numerical score, and the selecting official selected him for the career position on December 19, 2003.

Because this is an appointment to the career SES, CPSC submitted the selectee's case to OPM for approval on February 13, 2004. An OPM-administered QRB denied the agency's request, citing weakness in three of the five Executive Core Qualifications (ECQ). The QRB also noted that the selectee's lack of managerial experience would be a handicap to successful performance in the SES. Based on comments from CPSC's Executive Director, the selectee revised his ECQ statement by citing different examples from his experiences. Although the selectee refers to his "career as a senior manager and leader," the only concrete examples he provided of his experiences in the ECQs relate to his 15-month position at the CPSC or his two terms as an elected official in the New Mexico State Legislature. However, in describing his specific role and duties for each of these positions on his resume, he does not mention managerial or supervisory duties for either. Using the revised ECQs, CPSC resubmitted its request. OPM pointed out that the resubmission was provided to a different QRB, which was not involved in or familiar with the initial QRB's concerns or decision. This QRB approved the appointment on April 2, 2004.

Although the selectee modified his second submission to OPM, the primary basis for the selectee's qualifications remained his experience from the 15-month appointment at the CPSC and two terms as a State Representative. It is unclear whether or how this revised submission addressed the concerns raised by the initial QRB regarding the candidate not meeting the "demonstrated executive experience" required for SES positions by 5 U.S.C. 3393, or the "well-honed executive skills and broad perspective of government" recommended by OPM guidance on the SES.

Because this is an appointment to the career SES, CPSC submitted the selectee's case to OPM for approval. On February 13, 2004, an OPM-administered QRB denied the agency's request due to weakness in three of the five ECQs. After CPSC resubmitted its request, a different QRB approved the appointment on April 2, 2004.

Environmental Protection Agency (EPA)

Prior to applying for the career position, the eventual selectee served for over 2 years in a variety of work details, initially as a Schedule C appointee and then in an administratively determined position in the Immediate Office of the EPA Administrator. Before this, the eventual selectee had worked for 3 years as a Special Assistant for the Administrator when she was Governor of New Jersey and had also worked on state-level political campaigns.
In addition to providing administrative support and policy advice to the EPA Administrator, she listed the following experiences on her resume from an 11-month detail to the Facilities Management and Services Division: (1) managed the daily operation of Personnel Security Staff, (2) maintained working knowledge of Executive Orders and OPM guidelines pertaining to national security positions, (3) established and implemented new internal processes and policies regarding personnel security, and (4) worked with other governmental agencies to facilitate personnel security clearances.

On April 25, 2003, EPA advertised the new career position of Management Specialist in the Office of Administration and Resources Management, 2 weeks prior to the position's creation. According to the vacancy announcement, the individual in this position would (1) serve as an advisor to the Director, (2) perform special projects such as planning for the review, development, and execution of personnel and physical security plans and programs, (3) maintain awareness of the major policy and program initiatives relevant to Security Staff, and (4) work with other federal agencies and the private sector regarding the personnel and physical security plans and programs.

Thirty-five individuals applied for the position, including at least seven with veteran's preference. Individuals were assigned scores based on their online responses and a computer-based scoring system. By law, agencies are required to assign additional points to qualifying veterans' scores during the rating and ranking process. Based on the applicant listing EPA provided to us, the scores were not adjusted for veteran's preference. Had the veteran's preference points been applied, five of the seven individuals who had veteran's preference would have ranked higher than the selectee. Even without the points applied, the eventual selectee shared the highest score with a 5-point compensable veteran. Despite this tied score, the agency referred only the eventual selectee to the selecting official. On May 18, 2003, the agency converted the selectee to the career position. According to EPA, the selectee had 11 months of relevant experience for the career position at the time of her selection.

Based on the documents, EPA did not apply veteran's preference points for the seven qualifying applicants. Other factors also suggest there may have been preselection. Although, even in the absence of the added veteran's preference points, two applicants (a veteran and the eventual selectee) shared the same examination score, the agency referred only the selectee to the selecting official. Also, the close correlation between the duties listed by the selectee for her most recent detail and those of the newly created career position may have given the selectee an advantage in the rating process. Finally, the position was created and filled immediately prior to the former Administrator's resignation. This conversion was not subject to OPM's pre-appointment review because it occurred before the presidential election review period.

Small Business Administration (SBA)

The eventual selectee served as a Schedule C Special Assistant in the Office of the Administrator for over 3 years prior to applying for the career position. Before this, the eventual selectee had worked in Los Angeles for almost 5 years at the private business owned by the Administrator prior to his appointment as SBA Administrator.
According to her application, as the Special Assistant to the Administrator, the eventual selectee managed the Administrator's calendar, coordinated meetings with other officials, answered correspondence, and performed special projects of a sensitive nature, among other duties. On August 11, 2004, SBA posted a vacancy announcement for the newly created career position of Administrative Resources Coordinator in the Los Angeles Regional Office. According to the vacancy announcement, the coordinator would (1) interact with senior-level officials within the SBA, (2) perform special projects for the Regional Administrator, (3) manage budget allocations, (4) maintain the Regional Office Executive Scorecard, and (5) interact with diverse groups in a variety of settings. There were six applicants for the position, and of these, three were considered eligible. The eventual selectee received the highest rating of all the candidates, and the selecting official chose her for the position on September 10, 2004.

Because conversions of Schedule C appointees to career positions during OPM's presidential election review period are subject to OPM pre-appointment review, the agency submitted this conversion to OPM for approval on October 26, 2004. On February 22, 2005, OPM returned the agency request without formal action, citing the fact that the period of pre-appointment review had passed. However, OPM raised concerns to SBA that the appointment did not adhere to merit system principles because the quality ranking factors in the vacancy announcement and crediting plan did not appear to be supported by the position description. In OPM's view, this discrepancy may have limited the applicant pool and interfered with open and fair competition. OPM encouraged SBA to review the hiring process and ensure it met merit system principles before appointing the selectee to the position. On February 25, 2005, the Director of SBA's Denver Office of Human Capital Management advised the Chief Human Capital Officer that he had reviewed the recruitment file and that the appointment of the selectee met merit system principles because it was made from a legally constituted certificate of eligibles. Based on this review, the agency appointed the selectee to the Administrative Resources Coordinator position on March 6, 2005. The Chief Human Capital Officer at SBA told us that SBA was in the process of moving one of its regional offices from San Francisco to Los Angeles and that this position was created as part of the standard configuration for a regional office.

As OPM suggested, several factors raise questions about the fairness of this conversion. At the time of selection, the selectee had been working for the SBA Administrator, both in the private sector and at SBA, for about 8 years. Additionally, the position, which was established less than 6 months before the 2004 election, was located in the eventual selectee's hometown. SBA did respond to OPM's observation, but it is not clear from the documents whether SBA adequately addressed OPM's concerns, and there was no documentation in the file indicating OPM contacted SBA further concerning this appointment or the agency's response.

Because the agency selected a Schedule C appointee for a career position during the presidential election review period, it submitted the conversion to OPM for approval on October 26, 2004.
On February 22, 2005, OPM returned the agency request without action, citing the fact that the period of pre-appointment review had passed, but expressed concern with the merit staffing process.

In addition to the contact named above, Sarah Veale, Assistant Director; Carolyn L. Samuels; Lisa Van Arsdale; and Jeffrey McDermott made key contributions to this report.
A federal employee conversion occurs whenever an individual changes from one personnel status or service to another without a break in federal government service of more than 3 days. This report focuses on conversions of individuals from noncareer to career positions. Federal agencies must use appropriate authorities and follow proper procedures in making these conversions. GAO was asked to determine for departments and selected agencies (1) the number and characteristics of all noncareer to career conversions occurring during the period from May 1, 2001, through April 30, 2005, and (2) whether appropriate authorities were used and proper procedures were followed in making these conversions at the GS-12 level and above.

Twenty-three of the 41 departments and agencies selected for review reported converting 144 individuals from noncareer to career positions from May 1, 2001, through April 30, 2005. The other 18 departments and agencies reported making no conversions during this period. Four agencies accounted for 95, or 66 percent, of the 144 reported conversions: the Departments of Health and Human Services (36), Justice (23), Defense (21), and Treasury (15). Of the 144 reported conversions, almost two-thirds were from Limited Term Senior Executive Service (SES) positions (47) and Schedule C positions (46). Limited Term SES appointments may be made for up to 36 months and can include federal employees who previously held career positions. Schedule C appointments are generally noncompetitive and are for positions graded GS-15 and below that involve determining policy or that require a close confidential relationship with key agency officials. Of these 144 individuals, 64 were converted to career SES positions, 47 to career competitive service positions, and 33 to career excepted service (non-Schedule C) positions.

Agencies used appropriate authorities and followed proper procedures in making the majority (93) of the 130 conversions reported at the GS-12 level or higher. However, for 37 of these conversions, it appears that agencies did not follow proper procedures or did not provide enough information for us to make an assessment. For 18 of the 37 conversions, it appears that agencies did not follow proper procedures. Some of the apparent improper procedures included selecting former noncareer appointees who appeared to have limited qualifications and experience for career positions, creating career positions specifically for particular individuals, and failing to apply veteran's preference in the selection process. Seven of the 18 conversions were subject to OPM review and approval: 2 because they fell within the presidential election pre-appointment review period as prescribed by OPM and 5 because they were to SES-level positions. For the remaining 19 conversions, agencies did not provide enough information for GAO to fully assess the process used by the agency in making the conversion. This was largely attributable to the types of appointments involved. Sixteen of these 19 conversions were to career excepted service (non-Schedule C) positions at the Department of Justice. For appointments to excepted service positions, OPM does not require agencies to follow OPM's competitive hiring provisions or to maintain records of the rating, ranking, and selection process, as it requires for competitive service appointments (although most of these conversions are subject to the merit system principles).
These unique hiring procedures and limited documentation requirements for excepted service positions resulted in GAO having insufficient information to reconstruct the Department of Justice's decision-making process to convert these individuals. For the remaining three cases, the Department of Health and Human Services could not locate certain files.
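The EPA case described above turns on how veteran's preference points change an applicant ranking. As a rough illustration of that mechanic, the following minimal Python sketch applies preference points (typically 5 or 10) to passing scores before ranking candidates. All applicant names, scores, and the passing threshold are hypothetical, and actual competitive examining rules include additional steps (such as referral rules) not modeled here.

```python
# Minimal sketch of the rating-and-ranking step discussed in the EPA case:
# preference-eligible veterans receive extra points added to their passing
# examination scores before candidates are ranked. All figures are
# hypothetical and the rule is simplified for illustration.

PASSING_SCORE = 70  # assumed passing threshold for this illustration

def rank_applicants(applicants):
    """Add preference points to passing scores, then rank highest first."""
    ranked = [
        (name, score + preference_points)
        for name, score, preference_points in applicants
        if score >= PASSING_SCORE
    ]
    return sorted(ranked, key=lambda entry: entry[1], reverse=True)

if __name__ == "__main__":
    # (name, earned score, preference points: 0, 5, or 10)
    applicants = [
        ("Applicant A (no preference)", 95, 0),
        ("Applicant B (5-point veteran)", 95, 5),
        ("Applicant C (10-point veteran)", 88, 10),
        ("Applicant D (no preference)", 90, 0),
    ]
    for name, adjusted in rank_applicants(applicants):
        print(f"{adjusted:5.1f}  {name}")
```

In this hypothetical, Applicants B and C outrank Applicant A once the points are applied; omitting the points, as the report found happened in the EPA case, would instead leave the nonveteran tied for first.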
The X-33 and X-34 programs were part of an effort that began in 1994—known as the Reusable Launch Vehicle Technology/Demonstrator Program (Reusable Launch Vehicle Program)—to pave the way to full-scale, commercially developed, reusable launch vehicles reaching orbit in one stage. In embarking on the Reusable Launch Vehicle Program, NASA sought to significantly reduce the cost of developing, producing, and operating launch vehicles. NASA's goal was to reduce payload launch costs from $10,000 per pound on the space shuttle to $1,000 per pound. It planned to do so, in part, by finding "new ways of doing business," such as using innovative design methods and streamlined acquisition procedures and creating industry-led partnerships with cost sharing to manage the development of advanced technology demonstration vehicles. The vehicles were seen as the "stepping stones" in what NASA described as an incremental flight demonstration program. The strategy was to force technologies from the laboratory into the operating environment.

The X-34 Project started in 1995 as a cooperative agreement between NASA and Orbital Sciences Corporation (Orbital). The project was to demonstrate streamlined management and procurement, industry cost sharing and lead management, and the economics of reusability. However, the industry team withdrew from the agreement in less than 1 year for a number of reasons, including changes in the projected profitability of the venture. NASA subsequently started a new X-34 program with a smaller vehicle design. It was intended only as a flight demonstration vehicle to test some of the key features of reusable launch vehicle operations, such as quick turnaround times between launches. Under the new program, NASA again selected Orbital as its contractor in August 1996, awarding it a fixed-price, $49.5 million contract. Under the new contract, Orbital was given lead responsibility for vehicle design, fabrication, integration, and initial flight testing, including powered flight, of the X-34 test vehicle. The contract also provided for two options, which were later exercised, totaling about $17 million for 25 additional experimental flights and, according to a project official, other tasks, including defining how the flight tests would be undertaken. Under the new effort, NASA's Marshall Space Flight Center was to develop the engine for the X-34 as part of its Low Cost Booster Technology Project. The initial budget for this development was about $18.9 million.

In July 1996, NASA and Lockheed Martin Corporation and its industry partners entered into a cooperative agreement for the design, development, and flight-testing of the X-33. The X-33 was to be an unmanned technology demonstrator. It would take off vertically like a rocket, reaching an altitude of up to 60 miles and speeds of about Mach 13 (13 times the speed of sound), and land horizontally like an airplane. The X-33 would flight-test a range of technologies needed for future launch vehicles, such as thermal protection systems, advanced engine design, and lightweight fuel tanks made of composite materials. The vehicle would not actually achieve orbit, but based on the results of demonstrating the new technologies, NASA envisioned being in a better position to make a decision on the feasibility and affordability of building a full-scale system. Under the initial terms of the cooperative agreement, NASA's contribution was fixed at $912.4 million and its industry partners' initial contribution was $211.6 million.
In view of the potential commercial viability of the launch vehicle and its technologies, the industry partners also agreed to finance any additional costs. During a test in November 1999, one of the fuel tanks failed due to separation of the composite surface. Following the investigation, NASA and Lockheed Martin agreed to replace the composite tanks with aluminum tanks. In February 2001, NASA announced it would not provide any additional funding for the X-33 or X-34 programs under its new Space Launch Initiative.

The Space Launch Initiative is intended to be a more comprehensive, long-range plan to reduce high payload launch costs. NASA's goal is still to reduce payload launch costs to $1,000 per pound to low Earth orbit, but it is not limited to single-stage-to-orbit concepts. Specifically, the objective of the initiative's Second Generation Reusable Launch Vehicle Program (2nd Generation Program) is to substantially reduce the technical, programmatic, and business risks associated with developing reusable space transportation systems that are safe, reliable, and affordable. NASA has budgeted about $900 million for the Space Launch Initiative's initial effort and, in May 2001, it awarded initial contracts to 22 large and small companies for space transportation system design requirements, technology risk reduction, and flight demonstration. In subsequent procurements in mid-fiscal year 2003, NASA plans to select at least two competing reusable launch system designs. The following 2.5 to 3.5 years (through fiscal year 2005 or 2006) will be spent finalizing the preliminary designs of the selected space transportation systems and maturing the specific technologies associated with those high-risk, high-priority items needed to develop the selected launch systems.

Undertaking ambitious, technically challenging efforts like the X-33 and X-34 programs—which involve multiple contractors and technologies that have yet to be developed and proven—requires careful oversight and management. Importantly, accurate and reliable cost estimates need to be developed, technical and program risks need to be anticipated and mitigated, sound configuration controls need to be in place, and performance needs to be closely monitored. Such undertakings also require a high level of communication and coordination. Not carefully implementing such project management tools and activities is a recipe for failure. Without realistically estimating costs and risks, and providing the reserves needed to mitigate those risks, management may not be in a position to effectively deal with the technical problems that cutting-edge projects invariably face.

In fact, we found that NASA did not successfully implement and adhere to a number of critical project management tools and activities. Specifically:

NASA did not develop realistic cost estimates in the early stages of the X-33 program. From its inception, NASA officials considered the program to be high risk, with a success-oriented schedule that did not allow for major delays. Nevertheless, in September 1999, NASA's Office of the Inspector General (OIG) reported that NASA's cost estimate did not include a risk analysis to quantify technical and schedule uncertainties. Instead, the cost estimate assumed that needed technology would be available on schedule and as planned. According to the OIG, a risk analysis would have alerted NASA decision-makers to the probability of cost overruns in the program.
Since NASA's contribution to the program was fixed—with Lockheed Martin and its industry partners responsible for costs exceeding the initial $1.1 billion—X-33 program management concluded that there was no risk of additional government financial contributions due to cost overruns. They also believed that the projected growth in the launch market and the advantages of a commercial reusable launch vehicle would provide the necessary incentive to sustain industry contributions.

NASA did not prepare risk management plans for both the X-33 and X-34 programs until several years after the projects were implemented. Risk management plans identify, assess, and document risks associated with cost, resource, schedule, and technical aspects of a project and determine the procedures that will be used to manage those risks. In doing so, they help ensure that a system will meet performance requirements and be delivered on schedule and within budget. Such plans also establish responsibility for key tasks and deliverables and provide a yardstick by which to measure the progress of the effort. A risk management plan for the X-34 was not developed until the program was restructured in June 2000. Although Lockheed Martin developed a plan to manage technical risks as part of its 1996 cooperative agreement for the X-33, NASA did not develop its own risk management plan for unique NASA risks until February 2000. The NASA Administrator and the NASA Advisory Council have both commented on the need for risk plans when NASA uses partnering arrangements such as a cooperative agreement. Furthermore, we found that NASA's risk mitigation plan for the X-33 program provided no mechanisms for ensuring the completion of the program if significant cost growth occurred and/or the business case motivating industry participation weakened substantially.

According to the OIG, NASA did not complete a configuration management plan for the X-33 until May 1998—about 2 years after NASA awarded the cooperative agreement and Lockheed Martin began the design and development of a flight demonstration vehicle. Configuration management plans define the process to be used for defining the functional and physical characteristics of a product and systematically controlling changes in the design. As such, they enable organizations to establish and maintain the integrity of a product throughout its lifecycle and prevent the production and use of inconsistent product versions. By the time the plan was implemented, hardware for the demonstration vehicle was already being fabricated.

Communications and coordination were not effectively facilitated. In a report following the failure of the X-33's composite fuel tank, the investigation team reported that the design of the tank required high levels of communication, and that such communication did not occur in this case. A NASA official told us that some NASA and Lockheed personnel, who had experience with composite materials and the phenomena identified as one of the probable causes for the tank's failure, expressed concerns about the tank design. However, because of the industry-led nature of the cooperative agreement, Lockheed Martin was not required to react to such concerns and did not request additional assistance from NASA.

The Government Performance and Results Act of 1993 requires federal agencies to prepare annual performance plans to establish measurable objectives and performance targets for major programs.
Doing so enables agencies to gauge the progress of programs like the X-33 and X-34 and, in turn, to take quick action when performance goals are not being met. For example, we reported in August 1999 that NASA's Fiscal Year 2000 Performance Plan did not include performance targets that established a clear path leading to a reusable launch vehicle, and we recommended that such targets be established.

Because it did not rely on these important project management tools up front, NASA encountered numerous problems on both the X-33 and X-34 programs. Compounding these difficulties was a decrease in the projected commercial launch market, which in turn lessened the incentive of NASA's X-33 industry partners to continue their investments. In particular, technical problems in developing the X-33's composite fuel tanks, aerospike engines, heat shield, and avionics system resulted in significant schedule delays and cost overruns. After two program reviews in 1998 and 1999, the industry partners added a total of $145.6 million to the cooperative agreement to pay for cost overruns and establish a reserve to deal with future technical problems and schedule delays. However, NASA officials stated that they did not independently develop their own cost estimates for these program events to determine whether the additional funds provided by industry would be sufficient to complete the program. Also, these technical problems resulted in the planned first flight being delayed until October 2003, about 4.5 years after the original March 1999 first flight date.

After the composite fuel tank failed during testing in November 1999, according to NASA officials, Lockheed Martin opted not to go forward with the X-33 Program without additional NASA financial support. Lockheed Martin initially proposed adding $95 million of its own funds to develop a new aluminum tank for the hydrogen fuel, but also requested about $200 million from NASA to help complete the program. Such contributions would have increased the value of the cooperative agreement to about $1.6 billion, or about 45 percent (about $500 million) more than the $1.1 billion initial cooperative agreement funding. NASA did not have the reserves available to cover such an increase. The agency did, however, allow Lockheed Martin to compete in its 2nd Generation Program solicitation for the additional funds Lockheed Martin believed it needed to complete the program.

Similarly, NASA started the X-34 Project, and the related NASA engine development project, with limited government funding, an accelerated development schedule, and insufficient reserves to reduce development risks and ensure a successful test program. Based on a NASA X-34 restructure plan in June 2000, we estimate that NASA's total funding requirements for the X-34 would have increased to about $348 million—a 307-percent ($263 million) increase from the estimated $86 million budgeted for the vehicle and engine development projects in 1996. Also, since 1996, the projected first powered flight had slipped about 4 years, from September 1998 to October 2002, due to the cumulative effect of added risk mitigation tasks, vehicle and engine development problems, and testing delays. Most of the cost increase (about $213 million) was for NASA-directed risk mitigation tasks initiated after both projects started. For example, in response to several project technical reviews and internal assessments of other NASA programs, the agency developed a restructure plan for the X-34 project in June 2000.
This plan included consolidating the vehicle and engine projects under one NASA manager. The project would be managed with the NASA project manager having final decision-making authority; Orbital would be relegated to a more traditional subordinate contractor role. Under the plan, the contract with Orbital would also be rescoped to include only unpowered flights; Orbital would have to compete for 2nd Generation Program funding for all the powered flight tests. The plan's additional risk mitigation activities would have increased the X-34 project's funding requirements by an additional $139 million, which included about $45 million for additional engine testing and hardware, $33 million for an avionics redesign, $42 million for additional project management support and personnel, and $18 million to create a contingency reserve for future risk mitigation efforts.

NASA is revising its acquisition and management approach for the 2nd Generation Program. Projects funded under the program will be NASA-led rather than industry-led. NASA also plans to increase its level of insight into the program's projects, for example, by providing more formal reviews and requiring varying levels of project documentation from contractors depending on the risk involved and the contract value. NASA also required that all proposals submitted in response to its research announcement be accompanied by certifiable cost and pricing data. Finally, NASA discouraged the use of cooperative agreements, since these agreements did not prove to be effective contract vehicles for research and development efforts where large investments are required.

While it is too early to tell if the agency's measures aimed at avoiding the problems experienced in the X-33 and X-34 programs will be sufficient, these experiences show that three critical areas need to be addressed. These relate to (1) adequate project funding and cost risk provisions, (2) the effective and efficient coordination and communication required by many individual but related efforts, and (3) periodically revalidating underlying assumptions by measuring progress toward achieving a new safe, affordable space transportation system that meets NASA's requirements.

First, the technical complexity of the 2nd Generation Program requires that NASA develop realistic cost estimates and risk mitigation plans and, accordingly, set aside enough funds to cover the program's many projects. NASA plans to invest substantially more funds in the 2nd Generation Program than it did in the previous Reusable Launch Vehicle Program, and it plans to provide reserves for mitigating program risk. For example, the agency plans to spend about $3.3 billion over 6 years to define system requirements for competing space transportation systems and related risk reduction activities. Most of this amount, about $3.1 billion, is for risk-reduction activities, such as the development of new lightweight composite structures, durable thermal protection systems, and new high-performance engine components. NASA officials told us that an important way they plan to mitigate risk is by ensuring adequate management reserves in the 15- to 20-percent range, or higher if needed. They also acknowledged the need for adequate program cost estimates on which to base reserve requirements. However, we are still concerned about the timely preparation of cost estimates.
The 2nd Generation deputy program manager stated that, based on the scope of the first contracts awarded, the program office planned to update its cost estimate this summer, before NASA conducted a separate, independent technical review and cost estimate in September 2001. Thus, neither of these important analyses was completed prior to the first contract awards. We believe that until the program office completes its own updated cost estimate and NASA conducts an independent cost and technical review, a credible estimate of total program costs and the adequacy of planned reserves will not be available. Also, NASA is still in the process of developing the documentation required for the program, including a risk mitigation plan. NASA policy requires that key program documentation be finalized and approved prior to implementing a program.

Second, NASA will face coordination and communication challenges in managing the 2nd Generation Program. As noted earlier, NASA recently awarded initial contracts for systems engineering and various risk reduction efforts to 22 different contractors. Yet to successfully carry out the program, NASA must, early on, have coordinated insight into all of the space transportation architectures being proposed by these contractors and their related risk reduction activities. Clearly, this will be a significant challenge. The contractors proposing overall architecture designs must be aware of all the related risk reduction development activities affecting their respective designs. It may also prove difficult for contractors proposing space transportation system designs to coordinate work with other contractors without a prime contractor-subcontractor relationship. NASA's own Aerospace Technology Advisory Committee, made up of outside experts, has also expressed serious concerns about the difficulty of integrating these efforts effectively.

The need for improvement in coordination and communications in all NASA programs has been noted in the past and is not unique to the X-33 and X-34 programs. We and other NASA investigative teams have found and noted similar problems with other NASA programs, such as the Propulsion Module for the International Space Station, and with several other projects, including the two failed Mars missions. NASA's Space Launch Initiative Program would benefit from lessons learned from past mishaps. At the request of the House Science Committee, we are undertaking a review of NASA's lessons learned process and procedures. The principal objectives of this review are to determine (1) how NASA captures and disseminates lessons learned and (2) whether NASA is effectively applying lessons learned to current programs and projects. We will report the results of our evaluation in December of this year.

The third challenge is establishing performance measures that can accurately gauge the progress being made by NASA and its contractors. NASA officials told us that they plan to periodically reassess the assumptions underlying key program objectives to ensure that the rationale for developing specific technology applications merits continued support. They also told us that they were in the process of establishing such metrics to measure performance. Ensuring that the results from the 2nd Generation Program will support a future decision to develop reusable launch vehicles also deserves attention in NASA's annual Performance Plan.
The plan would be strengthened by recognizing the importance of clearly defined indicators that demonstrate that NASA is (1) on a path leading to an operational reusable launch vehicle and (2) making progress toward its objective of significantly reducing launch costs and increasing safety and reliability compared with existing systems. Affected NASA Enterprise and Center performance plans would also be strengthened by the development of related metrics.

Mr. Chairman, this concludes my statement. I would be happy to answer any questions you or other Members of the Subcommittee may have.

We interviewed officials at NASA headquarters in Washington, D.C.; NASA's Marshall Space Flight Center, Huntsville, Alabama; and the NASA X-33 program office in Palmdale, California, to (1) determine the primary program management factors that contributed to the difficulties experienced in the X-33 and X-34 programs and (2) identify steps that need to be taken to avoid repeating those problems within the Space Launch Initiative framework. We also talked to representatives of NASA's Independent Program Assessment Office, located at the Langley Research Center, Hampton, Virginia, and the OIG, located at NASA headquarters and Marshall Space Flight Center. At these various locations we obtained and analyzed key program, contractual, and procurement documentation for the X-33, X-34, and 2nd Generation programs. Further, we reviewed reports issued by NASA's OIG and Independent Program Assessment Office pertaining to the management and execution of the X-33 and X-34 programs, as well as NASA Advisory Council minutes regarding NASA's efforts to develop reusable launch vehicles. In addition, we reviewed other NASA internal reports documenting management issues associated with program formulation and implementation of other NASA programs. We also reviewed applicable NASA policy regarding how NASA expects its programs and projects to be implemented and managed. We conducted our review from August 2000 to June 2001 in accordance with generally accepted government auditing standards.
This testimony discusses the National Aeronautics and Space Administration's (NASA) X-33 and X-34 reusable launch vehicle programs. The two programs experienced difficulties achieving their goals primarily because NASA did not develop realistic cost estimates, timely acquisition and risk management plans, and adequate and realistic performance goals. In particular, neither program fully (1) assessed the costs associated with developing new, unproven technologies, (2) provided for the financial reserves needed to deal with technical risks and accommodate normal development delays, (3) developed plans to quantify and mitigate the risks to NASA, or (4) established performance targets showing a clear path leading to an operational reusable launch vehicle. As a result, both programs were terminated. Currently, NASA is in the process of taking steps in the Second Generation Reusable Launch Vehicle Program to help avoid problems like those encountered in the X-33 and X-34 programs.
Traditionally, real estate brokers have offered a full, "bundled" package of services to sellers and buyers, including marketing the seller's home or assisting the buyer's search, holding open houses for sellers and showing homes to buyers, preparing offers and assisting in negotiations, and coordinating the steps to close the transaction. Because real estate transactions are complex and infrequent for most people, many consumers benefit from a broker's specialized knowledge of the process and of local market conditions. Still, some consumers choose to complete real estate transactions without a broker's assistance, including those who sell their properties on their own, or "for-sale-by-owner."

For many years, the industry has used a commission-based pricing model, with sellers paying a percentage of the sales price as a brokerage fee. Brokers acting for sellers typically invite other brokers to cooperate in the sale of the property and offer a portion of the total commission to whoever produces the buyer. Agents involved in the transaction may be required to split their shares of the commission with their brokers. Under this approach, brokers and agents receive compensation only when sales are completed. Common law has generally considered both brokers cooperating in the sale of a home to have a fiduciary responsibility to represent the seller's interests, unless the buyer's broker has specifically agreed to represent the buyer's interests.

In recent years, alternatives to this traditional full-service brokerage model have become more common, although industry analysts and participants told us that they still represent a small share of the overall market. Discount full-service brokerages charge a lower commission than the prevailing local rate but offer a full package of services. Discount limited-service brokerages offer a limited package of services or allow clients to choose from a menu of "unbundled" services and charge reduced fees on a commission or fee-for-service basis.

Most local real estate markets have an MLS that pools information about homes that area brokers have agreed to sell. Participating brokers use an MLS to "list" the homes they have for sale, providing other brokers with detailed information on the properties, including how much of the commission will be shared with the buyer's agent. An MLS serves as a single, convenient source of information that provides maximum exposure for sellers and facilitates the home search for buyers. Each MLS is a private entity with its own membership requirements and operating policies and procedures. According to NAR, approximately 900 MLSs nationwide are affiliated with the trade association, whose more than 1 million members represent approximately 60 percent of all active licensed real estate brokers and agents. NAR has affiliations with 54 state and territorial associations and more than 1,600 local associations. When one of these local associations owns and operates an MLS, this NAR-affiliated MLS is expected to follow NAR's model guidelines for various operational and governance issues, such as membership requirements and rules for members' access to and use of listing information. If a local association or its MLS fails to comply with these guidelines, it can lose important insurance coverage provided through NAR or have its charter membership in NAR revoked. An MLS that is not affiliated with NAR is not bound by these guidelines.
Individual states regulate real estate brokerage, establishing licensing and other requirements for brokers and agents. Of the two categories of state-licensed real estate practitioners, brokers generally manage their own offices, and agents, or salespeople, must work for licensed brokers. States generally require brokers to meet more educational requirements than agents, have more experience, or both. For the purposes of this report, we generally refer to all licensed real estate practitioners as brokers. Generally, a state commission, led by appointees who may have a professional background in real estate, oversees implementation of and compliance with state requirements and may respond to complaints about brokers or agents or take disciplinary action. Federal agencies do not play a day-to-day regulatory role in real estate brokerage, although DOJ and FTC enforce compliance with federal antitrust laws in this market, as they do for many other markets.

Banks may obtain charters at the federal or state level, and their activities are subject to oversight by federal or state regulators. The Office of the Comptroller of the Currency, which is a bureau within the Department of the Treasury (Treasury), charters and regulates national banks. State-chartered banks are overseen by state regulators and, if they have federal deposit insurance, a federal regulator. Many companies that own or control banks are regulated by the Board of Governors of the Federal Reserve System (Federal Reserve) as bank holding companies. Under the 1999 Gramm-Leach-Bliley Act (Pub. L. No. 106-102), bank holding companies may qualify as financial holding companies and thereby engage in a range of financial activities broader than those traditionally permitted for bank holding companies, such as securities and insurance underwriting. Some states permit state-chartered banks to engage in real estate brokerage, but national banks and financial holding companies may not engage in such activity. The Gramm-Leach-Bliley Act permits financial holding companies and financial subsidiaries of national banks to engage in activities that the Federal Reserve and the Treasury deem, through order or regulation, to be financial in nature, incidental to such financial activity, or both complementary to a financial activity and not posing substantial risk to the safety and soundness of depository institutions or the financial system generally. In late 2000, the Federal Reserve and the Treasury released a proposed regulation to allow banking companies to enter real estate brokerage under some circumstances. However, from fiscal years 2003 to 2005, amendments to appropriations laws precluded the Federal Reserve and the Treasury from issuing such regulations. Legislation was introduced in the 109th Congress to prohibit financial holding companies and national banks from engaging in real estate brokerage activities. Legislation was also introduced to permit such activity.

A number of factors can influence the degree of price competition in the real estate brokerage industry. Some economists have observed that brokers typically compete more on nonprice factors, such as service quality, than on price. Evidence from academic literature and industry participants with whom we spoke highlighted several potential causes of this apparent lack of price competition.
These potential causes include broker cooperation, largely through MLSs, which can discourage brokers from competing with one another on price; resistance from traditional full-service brokers to brokers who offer discounted prices or limited services; limited pressure from consumers for lower prices; and state antirebate and minimum service laws and regulations, which some argue may limit pricing and service options for consumers.

The real estate brokerage industry has a number of attributes that economists normally associate with active price competition. Most notably, the industry has a large number of brokerage firms and individual licensed brokers and agents—approximately 98,000 active firms and 1.9 million active brokers and agents in 2004, according to the Association of Real Estate License Law Officials. Although some local markets are dominated by 1 or a few large firms, market share in most localities is divided among many small firms, according to industry analysts. In addition, the industry has no significant barriers to entry, since obtaining a license to engage in real estate brokerage is relatively easy and the capital requirements are relatively small.

(For discussions of nonprice competition among brokers, see J.H. Crockett, “Competition and Efficiency in Transacting: The Case of Residential Real Estate Brokerage,” AREUEA Journal, vol. 10, no. 2 (1982); D.R. Epley and W.E. Banks, “The Pricing of Real Estate Brokerage for Services Actually Offered,” Real Estate Issues, vol. 10, no. 1 (1985); T.J. Miceli, “The Welfare Effects of Non-Price Competition Among Real Estate Brokers,” Journal of the American Real Estate and Urban Economics Association, vol. 20, no. 4 (1992); and G.K. Turnbull, “Real Estate Brokers, Nonprice Competition and the Housing Market,” Real Estate Economics, vol. 24, no. 3 (1996). Our review cites a number of academic studies that date back many years because, in large part, there is not a large body of more recent research on the real estate brokerage industry. However, we found that older research findings in this area have been consistent with more recent studies, as well as with testimonial evidence we obtained in interviews with industry analysts and market participants. For the most part, the economic literature and available data related to real estate commissions cover existing home sales and not new construction.)

Despite these attributes, industry analysts and participants we interviewed said that commission rates have historically clustered around a common rate within most markets, and they generally cited rates of 5 percent to 6 percent as typical now.

Some economists have cited certain advantages to the commission-based model that is common in real estate brokerage, most notably that it provides sellers’ brokers with an incentive to get the seller the highest possible price. Moreover, uniformity in commission rates within a market at a given time does not necessarily indicate a lack of price competition. But some economists have noted that in a competitive marketplace, real estate commission rates could reasonably be expected to vary across markets or over time—that is, to be more sensitive to housing market conditions than has been traditionally observed.

For example, commission rates within a market at a given time do not appear to vary significantly on the basis of the price of the home. Thus, the brokerage fee, in dollar terms, for selling a $300,000 home is typically about three times the fee for selling a $100,000 home, although the time or effort required to sell the two homes may not differ substantially.
Similarly, commission rates do not appear to have changed as much as might be expected in response to rapidly rising home prices in recent years. Between 1998 and 2003, the national median sales price of existing homes, as reported by NAR, increased 35 percent, while inflation over the same period was 10 percent, leaving an increase of some 25 percent in the inflation-adjusted price of housing. According to REAL Trends, average commission rates fell from an estimated 5.5 percent in 1998 to an estimated 5.1 percent in 2003, a decrease of about 7 percent. Thus, with the increase in housing prices, the brokerage fee for selling a median-priced home increased even as the commission rate fell.

Some economists have suggested that uniformity in commission rates can lead brokers to compete on factors other than price in order to gain market share. For example, brokers might hire more agents in an effort to win more sellers’ listings. Brokers may also compete by spending more on advertising or offering higher levels of service to attract clients. Although some of these activities can benefit consumers, some economic literature suggests that such actions lead to inefficiency because brokerage services could be provided by fewer agents or at a lower cost. For example, although advertising can be effective in providing buyers and sellers with information about broker services, the consumer benefit from brokers’ expenditures on advertising or promotions aimed at acquiring listings may be less than their cost to the broker.

To the extent that commission rates may have declined slightly in recent years, the change may be the result in part of rapidly rising home prices, which have generated higher brokerage industry revenues even with lower commission rates. However, competition from increasing numbers of discount, fee-for-service, and other nontraditional brokerage models may have also contributed to the decline. These nontraditional models typically offer lower fees, and although they currently represent only about 2 percent of the market, they may be putting some downward pressure on the fees charged by traditional brokerages.

Factors related to the cooperation among brokers facilitated by MLSs, some brokers’ resistance to discounters, and consumer attitudes may inhibit price competition within the real estate brokerage industry. While MLSs provide important benefits to consumers by aggregating data on homes for sale and facilitating brokers’ efforts to bring buyers and sellers together, the cooperative nature of the MLS system can also in effect discourage brokers from competing with one another on price. Because participation in an MLS, in areas where one exists, is widely considered essential to doing business, brokerage firms may have an incentive to adopt practices that comply with MLS policies and customs.

As previously noted, MLSs facilitate cooperation in part by enabling brokers to share information on the portion of the commission that sellers’ brokers are offering to buyers’ brokers. In the past, some MLSs required participating brokers to charge standard commission rates, but this practice ended after the Supreme Court ruled, in 1950, that an agreement to fix minimum prices was illegal under federal antitrust laws. Subsequently, some MLSs adopted suggested fee schedules, but this too ended after DOJ brought a series of antitrust actions in the 1970s alleging that this practice constituted price fixing.
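The 1998-2003 comparison above is straightforward arithmetic, and a short sketch makes the mechanics explicit. The base price is normalized to 100 because only the percentage changes matter; all other figures are the estimates cited in the paragraph above.

```python
# Worked example of the claim above: between 1998 and 2003, home prices
# rose faster than commission rates fell, so the dollar fee on a
# median-priced home increased even as the rate declined.

base_price = 100.0               # 1998 median sale price, normalized to 100
price_2003 = base_price * 1.35   # 35 percent nominal increase (NAR)
inflation = 1.10                 # 10 percent cumulative inflation

# Inflation-adjusted price increase (~23 percent; the report's "some
# 25 percent" approximates this by subtracting the two percentages).
real_increase = price_2003 / (base_price * inflation) - 1
print(f"Real price increase:    {real_increase:+.1%}")

rate_1998, rate_2003 = 0.055, 0.051   # REAL Trends commission estimates
fee_1998 = base_price * rate_1998
fee_2003 = price_2003 * rate_2003
print(f"Commission rate change: {rate_2003 / rate_1998 - 1:+.1%}")  # about -7%
print(f"Dollar fee change:      {fee_2003 / fee_1998 - 1:+.1%}")    # about +25%
```

The example shows why the roughly 7 percent decline in rates was far too small to offset a 35 percent rise in prices.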
Today, MLSs no longer establish standard commission rates or recommend how commissions should be divided among brokers. However, MLS listings do show how much sellers’ brokers will pay other brokers for cooperating in a sale, according to industry participants. When choosing among comparable homes for sale, brokers have a greater incentive—all else being equal—to show prospective buyers homes that offer other brokers the prevailing commission rate before homes that offer a lower rate. Therefore, even without formal policies to maintain uniform rates, individual brokers’ reliance on the cooperation of other brokers to bring buyers to listed properties may help maintain a standard commission rate within a local area, at least for buyers’ brokers.

Traditional brokers may discourage price competition by resisting cooperation with brokers and firms whose business models depart from charging conventional commission rates, according to several industry analysts and participants we spoke with. A discount broker may advertise a lower commission rate to attract listings, but the broker’s success in selling those homes, and in attracting additional listings in the future, depends in part on other brokers’ willingness to cooperate (by showing the homes to prospective buyers) in the sale of those listings. Some discount full-service and discount limited-service brokerage firms we interviewed said that other brokers had refused to show homes listed by discounters. In addition, traditional brokers may in effect discourage discount brokers from cooperating in the sale of their listings by offering discounters a lower buyer’s broker commission than the prevailing rate offered to other brokers. This practice can make it more difficult for discount brokers to recruit new agents, because agents may earn more working for a broker who receives the prevailing commission from other brokers. Some traditional full-service brokers have argued that discount brokers often do less of the work required to complete the transaction and, thus, deserve a smaller portion of the seller’s commission. Representatives of discount brokerages told us they believed that reduced commission offers are in effect “punishment” for offering discounts to sellers and are intended as signals to other brokers to conform to the typical pricing in their markets.

Pressure from consumers for lower brokerage fees appears to be limited, although it may be increasing, according to our review of economics literature and to several industry analysts and participants. Consumers may accept a commission rate of about 6 percent as an expected cost of selling a home, in part because that has been the accepted pricing model for so long, and some consumers may not know that rates can be negotiated. Buyers may also have little concern about commission rates because sellers directly pay the commissions. Sellers may be reluctant to reduce the portion of the commission offered to buyers’ brokers because doing so can reduce the likelihood that their home will be shown. In addition, home sellers who have earned large profits as housing prices have climbed in recent years may have been less sensitive to the price of brokerage fees. However, some brokers and industry analysts noted that the growth of firms offering lower commissions or flat fees has made an increasing number of consumers aware that there are alternatives to traditional pricing structures and that commission rates are negotiable.
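The showing-order incentive described at the start of this discussion can be made concrete with a stylized sketch: a buyer’s broker ranking comparable listings by the commission each offers to the buying side. The listings, prices, and rates below are invented for illustration only.

```python
# Stylized illustration of the showing-order incentive described above:
# a buyer's broker choosing among comparable listings may, all else being
# equal, show higher-commission listings first. All data are invented.

listings = [
    {"address": "12 Elm St",  "price": 300_000, "buyer_broker_rate": 0.030},
    {"address": "34 Oak Ave", "price": 298_000, "buyer_broker_rate": 0.020},
    {"address": "56 Pine Rd", "price": 302_000, "buyer_broker_rate": 0.025},
]

# Comparable homes, ordered by the commission offered to the buyer's broker.
for home in sorted(listings, key=lambda h: h["buyer_broker_rate"], reverse=True):
    fee = home["price"] * home["buyer_broker_rate"]
    print(f'{home["address"]:12s} rate={home["buyer_broker_rate"]:.1%} fee=${fee:,.0f}')
```

In this stylized ranking, the listing offering 2.0 percent is shown last even though it is the least expensive home, which is the mechanism the discount brokers quoted above describe as “punishment.”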
Although state laws and regulations related to real estate licensing can protect consumers, DOJ and FTC have expressed concerns that some of these laws and regulations may also unnecessarily hinder competition among brokers and limit consumer choice. At least 14 states appear, by law or regulation, to prohibit real estate brokers from giving consumers rebates on commissions or to place restrictions on this practice. Proponents say such laws and regulations help ensure that consumers choose brokers on the basis of the quality of service as well as price, rather than just on the rebate being offered. Opponents of antirebate provisions argue that such restrictions serve only to limit choices for consumers and to discourage price competition by preventing brokers from offering discounts. Opponents also note that offering a rebate is one of the few ways to reduce the effective price of buyer brokerage services, since commissions are typically paid wholly by the seller.

In March 2005, DOJ’s Antitrust Division filed suit against the Kentucky Real Estate Commission, arguing that the commission’s administrative regulation banning rebates violated federal antitrust laws. In its complaint, DOJ argued that the regulation unreasonably restrained competition to the detriment of consumers, making it more difficult for them to obtain lower prices for brokerage services. In July 2005, DOJ and the commission proposed a settlement agreement which, if approved by the court, would require the commission to cease enforcing its regulation prohibiting rebates and other inducements.

Ten states are considering or have passed legislation that requires brokers to provide a minimum level of service when they represent consumers. Such provisions generally require that when a broker agrees to act as a consumer’s exclusive representative in a real estate transaction, the broker must provide such services as assistance in delivering and assessing offers and counteroffers, negotiating contracts, and answering questions related to the purchase and sale process. Advocates of minimum service standards argue that they protect consumers by ensuring that brokers provide a basic level of assistance. Further, full-service brokers argue that such standards prevent them from having to unfairly shoulder additional work when the other party uses a limited-service broker. Opponents of these standards argue that they restrict consumer choice and raise costs by impeding brokerage models that offer limited services for a lower price.

In April and May 2005, DOJ wrote to state officials in Oklahoma, and DOJ and FTC jointly wrote to officials in Alabama, Missouri, and Texas, discouraging adoption of these states’ proposed minimum service laws and regulations. The letters argued that the proposed standards in these states would likely harm consumers by preventing brokers from offering certain limited-service options and therefore requiring some sellers to buy brokerage services they would otherwise choose to perform themselves. They also cited a lack of evidence that consumers have been harmed by limited-service brokerage. Despite the concerns raised by DOJ and FTC, the governors in all 4 states subsequently signed minimum service standards into law.

Similarly, while state licensing rules for real estate brokers and agents may ensure standards of quality that protect consumers, these rules may also restrict consumers’ ability to choose among services and prices, ultimately reducing competition.
For example, in 2004, a federal district court found unconstitutional a California real estate licensing law that required the operator of a for-sale-by-owner Web site to obtain a brokerage license in order to advertise property listings without providing any additional brokerage services. The court found that the law impermissibly differentiated between publications displaying the same basic content, noting that newspapers were not required under the law to obtain a brokerage license simply to display property listings on their Web sites.

The Internet has increased consumers’ access to information about properties for sale and has facilitated new approaches to real estate transactions. Many brokers post information on their Web sites—in varying degrees of detail—on properties they have contracted to sell, enabling consumers to obtain such information without consulting a broker. The Internet also has fostered the creation or expansion of a number of Internet-oriented firms that provide real estate brokerage or related services, including discount brokers and broker referral services. Whether the Internet will be more widely used in real estate brokerage depends in part on the extent to which listing information is widely available. Like discount brokerages, Internet-oriented brokerage firms, especially those offering discounts, may also face resistance from traditional brokers and may especially be affected by state laws that prohibit them from offering rebates to consumers. In addition, certain factors—such as the lack of a uniform sales contract—may inhibit the use of the Internet for accomplishing the full range of activities needed for real estate transactions.

The Internet allows consumers direct access to listing information that has traditionally been available only from brokers. Before the Internet was widely used to advertise and display property listings, MLS data (which comprise the vast majority of all listings) were compiled in an “MLS book” that contained information on the properties listed for sale with MLS-member brokers in a given area. In order to view the listings, buyers generally had to use a broker, who provided copies of listings that met the buyer’s requirements via hard copy or fax. Today, information on properties for sale—either listed on an MLS or independently, such as for-sale-by-owner properties—is routinely posted on Web sites, often with multiple photographs or virtual tours. For example, NAR’s Realtor.com Web site features more than 2 million properties listed with MLSs around the country, and most brokers also maintain their own Web sites with information on properties for sale in their area. Buyers may also search for non-MLS-listed properties on the Web sites of companies that help owners market their properties themselves. Thus, the Internet has allowed buyers to perform much of the search and evaluation process independently, before contacting a broker.

Sellers of properties can also benefit from the Internet because it can give their listings more exposure to buyers. For example, according to NAR, Realtor.com—which provides information on approximately 95 percent of all homes listed with MLSs around the country—had 6.2 million unique visitors in February 2005. Sellers who choose to sell their homes without the assistance of a broker can advertise their properties on a multitude of “for-sale-by-owner” Web sites.
Sellers may also use the Internet to research suitable asking prices for their homes by comparing the attributes of their houses with others listed in their area.

Despite the more active participation of some buyers and sellers in the transaction process, some industry analysts and participants noted that because of the complexity of real estate transactions, some buyers and sellers will always desire the assistance of a broker to help them navigate the process. Unlike transactions that can now be completed entirely on the Internet—such as purchasing airline tickets or trading securities—real estate transactions are likely to continue to involve at least some in-person services for the foreseeable future.

Although Internet-oriented brokerages and related firms represent only a small portion of the real estate brokerage market at present, the Internet has made different service and pricing options more widely available to consumers. Among these options are full-service and limited-service discount brokerages, information and referral companies, and alternative listing Web sites.

Full-service discount brokerages offer buyers and sellers full-service real estate brokerage services—including listing properties in the MLS, conducting open houses, negotiating contracts, and assisting with closings—but advertise lower than traditional commissions, for example between 3 percent and 4.5 percent. These types of brokerages existed before widespread use of the Internet, but many have gained exposure and become more viable as a result of the Internet. In addition, by posting listings online, displaying photographs and virtual tours of homes for sale, and communicating with buyers and sellers by e-mail, some of these companies say that they have been able to cut brokerage costs, allowing them to offer rebates to buyers or discounted commissions to sellers.

Limited-service discount brokerages provide fewer services than full-service brokerages but also offer lower commission rates or offer their services for flat fees. For example, some firms market a full array of brokerage services for a reduced commission but do not list homes in the MLS. Other firms charge a flat fee for marketing and advertising homes and, for additional fees, will list a property in the MLS and show the home to prospective buyers. Although these types of discount brokers have existed since at least the 1970s, industry participants told us that the Internet has allowed them to grow in number and size in recent years, in part because they can market their services to a larger population of buyers and sellers.

Information and referral companies, including some that are licensed real estate brokers, provide resources for buyers and sellers—such as home valuation tools and access to property listings—and make referrals of those consumers to local brokers. Some of these companies charge referral fees to brokers and then rebate a portion of that fee back to buyers and sellers. The Internet enables these companies to reach potential customers efficiently and to offer them services and access to brokers.

Alternative listing Web sites offer alternatives to the MLS, allowing sellers who want to sell their homes themselves to advertise their properties to buyers and giving buyers another source of information on homes for sale. These alternative listing sites include the Web sites of local newspapers, Craig’s List, and “for-sale-by-owner” Web sites.
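As a rough comparison of the pricing models just described, the sketch below computes a seller’s brokerage cost on a $300,000 sale under a traditional commission, a discounted commission, and a flat-fee arrangement. The 4.5 percent rate and the flat fees are illustrative assumptions within the ranges mentioned above, and the flat-fee case omits any commission the seller might still offer a buyer’s broker.

```python
# Illustrative seller-cost comparison across the brokerage models described
# above. Rates and fees are assumptions for illustration, not firms' prices.

def traditional(price):
    """Full-service brokerage at a prevailing 6 percent commission."""
    return price * 0.06

def discount_full_service(price):
    """Full-service brokerage at a discounted 4.5 percent commission."""
    return price * 0.045

def flat_fee(price):
    """Flat marketing fee plus an a-la-carte MLS listing charge; the
    seller handles remaining tasks and may still owe a buyer's broker
    commission, which this sketch omits."""
    return 500 + 300

price = 300_000
for model in (traditional, discount_full_service, flat_fee):
    print(f"{model.__name__:22s} ${model(price):>10,.2f}")
# traditional            $ 18,000.00
# discount_full_service  $ 13,500.00
# flat_fee               $    800.00
```

The large gap between the flat-fee and commission models helps explain both the appeal of limited-service brokerage to some sellers and the resistance it has drawn from traditional brokers.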
These services, which generally do not provide buyers and sellers with the assistance of a licensed broker, are limited to providing consumers with a venue for advertising homes and viewing properties for sale.

Several factors could limit the extent to which the Internet is used in real estate transactions. A key factor is the extent to which information about properties listed in an MLS is widely available. Currently, buyers may view MLS-listed properties on many Web sites, including broker and MLS Web sites and Realtor.com.

NAR has considered a policy on Virtual Office Web sites (VOW) that would allow brokers to selectively exclude their MLS listings from being displayed on certain other brokers’ Web sites and would prohibit certain types of companies, such as information and referral companies, from operating VOWs. Proponents of this policy argue that listings are the work product, and thus the property, of the selling broker, who should have control over how the listings are used. Proponents maintain that brokers should be able to prevent certain companies—such as information and referral companies—from using their listings simply to earn referral fees. NAR and others have also argued that freely posting MLS data—such as addresses, descriptions of properties, and property tax information—on the Internet may compromise the security and privacy of their clients.

Opponents of the VOW policy argue that it is anticompetitive because it would unfairly limit Internet-oriented brokers’ ability to provide their clients with access to MLS listings through their Web sites. They argue that NAR already has policies on the appropriate distribution of MLS information and that those policies should treat information disseminated via the Internet no differently than information distributed via traditional bricks-and-mortar brokerages. They also note that measures can be taken to address security and privacy concerns related to MLS listings on the Internet, such as restricting the number of listings that result from an online search. Some opponents also expressed concern that some Internet-oriented brokers would not be able to compete if—in a market dominated by a single player—they were selectively excluded from displaying that player’s listings.

Even with few restrictions on the availability of information about properties for sale, Internet-oriented brokerage firms may face other challenges. First, Internet-oriented brokers we spoke with described resistance, similar to that previously described, involving some traditional brokerages that refused to show the Internet-oriented brokerages’ listed properties or offered them buyers’ brokers commissions that were less than those offered to other brokers. However, the online availability of listing information may discourage such behavior by enabling buyers to more easily detect whether a broker is avoiding other brokers’ listings that are of interest. Second, some Internet-oriented companies said that state antirebate laws and regulations could affect them disproportionately, since their business models are often built around such rebates. Finally, certain factors may inhibit the use of the Internet for accomplishing the full range of activities needed for real estate transactions. For example, some companies told us that they would like to make greater use of the Internet to facilitate the execution of the contract used in the purchase and sale of a property.
However, they said that there is no single, uniform sales contract for residential real estate, and state laws vary with respect to which disclosures must accompany a sales contract. They also said that state laws vary in their requirements for physical copies of signed contracts, attorneys’ involvement in signing a contract, and the circumstances under which a contract may be rescinded. As a result, it would be difficult to develop an online platform that could be used nationwide for residential real estate contracts. Further, industry participants told us that no uniform technology currently exists to facilitate the assistance that brokers often provide in other aspects of the real estate transaction, such as coordinating inspections, appraisals, financing, title searches, and settlements.

Our review of certain state statutes and regulations showed that approximately 30 states potentially authorize state-chartered banks or their operating subsidiaries to engage in some real estate brokerage activities. However, we also found that because only a small number of banks in these states appeared to have taken advantage of this authority, the effect on competition and consumers was likely minimal.

We reviewed the state statutes and regulations that NAR and the Conference of State Bank Supervisors, using the broadest interpretations, identified as potentially authorizing banks’ brokerage activity. While many of these laws are ambiguous and subject to interpretation by state regulators, it appears that at least 5 states and the District of Columbia provide relatively clear authority for banks or their subsidiaries to engage in real estate brokerage. An additional 8 states permit involvement in other real-estate-related activities or in unspecified activities that might be approved by the state. At least 7 states could potentially permit banks to conduct real estate activities as an incidental power, an activity closely related to banking, or an activity that is financial in nature. Many of the remaining states could potentially allow state-chartered banks to conduct real estate activities to the extent that national banks or other federal depository institutions are allowed to do so.

The exact number of state-chartered banks that engage in real estate brokerage is unknown because not all state regulators track such activity. However, available data and interviews with real estate, banking, and state regulatory officials suggest that such activity is very limited among the approximately 5,700 state-chartered banks nationwide. In separate surveys in 2001, NAR and the Conference of State Bank Supervisors identified only eight states where state-chartered banks had engaged in at least some real estate brokerage activity. More recent data were not available, but regulators and industry officials told us that they doubted that this activity had expanded significantly since 2001. They noted that real estate brokerage is not typically part of a bank’s business model and that banks in small communities may be reluctant to compete with local real estate brokers that may be clients of the banks.

We spoke with officials from banks engaged in real estate brokerage, bank regulators, and real estate industry representatives in Iowa and Wisconsin—two states identified as having the most banks involved in real estate brokerage in 2001.
The seven such banks we identified in these states were all in small communities that had few or no other real estate brokers, and some of these banks noted that their presence provided an additional option for local residents. None of the banks we spoke with offered brokerage services that were different from those offered by traditional brokerages, and none offered discount brokerage services. Most of the bank officials said that real estate brokerage was not a large portion of their business. They said their primary goal was not to link brokerage customers to the bank’s mortgage financing and added that most of their brokerage customers in fact obtained their mortgages outside of the bank.

As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the report date. At that time, we will send copies to the Secretary of Housing and Urban Development, the Attorney General, and the Chairman of the Federal Trade Commission. We will make copies available to others upon request. This report will also be available at no charge on GAO’s Web site at http://www.gao.gov.

Please contact me at (202) 512-8678 or woodd@gao.gov if you or your staffs have any questions about this report. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix I.

In addition to the contact named above, Jason Bromberg, Assistant Director; Tania Calhoun; Emily Chalmers; Evan Gilman; Christine Houle; Austin Kelly; Cory Roman; and Julianne Stephens made key contributions to this report.

This bibliography includes articles cited in our report and selected other sources from our review of literature on the structure and competitiveness of the residential real estate brokerage industry.

Anglin, P. and R. Arnott. “Are Brokers’ Commission Rates on Home Sales Too High? A Conceptual Analysis.” Real Estate Economics, vol. 27, no. 4 (1999): 719-749.
Arnold, M.A. “The Principal-Agent Relationship in Real Estate Brokerage Services.” Journal of the American Real Estate and Urban Economics Association, vol. 20, no. 1 (1992): 89-106.
Bartlett, R. “Property Rights and the Pricing of Real Estate Brokerage.” The Journal of Industrial Economics, vol. 30, no. 1 (1981): 79-94.
Benjamin, J.D., G.D. Jud, and G.S. Sirmans. “Real Estate Brokerage and the Housing Market: An Annotated Bibliography.” Journal of Real Estate Research, vol. 20, no. 1/2 (2000): 217-278.
----- “What Do We Know about Real Estate Brokerage?” Journal of Real Estate Research, vol. 20, no. 1/2 (2000): 5-30.
Carney, M. “Costs and Pricing of Home Brokerage Services.” AREUEA Journal, vol. 10, no. 3 (1982): 331-354.
Crockett, J.H. “Competition and Efficiency in Transacting: The Case of Residential Real Estate Brokerage.” AREUEA Journal, vol. 10, no. 2 (1982): 209-227.
Delcoure, N. and N.G. Miller. “International Residential Real Estate Brokerage Fees and Implications for the US Brokerage Industry.” International Real Estate Review, vol. 5, no. 1 (2002): 12-39.
Epley, D.R. and W.E. Banks. “The Pricing of Real Estate Brokerage for Services Actually Offered.” Real Estate Issues, vol. 10, no. 1 (1985): 45-51.
Federal Trade Commission. The Residential Real Estate Brokerage Industry, vol. 1 (Washington, D.C.: 1983).
Goolsby, W.C. and B.J. Childs. “Brokerage Firm Competition in Real Estate Commission Rates.” The Journal of Real Estate Research, vol. 3, no. 2 (1988): 79-85.
Hsieh, C. and E. Moretti. “Can Free Entry Be Inefficient? Fixed Commissions and Social Waste in the Real Estate Industry.” The Journal of Political Economy, vol. 111, no. 5 (2003): 1076-1122.
Jud, G.D. and J. Frew. “Real Estate Brokers, Housing Prices, and the Demand for Housing.” Urban Studies, vol. 23, no. 1 (1986): 21-31.
Knoll, M.S. “Uncertainty, Efficiency, and the Brokerage Industry.” Journal of Law and Economics, vol. 31, no. 1 (1988): 249-263.
Larsen, J.E. and W.J. Park. “Non-Uniform Percentage Brokerage Commissions and Real Estate Market Performance.” AREUEA Journal, vol. 17, no. 4 (1989): 422-438.
Mantrala, S. and E. Zabel. “The Housing Market and Real Estate Brokers.” Real Estate Economics, vol. 23, no. 2 (1995): 161-185.
Miceli, T.J. “The Multiple Listing Service, Commission Splits, and Broker Effort.” AREUEA Journal, vol. 19, no. 4 (1991): 548-566.
----- “The Welfare Effects of Non-Price Competition Among Real Estate Brokers.” Journal of the American Real Estate and Urban Economics Association, vol. 20, no. 4 (1992): 519-532.
Miceli, T.J., K.A. Pancak, and C.F. Sirmans. “Restructuring Agency Relationships in the Real Estate Brokerage Industry: An Economic Analysis.” Journal of Real Estate Research, vol. 20, no. 1/2 (2000): 31-47.
Miller, N.G. and P.J. Shedd. “Do Antitrust Laws Apply to the Real Estate Brokerage Industry?” American Business Law Journal, vol. 17, no. 3 (1979): 313-339.
Munneke, H.J. and A. Yavas. “Incentives and Performance in Real Estate Brokerage.” Journal of Real Estate Finance and Economics, vol. 22, no. 1 (2001): 5-21.
Owen, B.M. “Kickbacks, Specialization, Price Fixing, and Efficiency in Residential Real Estate Markets.” Stanford Law Review, vol. 29, no. 5 (1977): 931-967.
Schroeter, J.R. “Competition and Value-of-Service Pricing in the Residential Real Estate Brokerage Market.” Quarterly Review of Economics and Business, vol. 27, no. 1 (1987): 29-40.
Sirmans, C.F. and G.K. Turnbull. “Brokerage Pricing under Competition.” Journal of Urban Economics, vol. 41, no. 1 (1997): 102-117.
Turnbull, G.K. “Real Estate Brokers, Nonprice Competition and the Housing Market.” Real Estate Economics, vol. 24, no. 3 (1996): 293-316.
Yavas, A. “Matching of Buyers and Sellers by Brokers: A Comparison of Alternative Commission Structures.” Real Estate Economics, vol. 24, no. 1 (1996): 97-112.
Yinger, J. “A Search Model of Real Estate Broker Behavior.” The American Economic Review, vol. 71, no. 4 (1981): 591-605.
Zumpano, L.V. and D.L. Hooks. “The Real Estate Brokerage Market: A Critical Reevaluation.” AREUEA Journal, vol. 16, no. 1 (1988): 1-16.
Consumers paid an estimated $61 billion in residential real estate brokerage fees in 2004. Because commission rates have remained relatively uniform—regardless of market conditions, home prices, or the effort required to sell a home—some economists have questioned the extent of price competition in the residential real estate brokerage industry. Further, while the Internet offers time and cost savings to the process of searching for homes, Internet-oriented brokerage firms account for only a small share of the brokerage market. Finally, there has been ongoing debate about the potential competitive effects of bank involvement in real estate brokerage.

GAO was asked to discuss (1) factors affecting price competition in the residential real estate brokerage industry, (2) the status of the use of the Internet in residential real estate brokerage and potential barriers to its increased use, and (3) the effect on competition and consumers of residential real estate brokerage by state-chartered banks in states that permit this practice.

The residential real estate brokerage industry has competitive attributes, but its competition appears to be based more on nonprice variables—such as quality, reputation, or level of service—than on brokerage fees, according to a review of the academic literature and interviews with industry analysts and participants. One potential cause of the industry’s apparent lack of price variation is the use of multiple listing services (MLS), which facilitates cooperation among brokers in a way that can benefit consumers but may also discourage participating brokers from deviating from conventional commission rates. For instance, an MLS listing gives brokers information on the commission that will be paid to the broker who brings the buyer to that property. This practice potentially creates a disincentive for home sellers or their brokers to offer less than the prevailing rate, since buyers’ brokers may show high-commission properties first. Some state laws and regulations may also affect price competition, such as those prohibiting brokers from giving clients rebates on commissions. Although such laws and regulations can protect consumers, the Department of Justice and the Federal Trade Commission have argued that they may also unnecessarily limit competition and reduce consumers’ choices.

The Internet has changed the way consumers look for real estate and has facilitated the creation and expansion of alternatives to traditional brokers. A variety of Web sites allows consumers to access property information that once was available only by contacting brokers directly. The Internet also has fostered the growth of nontraditional residential real estate brokerage models, including discount brokers and broker referral services. However, industry participants and analysts cited several obstacles to more widespread use of the Internet in real estate transactions, including restrictions on listing information on Web sites, some traditional brokers’ resistance to cooperating with nontraditional firms, and certain state laws and regulations.

Although about 30 states potentially authorize state-chartered banks or their operating subsidiaries to engage in some form of residential real estate brokerage, few banks in these states appear to have done so.
GAO contacted seven banks engaged in brokerage in two states and found that they were located in small communities with few other brokerage options, and that their brokerage services did not differ significantly from those of other local real estate brokers. In general, because residential real estate brokerage by state-chartered banks appears to be so limited, its effect on competition and consumers has likely been minimal.